# AI Abuse & Threat Intelligence Analyst
## In this role, you will:

- Drive insights and recommendations through comprehensive qualitative and quantitative analysis of emerging risks in AI and online platforms, translating complex patterns into actionable safety insights.
- Collaborate on safety reporting tools by contributing technical expertise in Python (essential) and SQL (desirable), helping to scale and automate safety insights for diverse internal and external stakeholders.
- Support safety enhancements on OpenAI’s platforms, leveraging experience with React and TypeScript to help build robust safety features.
- Conduct and evaluate targeted risk assessments and simulations, working with cross-functional teams to assess and mitigate AI misuse scenarios while ensuring insights are effectively communicated and actionable.
- Build tooling for internal and external users to help track abuse on first-party and third-party platforms.

## You might thrive in this role if you:

- Have experience performing in-depth qualitative and quantitative analysis to inform safety strategies, particularly for online content and AI applications.
- Possess technical skills in Python and modern front-end technologies such as React and TypeScript, with a track record of contributing to projects focused on risk assessment, safety reporting, or building systems that detect abuse at scale.
- Demonstrate a deep understanding of AI and generative AI technologies and their potential abuse scenarios, with a proactive approach to identifying and addressing emerging threats.
- Exhibit excellent communication skills, capable of synthesizing complex findings into clear, concise recommendations for varied audiences.