OpenAI says it’s hiring a head safety executive to mitigate AI risks
OpenAI is seeking a new “head of preparedness” to oversee the company’s safety strategy amid growing concerns about the potential misuse of artificial intelligence tools. The role carries a salary of $555,000 and involves leading OpenAI’s safety systems team, which works to ensure that AI models are developed and deployed responsibly. The head of preparedness will also be responsible for identifying risks and developing strategies to mitigate potential harm from what OpenAI calls “frontier capabilities that create new risks of severe harm.”
In a recent job posting on its website, OpenAI highlighted the critical nature of the role, emphasizing the need for immediate engagement and the challenges ahead. CEO Sam Altman underscored the urgency of the position in a post on X, stating, “This is a critical role at an important time; models are improving quickly and are now capable of many great things, but they are also starting to present some real challenges.”
The company’s focus on safety has grown more pronounced as concerns mount over the impact of artificial intelligence on mental health. Recent allegations have linked OpenAI’s chatbot, ChatGPT, to instances of suicide. In response, OpenAI has implemented new safety protocols for users under 18 and is working to improve its technology’s ability to recognize and respond to signs of mental or emotional distress.
Beyond mental health, there are also growing worries about cybersecurity threats posed by artificial intelligence. Samantha Vinograd, a former Homeland Security official, raised these concerns on a recent episode of “Face the Nation with Margaret Brennan,” saying that AI has made various types of threats more credible and effective.
Altman acknowledged the evolving safety risks associated with AI in his post on X, noting the rapid advancement of AI models and their capabilities. He stressed the need for a deeper understanding of how these capabilities could be abused and how to mitigate potential downsides while still reaping the benefits of AI technology.
To qualify for the position of head of preparedness at OpenAI, applicants should possess expertise in machine learning, AI safety, evaluations, security, or related risk domains. Experience in designing and executing rigorous evaluations for complex technical systems is also required.
OpenAI first announced the establishment of a preparedness team in 2023, an early step in addressing safety concerns in the fast-evolving field of artificial intelligence.
The search for a head of preparedness reflects OpenAI’s stated commitment to prioritizing safety and ethical considerations in the development of AI technology. As AI continues to advance, robust safety measures will be needed so that the benefits of the technology do not come at the expense of the well-being of individuals and society.