OpenAI has set up a team of AI engineers and experts that will focus on assessing, evaluating, and probing AI models to guard against the dangers posed by AI systems well before those dangers become a reality.
To address the potentially catastrophic risks associated with AI systems, OpenAI has announced the formation of a new team called “Preparedness.” The team, led by Aleksander Madry, director of MIT’s Center for Deployable Machine Learning, will focus on assessing, evaluating, and probing AI models to protect against various dangers posed by future AI systems.
OpenAI CEO Sam Altman has been vocal about his concerns regarding AI and its potential to lead to “human extinction.” The formation of the Preparedness team is a proactive step toward addressing these concerns.
The key responsibilities of the Preparedness team will include tracking and forecasting the risks associated with AI, ranging from its ability to deceive and manipulate humans, as seen in phishing attacks, to its capacity to generate malicious code. Some of the risk categories that Preparedness will examine may appear speculative, such as threats related to “chemical, biological, radiological, and nuclear” scenarios.
OpenAI acknowledges the importance of studying less obvious but grounded areas of AI risk. To engage the wider community in this effort, OpenAI has launched a competition soliciting ideas for risk studies. The top ten submissions will have a chance to win a $25,000 prize and an opportunity to work with the Preparedness team.
One contest question prompts participants to consider the “most unique, while still being probable, potentially catastrophic misuse of the model” when given unrestricted access to various AI models developed by OpenAI.
The Preparedness team’s mission extends beyond risk assessment. They will also work on formulating a “risk-informed development policy” that will guide OpenAI’s approach to evaluating AI models, monitoring tooling, risk mitigation actions, and governance structures throughout the model development process. This effort will complement OpenAI’s existing work on AI safety and cover both pre- and post-model deployment phases.
OpenAI emphasizes the potential benefits of highly capable AI systems but also acknowledges the increasingly severe risks they may pose. The establishment of the Preparedness team is driven by the belief that building the understanding and infrastructure needed to keep advanced AI systems safe is essential.
The unveiling of the Preparedness team coincides with a major UK government summit on AI safety. It follows OpenAI’s earlier announcement about forming a team to research and control emerging forms of “superintelligent” AI. The company and its leaders, including Sam Altman and Ilya Sutskever, are deeply committed to researching ways to limit and restrict AI whose intelligence surpasses that of humans, which they anticipate could become a reality within the next decade.