OpenAI Disbands AI Risk Team (Letting ChatGPT Make Decisions?)
#OpenAI , one of the pioneers of generative artificial intelligence technology, made headlines in recent months with the dismissal of CEO Sam Altman, the chaos that followed, and his subsequent return. Although Altman was portrayed at the time as the champion of open artificial intelligence projects, the employee support he received was driven largely by pending investments and the payouts staff stood to receive. In the end, Altman returned, but OpenAI emerged with a structure very different from its original form.
OpenAI started out with a dual structure: one wing would operate as a nonprofit developing artificial intelligence and making it available to everyone, while the other would be a commercial company generating revenue through business agreements. In the reorganization, the nonprofit's representatives left the board of directors and were replaced by company executives and investor representatives. These changes continue to reshape the firm's internal structure.
Most recently, it was announced that Ilya Sutskever, one of OpenAI's co-founders, had left the company, and that the Superalignment team, established in 2023, had been disbanded. This team studied the possible existential risks of artificial intelligence more advanced than humans, including whether it could turn against humanity. The unit's work is now being distributed among other teams.
Jan Leike, a former Superalignment lead who recently left #OpenAI , explained his reasons for departing at length on #X . While Leike said he loved his team and believed they were doing important work, he stated that he could not agree with OpenAI's management on the company's core priorities. Leike added that his team was not allocated the resources it needed. According to Leike, the company has pushed safety into the background.