OpenAI commits to providing the U.S. AI Safety Institute with early access to its upcoming model

  • Post category: AI
  • Post last modified: August 1, 2024

Sam Altman revealed in a tweet that OpenAI will give the U.S. AI Safety Institute early access to its next model as part of its safety initiatives, and that the company has been working with the institute's consortium to advance the science of AI evaluations. The National Institute of Standards and Technology (NIST) formally established the Artificial Intelligence Safety Institute earlier this year, following Vice President Kamala Harris's 2023 announcement at the UK AI Safety Summit. According to NIST, the consortium aims to develop science-based, empirically backed guidelines and standards for AI measurement and policy, laying a foundation for AI safety worldwide.

Last year, OpenAI and DeepMind similarly pledged to share AI models with the UK government. TechCrunch notes growing concerns that OpenAI is prioritizing the development of more powerful AI models over safety. Speculation arose that Sam Altman’s brief removal from the company was due to safety and security concerns, though an internal memo attributed it to a communication breakdown.

In May, OpenAI confirmed it had disbanded its Superalignment team, which had been created to keep advanced AI systems safe as the company pushed further into generative AI. Shortly before that, team leads Ilya Sutskever and Jan Leike left the company; Leike said in a series of tweets that he had long disagreed with OpenAI's leadership over the company's priorities, and that "safety culture and processes have taken a backseat to shiny products." By the end of May, OpenAI had formed a new safety group led by board members including Altman, a structure that raised concerns about the company regulating itself.
