Google, OpenAI, and Meta have agreed to halt development of any AI model whose risks they cannot contain. The companies signed up for the “AI Safety Commitments” on Tuesday at the AI Seoul Summit, co-hosted by the UK and South Korea.
Also read: UK and Republic of Korea Collaborate on AI Summit
It’s a world first to have so many leading AI companies from so many parts of the globe all agreeing to the same commitments on AI safety.
— UK’s Prime Minister, Rishi Sunak
16 AI Firms Agree To AI Safety Commitments
Per the report, a total of 16 AI companies agreed to the safety pledge, spanning the US, China, and the Middle East.
Microsoft, Amazon, Anthropic, Samsung Electronics, and Chinese developer Zhipu.ai are also among the companies that agreed to the safety standards.
Also read: Alibaba and Tencent Invest $342 Million in AI Startup Zhipu
The AI safety pledge requires each company to publish its own safety framework before the next AI Action Summit, to be held in France in early 2025. The frameworks will explain how the companies determine the risks of their models and which risks are “deemed intolerable.”
AI Companies Will Pull the Plug on Risky AI Models
In the most extreme cases, the firms will “not develop or deploy a model or system at all” if the risks cannot be contained, according to the report.
The true potential of AI will only be unleashed if we’re able to grip the risks. It is on all of us to make sure AI is developed safely.
— Michelle Donelan, UK’s Technology Secretary
In July 2023, the US government made a similar effort to address the risks and benefits of AI. President Joe Biden met with Google, Microsoft, Meta, OpenAI, Amazon, Anthropic, and Inflection to discuss AI safeguards that ensure their AI products are safe before being released.
AI Safety Debate Heats up Over OpenAI
The conversation on AI safety has been heating up in recent months, particularly around AGI (artificial general intelligence), systems that aim to match human-like general intelligence.
OpenAI found itself at the center of this conversation last week after co-founder Ilya Sutskever and top executive Jan Leike resigned from the company. The two led OpenAI’s Superalignment team, which was set up to prevent the company’s models from going rogue.
Also read: Another OpenAI Exec, Jan Leike Quits
In his post, Leike said that “over the past years, safety culture and processes have taken a backseat to shiny products” at the company.
Leike added that “OpenAI must become a safety-first AGI company,” urging the company to prioritize preparing for AGI’s implications as best it can so that AGI benefits all of humanity.
Cryptopolitan reporting by Ibiam Wayas