DeepMind co-founder Mustafa Suleyman said the US government should restrict Nvidia’s supply of artificial intelligence (AI) chips to companies that commit to a set of ethical standards. He argues this would limit the development of AI to responsible companies while regulators struggle to keep pace.
He points to a recent agreement signed by several US companies to develop AI responsibly as a way forward, saying the move makes regulation of the space an immediate priority rather than a problem deferred to the future.
AI Companies Must Proactively Pursue Ethics
Suleyman said: “We’ll want to evaluate what the potential consequences are rather than doing it after the fact.”
He argues the agreement will have more immediate benefits than the passage of primary legislation. Suleyman’s AI company, Inflection, and Google recently announced tools that expand AI’s capabilities beyond common large-language models.
Google’s project, Gemini, aims to automate tasks such as an employee booking leave on a company’s human resources platform. OpenAI has announced similar integrations that enable AI to handle more complex tasks.
These new agents would weigh several candidate approaches before deciding how to solve real-world problems. DeepMind, the company Suleyman co-founded, has since been absorbed into Google’s AI team and is also exploring more advanced reasoning capabilities akin to AlphaGo, its system that beat a world Go champion.
Rights Groups Oppose Industry-Led Efforts
Advanced AI has worried regulators who feel the technology is moving too fast. Earlier this year, the UK government’s chief AI advisor, Matt Clifford, said regulators must act quickly to pre-empt catastrophe.
Attempts to regulate the space have been few and far between. China has taken a blunt approach, banning platforms from using non-socialist data, while the European Union has proposed a draft regulation that has been characterized as too punitive.
The US attempt to regulate the space appears to have started with banning chip exports to China and other territories. Rights groups have repeatedly argued that industry-led regulation of the kind Suleyman proposes is designed to benefit the industry rather than society.
Sarah Myers West of the AI Now Institute, a research body that studies AI’s impacts on society, said earlier this year:
“They’re essentially being given a path to experiment in the wild with systems that we already know are capable of causing widespread harm to the public.”
Elon Musk previously called for a halt to all AI training until sound governance is developed.
In adherence to the Trust Project guidelines, BeInCrypto is committed to unbiased, transparent reporting. This news article aims to provide accurate, timely information. However, readers are advised to verify facts independently and consult with a professional before making any decisions based on this content.