This week, the United States Federal Bureau of Investigation (FBI) revealed that two men suspected of bombing a fertility clinic in California last month had allegedly used artificial intelligence (AI) to obtain bomb-making instructions. The FBI did not disclose the name of the AI application in question.
This brings into sharp focus the urgent need to make AI safer. We are currently living in the “wild west” era of AI, in which companies are competing fiercely to build the fastest and most engaging AI systems. Each company wants to outdo its competitors and claim the top spot. This intense competition often leads to intentional or unintentional shortcuts, especially when it comes to safety.
Coincidentally, around the same time as the FBI’s disclosure, one of the so-called godfathers of modern AI, Canadian computer science professor Yoshua Bengio, launched a new nonprofit organization devoted to developing a new AI model specifically designed to be safer than other AI models, and to target those that cause social harm.
So what is Bengio’s new AI model? And will it actually protect the world from AI-facilitated harm?
An ‘honest’ AI
In 2018, Bengio, along with his colleagues Yann LeCun and Geoffrey Hinton, received the Turing Award for groundbreaking research they had published three years earlier on deep learning. A branch of machine learning, deep learning attempts to mimic the processes of the human brain by using artificial neural networks to learn from data and make predictions.
Bengio’s new nonprofit organization, LawZero, is developing “Scientist AI”. Bengio has said this model will be “honest and not deceptive” and will incorporate safety-by-design principles.
According to a preprint paper released online earlier this year, Scientist AI will differ from current AI systems in two key ways.
First, it can assess and communicate its confidence level in its answers, helping to reduce the problem of AI giving overly confident and incorrect responses.
Second, it can explain its reasoning to humans, allowing its conclusions to be evaluated and tested for accuracy.
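To make the first idea concrete, here is a minimal Python sketch (purely illustrative, not LawZero’s actual design) of an answer that carries its own confidence estimate and is hedged rather than asserted when that confidence is low. The `Answer` class, the `respond` function and the 0.8 threshold are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Answer:
    text: str          # the model's proposed answer
    confidence: float  # the model's own estimate that the answer is correct (0.0 to 1.0)

def respond(answer: Answer, threshold: float = 0.8) -> str:
    """State the answer only when confidence clears the threshold;
    otherwise admit uncertainty instead of sounding sure and being wrong."""
    if answer.confidence >= threshold:
        return f"{answer.text} (confidence: {answer.confidence:.0%})"
    return f"I'm not sure. My best guess: {answer.text} (confidence: {answer.confidence:.0%})"

# A low-confidence answer is flagged rather than stated as fact.
print(respond(Answer("The bridge opened in 1937.", confidence=0.55)))
```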
Interestingly, older AI systems had this feature. But in the rush for speed and new capabilities, many modern AI models cannot explain their decisions. Their developers have sacrificed explainability for speed.
Bengio also intends Scientist AI to act as a guardrail against harmful AI. It would monitor other, less reliable and potentially dangerous AI systems, essentially fighting fire with fire.
This may be the only viable solution for improving AI safety. Humans cannot properly monitor systems such as ChatGPT, which handle over a billion queries every day. Only another AI can manage this scale.
Using one AI system to monitor and test another isn’t just a science-fiction concept. It is common practice in research to compare and test the capabilities of different AI systems, as the sketch below illustrates.
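As a rough illustration of that monitoring pattern (not LawZero’s actual method), the Python sketch below shows one model vetting another’s output before it is released. Everything here is a hypothetical stand-in: the function names, the blocked-topic list and the keyword check. A real guardrail would rely on a trained safety model rather than a keyword screen.

```python
# Hypothetical guardrail pattern: one AI screens another AI's output
# before it reaches the user. The keyword check is a placeholder for
# a trained safety model; only the control flow is the point here.

BLOCKED_TOPICS = ("explosive synthesis", "weapon assembly")  # illustrative only

def untrusted_model(prompt: str) -> str:
    """Stand-in for a powerful but unverified AI system."""
    return f"Here is a detailed answer to: {prompt}"

def guardrail_model(prompt: str, draft: str) -> bool:
    """Stand-in for a monitoring AI. Returns True if the draft is safe to release."""
    text = (prompt + " " + draft).lower()
    return not any(topic in text for topic in BLOCKED_TOPICS)

def answer(prompt: str) -> str:
    draft = untrusted_model(prompt)
    return draft if guardrail_model(prompt, draft) else "Blocked by the safety monitor."

print(answer("How do plants grow?"))            # released
print(answer("Steps for explosive synthesis"))  # blocked
```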
Adding a ‘world model’
Large language models and machine learning are just small parts of today’s AI landscape.
Another key feature Bengio’s team is adding to Scientist AI is a “world model”, which brings certainty and explainability. Just as humans make decisions based on their understanding of the world, AI needs a similar internal model to function effectively.
The absence of a world model in current AI systems is clear.
One well-known example is the “hand problem”: most of today’s AI models can imitate the appearance of hands but cannot replicate natural hand movements, because they lack an understanding of the physics (a world model) behind them.
Another example is how models such as ChatGPT struggle with chess, failing to win games and even making illegal moves.
This is despite far simpler AI systems, which do contain a model of the “world” of chess, beating even the best human players.
These issues stem from the lack of a foundational world model in these systems, which are not inherently designed to model the dynamics of the real world.
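The chess case shows what an explicit world model looks like in code. The sketch below uses the python-chess library (an assumption for illustration, installable with `pip install python-chess`; it is not part of Scientist AI), which encodes the rules of the game directly, so an illegal move can be rejected outright rather than guessed at the way a pure text model would.

```python
import chess  # the library's Board object encodes the full rules of chess

board = chess.Board()  # a "world model" of the game: state plus legal dynamics

def try_move(uci: str) -> None:
    move = chess.Move.from_uci(uci)
    if board.is_legal(move):   # legality is checked against the rules, not guessed
        board.push(move)
        print(f"{uci}: played")
    else:
        print(f"{uci}: rejected as illegal")

try_move("e2e4")  # a standard opening move: accepted
try_move("e1e8")  # the king cannot jump across the board: rejected
```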
On the right track—but it will be bumpy
Bengio is on the right track, aiming to build safer, more trustworthy AI by combining large language models with other AI technologies.
However, his journey will not be smooth. LawZero’s US$30 million in funding is small compared with efforts such as the US$500 billion project announced by US President Donald Trump earlier this year to accelerate AI development.
Making LawZero’s task harder is the fact that Scientist AI, like any other AI project, needs huge amounts of data to be powerful, and most data are controlled by major tech companies.
There’s also an outstanding question: even if Bengio can build an AI system that does everything he says it can, how will it be able to control other systems that might be causing harm?
Still, this project, with talented researchers behind it, could spark a movement toward a future in which AI truly helps humans thrive. If successful, it could set new expectations for safe AI, motivating researchers, developers and policymakers to prioritize safety.
Perhaps if we had taken similar action when social media first emerged, we would have a safer online environment for young people’s mental health. And perhaps, if Scientist AI had already been in place, it might have prevented people with harmful intentions from accessing dangerous information through AI systems.