Professor Shalom Lappin advocates urgent global AI regulation, intellectual property reform, and workforce preparedness to ensure AI development serves the public good, not just corporate interests.
As artificial intelligence rapidly reshapes the world, global regulation of tech giants, reform of intellectual property rules, and planning for major shifts in the job market must be at the top of the agenda for policymakers worldwide.
That’s the message from AI expert Professor Shalom Lappin, who makes a powerful, research-backed case in his new book Understanding the Artificial Intelligence Revolution.
“The public domain and its citizens need to play a major role in determining the framework within which AI technology continues to develop,” argues Lappin, who holds positions at Queen Mary University of London, King’s College London, and the University of Gothenburg.
Rather than dwelling on far-off sci-fi fears about superintelligent machines, Lappin zeroes in on the real, immediate challenges that AI presents today. These include corporate control over AI development, the spread of online misinformation, and the urgent need for smart, proactive policy decisions.
Lappin identifies tech monopolization as a central concern. Large corporations now dominate AI development: tech companies produced 32 significant machine learning models in 2022, while universities produced only three. This concentration of power, he argues, allows corporations to shape research priorities around commercial interests rather than public benefit.
Environmental harm poses another urgent challenge. Training ChatGPT-4 reportedly consumed about 50 gigawatt-hours of electricity—equivalent to the yearly usage of thousands of American households. The production of microchips for AI systems involves toxic chemicals, vast amounts of water, and enormous quantities of electricity, with chip factories drawing up to 100 megawatts of power.
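As a rough sanity check on that comparison, the 50 gigawatt-hour figure can be converted into household-years. The average annual consumption per U.S. household used below (about 10,600 kWh) is an illustrative assumption, not a figure from the book:

```python
# Back-of-the-envelope check: how many household-years of electricity
# does 50 GWh represent?
TRAINING_ENERGY_KWH = 50_000_000       # 50 gigawatt-hours in kilowatt-hours
HOUSEHOLD_KWH_PER_YEAR = 10_600        # assumed average U.S. household usage

households = TRAINING_ENERGY_KWH / HOUSEHOLD_KWH_PER_YEAR
print(f"Roughly {households:,.0f} U.S. households' annual electricity use")
```

Under that assumption, 50 GWh works out to the annual consumption of several thousand households, consistent with the comparison in the text.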
Key Policy Recommendations
To address these challenges, Professor Lappin outlines several key policy priorities. First, comprehensive international regulation of tech companies is essential, as individual nations lack the resources and enforcement powers to tackle these global issues. International trade agreements could provide mechanisms for enforcing effective regulations.
Second, intellectual property rights should be reformed to ensure rights holders are compensated when their work is used to train AI systems.
“At a minimum, these corporations should be required to obtain the consent of the copyright holders for the protected data that they use. In the interests of transparency, they should also be obliged to list the materials on which their systems are trained,” notes Lappin.
Tackling AI Bias, Disinformation, and Deepfakes
Lappin also addresses widespread bias in AI decision-making systems across healthcare, hiring, and financial services. He argues that effective measures to combat disinformation and hate speech online must be policy-led and enforced, balancing free expression with protection from harmful content, since self-regulation by tech companies has so far proven ineffective.
He argues that disinformation and deepfakes pose a serious threat: as generative AI becomes increasingly sophisticated, distinguishing fact from fiction grows ever more difficult.
“We could soon find ourselves living in an environment in which separating fact from malicious fiction becomes increasingly difficult. At that point, the shared beliefs needed to sustain cohesion in the public domain start to give way to doubt, recrimination, and chaos,” Lappin warns.
Finally, governments should prepare for potentially widespread job displacement as AI automation spreads across diverse sectors. Significant public investment in services and alternative forms of employment may be essential to prevent major social disruption.
“These aren’t matters that we can afford to leave solely to the vicissitudes of the market, and to the tech corporations that play such a dominant role in shaping that market,” Lappin concludes.