AI revolution: Friendly makeover or hostile takeover?
News source: China Daily | Release date: 2024-01-04
A technology that few had heard of until relatively recently suddenly became mainstream over the past year.
 
 
A smartphone is used to find ways to access ChatGPT. WANG GANG/FOR CHINA DAILY

It seems as if everyone is talking about artificial intelligence, or AI, as it holds the promise of remaking our world. However, for many observers, including top experts in the field, it threatens to turn that world upside down.
 
This has prompted calls to slow down, and at the very least to make sure that guardrails are in place so that this technological revolution does not turn us all into losers. Public discussion of how to regulate the rapidly evolving technology is likely to take much of the limelight this year and in the foreseeable future.
 
In Beijing, Dou Dejing, an AI expert and an adjunct professor in the Electronic Engineering Department of Tsinghua University, said AI's influence can now be felt in every aspect of our lives.
 
Of particular interest to many is generative AI, which refers to algorithms that can be used to create new content, including audio, code, images, text, simulations and videos. One prominent example of this is ChatGPT — software that allows a user to ask it questions using conversational, or natural, language. ChatGPT was released on Nov 30, 2022, by the United States company OpenAI.
 
Such developments raise the possibility of AI passing the Turing test, which is named after the celebrated World War II code breaker Alan Turing and has long been accepted as the benchmark for judging whether a machine exhibits human-like intelligence.
 
Dou said: "I can't say when exactly this will happen, but it's quite possible that within a couple of years, an authoritative institution will announce that a conversational AI based on large language models of the type that power ChatGPT has passed the Turing test. This would be a highly significant milestone."
 
The emergence of generative AI is seen as transformational because the technology can benefit almost all businesses and individuals by greatly improving their efficiency. For example, white-collar workers can use it to draft reports, generate advertising ideas, summarize documents and even write code.
 
However, the awe generated by AI's ability to turn a universe once reserved for science fiction into reality is accompanied by a sense of foreboding about the dangers that lurk behind this great advance for humanity.
 
According to the AIAAIC repository in Cambridge, England, which tracks incidents related to the risks and harm posed by AI, the number of such incidents has risen eightfold since 2016, when AI first came into the public spotlight.
 
Zhang Xin, associate professor of law at the University of International Business and Economics in Beijing, said the proliferation of generative AI presents two main types of risks.
 
She said generative AI amplifies problems presented by traditional AI, such as potential violations of personal privacy, threats to data security, and algorithmic bias, making such problems more complex and harder to detect and remedy.
 
For example, a report by IBM's data and AI team in the US is said to have found that computer-aided diagnoses for black patients are less reliable than those for white patients, because the underrepresentation of women and minority groups in training data can skew predictive AI algorithms.
 
 
 
Visitors experience an AI-aided product at Baidu World 2023 in Beijing in October. ZHU XINGXIN/CHINA DAILY

Serious consequences
 
Generative AI also presents many new risks that only become apparent when it is deployed in new settings, Zhang said. Labor displacement, the spread of false information, and so-called hallucinations are among the concerns raised in relation to generative AI.
 
AI hallucination occurs when an AI model generates false or illogical information that is not based on real data or events, but presents it as fact. Hallucinations can have serious repercussions when people rely on AI for high-stakes tasks.
 
For example, a New York lawyer got into trouble in May for including fictitious legal research in a court filing after turning to ChatGPT for help in writing a legal brief.
 
In March, AI experts signed an open letter calling for a pause in the development of systems more powerful than OpenAI's GPT-4, warning that AI laboratories are deploying "ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control".
 
In addition to these risks, some experts have warned that AI's revolutionary impact on economic production threatens to further widen the digital divide between developing and developed countries.
 
António Guterres, the UN secretary-general, said in November that without immediate action, AI will exacerbate the enormous inequalities that already plague the world. For example, no African country is in the top 50 for AI preparedness.
 
Zhang said, "Research indicates that the more advanced AI becomes, the stronger its empowering effect on developed countries, yet its impact on developing countries remains very weak.
 
"The most advanced AI applications, and the industrial chains that AI can best empower, are all deployed in developed countries. The more advanced AI becomes, the more labor it can replace. People in developing countries could be among the first to lose their jobs to AI, and they may not be able to reap any benefits from the AI-generated value chain."
 
All this suggests that generative AI represents a transformation that requires careful supervision and a new regulatory framework, at both national and global levels, to maximize the benefits while mitigating the risks.
 
With that in mind, governments around the world have been scrambling to build a regulatory framework in recent years. The AI Index Report 2023 published in April by Stanford University in California states that legislative records from 127 countries show that 37 bills specifically referring to artificial intelligence became law in 2022, compared with just one in 2016.
 
In China last year, an interim regulation on managing generative AI services came into force. The regulation sets out a range of measures to promote the development of generative AI technology on one hand, while establishing basic standards for providers of generative AI services on the other.
 
In October, the White House issued an executive order setting out US plans for regulating AI. Last month, the European Union reached agreement on its AI Act, the first comprehensive AI law. The legislation includes restrictions on governments' use of AI in biometric surveillance, as well as a regulatory framework for AI systems such as ChatGPT.
 
 
 
A visitor interacts with a digital figure at an exhibition in Shanghai in May. CHINA DAILY

Competitive edge
 
Zhang said: "The year 2023 has been a breakout one for AI regulation. The primary reason for this is the emergence of generative AI, which overturns the regulatory frameworks previously established based on narrow AI designed to perform a specific task. Generative AI can be applied to various settings, which means its risks are inherently dynamic, and calls for new regulatory frameworks.
 
"The race for AI regulation is also fueled by the recognition that those who possess the ability to exert greater influence in shaping global AI governance and formulate more compelling regulations will secure a competitive edge in the global AI arena.
 
"For example, in the previous round of regulation on social media, the European Union reaped many regulatory dividends through what has been called the Brussels effect."
 
Zhang Xin's views are echoed by Zhang Linghan, a professor at the Institute of Data Law at China University of Political Science and Law in Beijing. She said that in the relatively uncharted territory of AI governance, the countries that act fastest can gain greater influence and even wield considerable authority in setting key standards.
 
However, Zhang Linghan said the lack of global consensus on AI governance has led to variations in legislative goals and technical standards across countries, posing challenges for collaboration on global governance.
 
"AI risks can spread through the internet and transcend national borders. Any country that uses AI technology or AI-generated content cannot isolate itself from the risks posed by AI to humanity as a whole," Zhang Linghan said.
 
As part of the Bletchley Declaration on AI safety, signed at the first global AI Safety Summit in the United Kingdom in November, 28 countries and the EU agreed on the need for regulation and a new global effort to ensure AI is developed and used safely and responsibly. But what should be done, and by whom, remain open to debate.
 
Given the strategic technological competition for AI supremacy, how governments will collaborate in practice to ensure AI safety remains a challenging question.
 
Zhu Rongsheng, an expert from Tsinghua University's Center for International Security and Strategy, said that in promoting consensus on AI global governance, one of the obstacles is the choice between genuine multilateralism and pseudo-multilateralism.
 
"Genuine multilateralism is based on the United Nations framework, with the UN at its core, while the rules that are shaped by certain countries relying on their alliances are the products of pseudo-multilateralism," Zhu said.
 
"Since World War II, the UN has served as a pivotal international governance platform, uniting countries worldwide. It stands out as the most fitting platform for countries with shared concerns to collaborate and voice their opinions, which ensures wide representation and a governance plan that reflects international concerns and consensus."

Global efforts
 
In October, in its Global AI Governance Initiative, China reiterated its support for discussions within the UN framework to establish an international institution to govern AI and coordinate global efforts.
 
Later that month, Guterres announced the formation of a 39-member global advisory panel comprising representatives from different countries and fields to tackle challenges in worldwide AI governance.
 
Zhang Linghan, one of the two Chinese experts on the UN advisory body, said, "AI governance should not solely represent developed countries' opinions.
 
"Developing countries also have the right to express their demands. Conducting AI governance within the framework of the UN can help include more countries, particularly developing ones, in the governance process.
 
"Advocating and promoting the participation of all countries in AI governance is also a manifestation of the concept of a community of shared future for mankind. If the development of AI technology has led to the emergence of a 'technological underclass', it will be a failure of the international governance of AI."
 
Zhang Xin said global governance of AI is likely to evolve in three main directions.
 
"The first direction is legalization. Current global governance of AI relies heavily on nonbinding measures. It is now widely acknowledged that relying solely on self-regulation by AI companies is insufficient, so we can expect more laws and regulations on AI in the future. The second direction is smart regulation, which means using AI as a tool to regulate AI.
 
"The third direction is a more inclusive global governance approach, incorporating the interests of developing countries into the governance framework. I believe China will play a significant role in this regard."