
Collaborative and human-centric approach to AI governance vital for the world

By Chen Haiming | chinadaily.com.cn | Updated: 2024-10-21 09:51
Figurines with computers and smartphones are seen in front of the words "Artificial Intelligence AI" in this illustration taken on Feb 19, 2024. [Photo/Agencies]

Artificial intelligence (AI) has emerged as a transformative force, promising to enhance productivity, improve decision-making and address a variety of global challenges. However, alongside its vast potential come significant ethical considerations, employment disruptions and safety concerns. As we navigate this complex landscape, it is crucial that AI governance adopt a collaborative and human-centric approach that balances innovation with safety, upholds ethical standards and ensures that trustworthy AI benefits all of humanity.

Collaboration is imperative in AI governance because the technology evolves rapidly and transcends national borders. Various stakeholders—including governments, international organizations, academia, private sector companies and civil society—must come together to create a comprehensive framework that addresses the multifaceted challenges posed by fast-evolving AI models. "We are seeing life-changing technological advances…And life-threatening new risks—from disinformation to mass surveillance to the prospects of lethal autonomous weapons," says UN Secretary-General António Guterres.

As such, prioritizing collaboration at all levels is essential. Achieving effective AI governance necessitates a multi-stakeholder approach that engages diverse actors to contribute their perspectives and expertise. Governments can share effective practices and regulatory frameworks, while technology companies can collaborate with academics to conduct ethical assessments of their algorithms. Civil society organizations and citizens should be central to these efforts, ensuring that the voices of affected communities are heard and promoting transparency through public dialogues and participatory decision-making processes.

Moreover, this collaboration should be framed within the guidelines set forth by the United Nations, which advocates for global standards and fosters dialogue on AI governance. One valuable mechanism in this context is international soft law: regulatory frameworks that, although not legally binding, can still guide international conduct and enhance cooperation. On March 21, 2024, the UN General Assembly adopted a landmark resolution on the promotion of "safe, secure and trustworthy" artificial intelligence systems for sustainable development.

As AI technologies advance, policymakers face the dual challenge of fostering innovation while ensuring safety. This balance is delicate: excessively restrictive regulations may stifle innovation, whereas overly lenient ones could lead to detrimental societal consequences. To achieve equilibrium, ongoing dialogue among stakeholders is necessary. Engaging technologists in the regulatory process can help identify risks early in the development phase.

More importantly, adaptive regulatory frameworks, designed to evolve alongside AI technologies, can maintain this balance, ensuring that safety measures are integrated without impeding innovative solutions. AI governance should also draw on lessons from existing models of technology governance, such as those for nuclear, biological and chemical weapons.

At the core of effective AI governance is a commitment to human values and rights. AI systems must be designed with users in mind, ensuring equity, accountability and transparency. This human-centric approach recognizes that technology should serve people and contribute positively to societal well-being. Ethical considerations, such as bias mitigation, informed consent and data privacy, are vital facets of safeguarding individual rights. In addition, a human-centric focus entails recognizing the potential risks associated with AI, such as the perpetuation of existing societal biases or the misuse of technology for surveillance and control. Therefore, regulatory frameworks must ensure responsible AI development and implement measures to protect individuals from harm.

One of the most pressing concerns regarding AI is its potential impact on employment. As automation and AI-driven technologies evolve, certain jobs may become obsolete, thereby resulting in significant economic and social repercussions. "AI can endanger workers, worsen poverty and lead to unemployment and instability," says Hisham Khogali. Therefore, addressing job displacement is a critical aspect of AI governance.

Addressing AI-induced unemployment requires concrete countermeasures. Proactive investment in workforce retraining and upskilling programs is necessary to prepare individuals for the jobs of the future. What's more, financial experts should consider the possibility of imposing additional taxes on businesses that benefit from AI automation, with the proceeds used as relief funds to compensate workers who have lost their jobs to this automation. An educational paradigm that nurtures human creativity, critical thinking and emotional intelligence—skills that are less likely to be automated—will empower individuals and help minimize the risks associated with AI-induced job displacement.

Despite these risks and negative impacts, AI holds immense potential to contribute substantially to the achievement of the United Nations' 2030 Agenda for Sustainable Development, which encompasses 17 Sustainable Development Goals. By leveraging AI, nations can effectively address complex challenges such as poverty, climate change and healthcare. For instance, AI can enhance efficiency in resource management, allowing for smarter agriculture and energy solutions, while in the healthcare sector, AI innovations can lead to better diagnostics and personalized treatment plans, significantly improving health outcomes globally.

However, realizing this potential hinges on ensuring that AI technologies are accessible to developing countries, thereby bridging the digital divide. International cooperation plays a pivotal role in this context, as countries must collaborate to share cutting-edge knowledge and advanced technologies so as to narrow the gap between high- and low-income countries. Harnessing the benefits of AI for the achievement of the United Nations Sustainable Development Goals requires a cohesive and coherent approach to governance that prioritizes inclusivity. Scientific and technological powers should refrain from suppressing the technological development of other countries under the pretext of geopolitics and ideology, and from hindering the export of AI technology and advanced chips.

As one of the leading countries in AI research and implementation, China actively advocates a cooperative and people-centered global AI governance model aimed at managing risks while fostering growth. It calls for the responsible use of AI to benefit mankind and actively promotes AI capacity-building in developing countries. China's emphasis on AI governance is evident in its hosting of the 2024 World AI Conference and the High-Level Meeting on Global AI Governance, as well as in the unanimous adoption, by the 78th session of the United Nations General Assembly, of its resolution on strengthening international cooperation on capacity-building of artificial intelligence.

In short, the path to effective AI governance requires a collaborative and human-centric approach that successfully balances innovation with safety. Adopting a multi-stakeholder framework rooted in global cooperation will empower individuals and guide the responsible development of AI technologies. Although the future of AI is filled with great potential, it is essential that we collaborate to establish a governance framework that prioritizes ethical principles, human welfare and sustainable development for all, while mitigating risks.

The author is a professor at the Foreign Studies College and director of the Center for Global Governance and Law, Xiamen University of Technology. The views don't necessarily reflect those of China Daily.

