Global Memos are briefs by the Council of Councils that gather opinions from global experts on major international developments.
A woman passes in front of an artificial intelligence sign in shades of futuristic blue at the World Artificial Intelligence Conference in Shanghai, China, on July 6, 2023. REUTERS/Aly Song

World leaders this month have an opportunity to bridge the gap among the many divergent approaches to governing artificial intelligence (AI). The International Telecommunication Union, forty other UN agencies, and the government of Switzerland are convening the 2024 AI for Good Global Summit in Geneva on May 30–31 to explore how AI can capitalize on the immense quantities of human-generated data to drive sustainable development. Four Council of Councils experts preview the summit and weigh in on what should be the highest priority for leaders seeking consensus on AI policy.

Aligning AI Economic Opportunities With Human Rights in the Global South


The rapid acceleration of AI development in recent years has highlighted its potential to help achieve the 2030 Agenda for Sustainable Development and its Sustainable Development Goals (SDGs). Countries are already investing in integrating AI into agriculture, health, education, sustainability, and economic productivity. For instance, smart and low-carbon cities, supported by interconnected technologies and autonomous electric vehicles, can enable smart demand response, allowing electrical grids to better match supply with demand, saving billions in investment, and advancing SDGs seven (affordable and clean energy), eleven (sustainable cities and communities), and thirteen (climate action).

However, AI-facilitated technological advancements toward the SDGs could instead primarily benefit a handful of countries, largely in the Global North, that already have the public and private resources and infrastructure to scale their development. Moreover, these technologies require significant computing power, available to only a few wealthy nations, resulting in high energy consumption and a carbon footprint that ultimately harms marginalized coastal communities in the Global South. The International Energy Agency estimates that data centers and AI could account for up to 4 percent of global electricity demand, roughly equivalent to the electricity used by Japan.

Policymakers in the Global South are rightly pressing for more equitable resource allocation to reap the benefits of AI. However, much of this conversation overlooks the importance of creating economic opportunities that align with social benefits and protect citizen rights. Many countries lack the necessary regulatory and procedural safeguards to ensure AI technologies and large language models do not violate individual privacy, exclude marginalized populations, or exacerbate systemic discrimination. Even when countries are exploring policy interventions, there is inadequate oversight of public sector implementation of AI technologies, a lack of independent adjudication of harms, poor grievance-redress mechanisms, and misalignment with international human rights principles. Additionally, the development process of those interventions is not sufficiently transparent or evidence-driven, and often excludes the communities those interventions are meant to serve.

For example, smart cities relying on autonomous or remotely controlled vehicles need to incorporate adequate privacy and human autonomy protections in their design. Otherwise, they risk becoming tools for stalking, domestic violence, and large-scale surveillance. With 37 percent of women in low- and middle-income countries experiencing physical violence, the relevance of rights-based innovation and policies in the Global South is clear and urgent.

At the AI for Good Summit this year, it is critical not only to prioritize closing the Global North-South opportunity gap but also to ensure that economic mobility is underpinned by the safeguarding of fundamental rights. This can be achieved by incorporating robust multistakeholder processes, inclusive of diverse Global South voices, into international and national consensus-building on AI policies.

Fostering a Responsible and Inclusive AI Ecosystem


The rapid development of generative AI has far-reaching implications. Discussions on AI governance should focus on how to use digital transformation and AI to advance economies and societies across the globe in an inclusive and sustainable way.

There should be three main objectives at the summit. First, it should foster international cooperation and robust partnerships across sectors, paving the way for the sharing of knowledge, resources, and best practices. Such collaboration can lead to innovative solutions that are scalable and sustainable. Effective international collaboration can also lead to the establishment of consistent frameworks and standards, ensuring that AI technologies are used responsibly and beneficially around the world and, in the process, contribute to the achievement of the UN Sustainable Development Goals.

Second, the summit should promote dialogue on establishing and strengthening ethical guidelines for the development and use of AI. It should prioritize discussions on transparency, accountability, and fairness in AI systems, such as addressing biases in AI algorithms, ensuring data privacy, and developing robust mechanisms for oversight and governance. Promoting ethical AI will not only build public trust, but also prevent misuse and potential harm. In addition, it would be beneficial to build on efforts undertaken in other international forums, such as those taking place under the Italian presidency of the Group of Seven (G7), which seek to address the challenges and risks associated with AI and integrate ethical considerations into its development, deployment, and use.

Third, there should be a strong focus on bridging the digital divide and ensuring that the benefits of AI reach marginalized and underserved communities. The summit should advocate for policies and initiatives that provide equitable access to AI technologies, particularly in developing countries, including investment in infrastructure, education, and capacity-building, to enable those communities to use AI for economic and social development.

When it comes to consensus AI policy, the top priority for leaders should be to create a comprehensive and adaptable regulatory framework that balances innovation with safeguards. By prioritizing those areas, leaders can foster a responsible and inclusive AI ecosystem that maximizes benefits while minimizing risks, ultimately contributing to a more equitable and just world.

Steering International AI Governance Toward Greater Unity


The development of AI has created a need for international governance, and existing international mechanisms have actively responded. However, coordinating those diverse mechanisms presents a significant challenge. Currently, the AI governance landscape includes the UN system (e.g., the UN Educational, Scientific, and Cultural Organization's AI ethics recommendation and the International Telecommunication Union's focus groups on AI), regional and plurilateral systems (e.g., the G7 Hiroshima Process on Generative Artificial Intelligence), and multistakeholder initiatives (e.g., the World Intellectual Property Organization Conversation on Intellectual Property and Frontier Technologies). Among these, the AI for Good Summit should serve as a crucial platform to align the various international mechanisms with UN goals.

Given its action-oriented nature, the AI for Good Summit could act like a “neural network,” disseminating UN principles on AI to encourage sustainable development across all mechanisms. Additionally, as specific functional mechanisms can offer clearer AI governance pathways, the summit could leverage their outcomes to formulate more concrete and detailed implementation plans for the United Nations’ sustainable development agenda. This move would establish efficient and rational cooperative interactions, steering international governance toward greater unity.

Beyond engaging with other international AI mechanisms, the AI for Good Summit should build up its structure and bolster its influence, inclusivity, and representation regarding AI governance. By incorporating influential industry leaders and delivering more targeted reports and policy recommendations, the summit can elevate AI to the attention of national leaders. Currently, discussions on international rules and security are predominantly technical and tactical, lacking strategic emphasis. To address this gap, the summit should aim to shift the dialogue toward more strategic considerations, highlighting the long-term implications and benefits of AI. Furthermore, as a multistakeholder platform, the summit should integrate the needs of both businesses and governments, offering comprehensive recommendations and strategies regarding the role of enterprises in AI development.

Given the complexity and challenges in coordinating global AI governance, the AI for Good Summit is uniquely positioned to unify those efforts and advance the strategic implementation of AI for sustainable development. By strengthening its internal capabilities and fostering robust dialogue among various stakeholders, the summit can ensure a more coordinated and effective international AI governance landscape.

How ASEAN Can Contribute to AI Governance for Good


The Association of Southeast Asian Nations (ASEAN) could provide insights on how regional organizations can be pivotal in fostering responsible and inclusive AI development at the forthcoming AI for Good Summit.

The region offers a unique perspective on dealing with AI, especially because ASEAN has set a goal of becoming a digital economy and a digital society by 2025. The ASEAN Guide on AI Governance and Ethics, published in February 2024, offers valuable lessons through its voluntary, principles-based guidelines, which emphasize transparency, fairness, security, reliability, privacy, accountability, and human-centricity—ensuring people have the right to determine what happens to them without coercion or compulsion. This adaptable framework enables countries to tailor guidelines to their unique contexts and varying levels of AI readiness across the region. In Southeast Asia, a phased, inclusive strategy that lets countries advance at their own pace while still contributing to regional guidelines is more practical than a rigid, uniform model.

Whereas other jurisdictions have been criticized for overregulation, ASEAN countries have attempted to create an environment conducive to innovation. Singapore, for example, developed AI Verify, the world's first AI governance testing framework and software toolkit, which combines technical tests with process checks of AI models. This initiative reflects Singapore's belief that there is still much to learn about AI before implementing regulatory measures. Similarly, Malaysia, the Philippines, and Thailand have not yet introduced legally binding AI regulation.

The ASEAN example also offers lessons for regional collaboration. The ASEAN Guide proposes establishing a dedicated working group on AI to oversee and coordinate AI governance initiatives throughout the region. This specialized body would enhance knowledge sharing, support capacity-building, and promote the harmonization of AI policies among member states. Furthermore, the Philippines announced plans to introduce a comprehensive ASEAN AI regulatory framework during its 2026 chairmanship of the regional bloc. This initiative could lay the groundwork for a more detailed and legally binding regional framework.

Significant lessons can be learned from ASEAN's attempts to strike the right balance between enabling innovation and providing sufficient guardrails against AI's potential risks and unintended consequences. Regional collaboration can also be a significant step toward wider international cooperation: a phased, inclusive approach that accounts for the digital divide within the region can offer insights even to greater powers as the bloc pioneers a balanced and contextually relevant AI governance model.