The AI age: Navigating five critical global challenges
AI’s hype may be fading, but its true impact is only now unfolding. From geopolitics to jobs, five key tensions will define its role in reshaping our world.

In a nutshell
- International cooperation is vital for responsible AI regulation
- Private companies lead AI innovation, raising questions on public oversight
- Energy-intensive data centers strain resources, outpacing green solutions
When it comes to truly transformative technologies, history shows that we tend to overestimate their short-term impact but underestimate their long-term effects. Artificial intelligence is no exception. After a surge of unprecedented hype, much of it driven by the same technology giants developing these systems, we are now entering a phase of disillusionment. The public is starting to ask: “Where are the promised breakthroughs?” and “Why hasn’t everything changed already?”
But this disillusionment is misleading. Just as no one in the 1990s could have foreseen that the internet would lead people to spend nearly five hours a day on smartphones, AI’s true impact is still unfolding – profound, unpredictable and irreversible.
This report explores five emerging battlegrounds where AI’s long-term influence will be most critical: geopolitics, governance, the environment, the economy and societal cohesion. These areas are already being reshaped, often in ways that are not yet fully visible. Understanding these issues is the first step in preparing for AI’s role, not just as a technological revolution, but as a broader global transformation.
The global AI race: A new geopolitical order
AI is becoming the central arena of geopolitical competition. The United States and China are engaged in a strategic contest to lead in AI development, deployment and governance. The stakes are high: economic dominance, military superiority and influence over global standards.
While the U.S. currently leads in frontier model development and private investment, China is making rapid progress by leveraging vast datasets, state-backed funding and centralized coordination. The European Union, meanwhile, positions itself as the chief regulator, shaping norms even as it lags in capabilities. In 2024, institutions in the U.S. produced 40 notable AI models, surpassing the 15 from China and the three from Europe. Although the U.S. still leads in the number of models, Chinese developments have quickly narrowed the quality gap.
Smaller innovation hubs such as Israel, Singapore and the United Arab Emirates aim to punch above their weight by focusing on strategic niches and ensuring they remain key players in this global race.
As AI advances toward artificial general intelligence (AGI) – a theoretical system that could match human thinking – key questions emerge: Who will control the most powerful AI? Could an AGI breakthrough lead to a new kind of technological hegemony? Might this AI race even spark real-world conflicts, such as a military clash over Taiwan’s semiconductor industry?
Global guardrails are necessary to prevent a race that leads to instability or misuse, but the current trust deficit between major powers makes such cooperation challenging.
Facts & figures
What is artificial general intelligence?
Artificial general intelligence (AGI) is the theoretical capability of a machine to understand or learn any intellectual task that a human can perform. This type of AI seeks to emulate the cognitive functions of the human brain. Although it remains a hypothetical concept at present, there is potential for AGI to replicate human-like skills such as reasoning, problem-solving, perception, learning and language comprehension. Research into AGI has been ongoing since the inception of AI development, yet there remains no consensus among scholars on what exactly qualifies as AGI or the most effective approaches to achieve it.
Who governs intelligence?
The most advanced AI systems today are being developed not by governments but by private companies. In the U.S., OpenAI, Google, Anthropic, Meta and others are in a race to push the boundaries of capability and ambition, often outpacing public regulators.
Unlike earlier general-purpose technologies, such as electricity, computing or the internet, where governments played a major role in initial development, the current wave of AI in the U.S. is largely driven by private enterprises. In China, this is only partially true, as the Chinese Communist Party directs private entities toward developments that serve centralized state interests. This marks a notable shift: groundbreaking innovation is taking place primarily outside public institutions. It also creates a fundamental conflict, as corporations control a technology with far-reaching public implications yet operate based on private interests.
While internet-based services are indeed controlled by tech companies, they primarily function as platforms for communication and distribution. AI, in contrast, is a decision-making technology capable of interpreting data, generating content and even acting autonomously. This distinction brings a heightened risk of bias, misuse and unintended consequences. For instance, once developed, an individual could leverage an open-source AGI system to autonomously generate and deploy a personalized bioweapon or manipulate financial markets by identifying vulnerabilities, executing large-scale trades and destabilizing systems faster than regulators can respond.
Like the adage that “war is too important to be left to the generals,” many argue that AI is too important to be left solely to tech companies. Public oversight is not a bureaucratic burden, but a democratic necessity.
As AI progresses toward AGI – however that contested concept is ultimately defined – governments may be forced to assert stronger control through measures such as licensing requirements, mandatory audits or even nationalization. If systems can outthink humans across many fields, leaving them in purely private hands raises serious concerns. How can we ensure that oversight is both competent and democratic?
Moreover, AI, like any digital technology, knows no borders. Its development, deployment and impact are inherently transnational. This means that effective AI regulation requires cross-border cooperation. International organizations such as the Organisation for Economic Co-operation and Development (OECD), which promotes collaboration among countries on ethical and efficient AI governance, should play a key role in establishing standards, aligning principles and preventing a global race to the bottom in regulation.
Last year, there was a notable increase in global collaboration on AI governance. Key organizations, including the OECD, the EU, the United Nations and the African Union, have rolled out frameworks that emphasize transparency, trustworthiness and other fundamental principles of responsible AI.
Intelligence at the cost of sustainability?
Training advanced AI models requires immense resources, and the environmental impact cannot be overlooked. Large data centers, which power these advancements, consume enormous amounts of electricity and water. Scientists estimate that in 2022, global data centers consumed around 460 terawatt-hours (TWh) of electricity, roughly equivalent to France’s total annual electricity consumption.
The International Energy Agency forecasts that global electricity demand from data centers will more than double by 2030, reaching approximately 945 TWh. A significant factor driving this surge will be AI, as electricity demand from AI-optimized data centers is expected to increase more than fourfold by 2030.
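The scale of this projection can be checked with simple arithmetic. As a rough illustration – treating the 2022 figure of 460 TWh as the baseline for the 2030 projection of 945 TWh, which is an assumption, since the IEA’s own baseline year may differ – the implied compound annual growth rate works out to a little over 9 percent:

```python
# Back-of-envelope check of the data-center electricity figures cited above.
# Assumption: the 2022 estimate (460 TWh) serves as the baseline for the
# 2030 IEA projection (945 TWh); the IEA's actual baseline year may differ.

BASE_YEAR, BASE_TWH = 2022, 460.0
PROJ_YEAR, PROJ_TWH = 2030, 945.0

growth_factor = PROJ_TWH / BASE_TWH          # overall multiple (~2.05x)
years = PROJ_YEAR - BASE_YEAR                # 8 years
cagr = growth_factor ** (1 / years) - 1      # implied compound annual growth

print(f"Overall growth: {growth_factor:.2f}x over {years} years")
print(f"Implied compound annual growth rate: {cagr:.1%}")
```

Sustained growth at that rate, compounding year over year, is what makes the strain on grids and renewable buildout so acute.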

As the demand for generative AI and real-time inference continues to rise, so does the environmental cost associated with it. This creates a challenging dilemma between advancing AI technology and protecting the environment. AI has the potential to be a valuable tool in addressing environmental issues, whether by optimizing energy grids, modeling climate change or speeding up R&D in green technologies. However, in its current form, AI could be contributing more to the climate crisis than offering solutions. The expansion of computing infrastructure often outpaces the growth of renewable energy, particularly in regions where major cloud service providers are building massive data centers.
Innovations in energy-efficient chips, model optimization and sustainable cooling are advancing, but the environmental cost of creating increasingly powerful AI remains a major worry.
Job creation, displacement and the future of work
“We are being afflicted with a new disease… technological unemployment.” This warning was not issued by a modern-day tech pessimist but by one of the greatest economists of all time, John Maynard Keynes. Writing in 1930, Keynes speculated that within a century, the “economic problem” might be solved, leaving humanity to work only a fraction of its hours and grapple instead with how to use its leisure.
While his forecast of mass unemployment due to automation has not yet materialized, as technology created new jobs alongside those it displaced, his second prediction raises questions today. A hundred years later, society may be approaching a world defined more by leisure than labor, where material needs are no longer the primary focus.
Many believe that AI holds this promise, or threat, in which AI-driven abundance clashes with fears of a jobless future. Estimates suggest that hundreds of millions of jobs globally could be partially automated in the coming decades. As models grow more capable, entire categories of work may vanish, and with them, human know-how. Historical precedents, from the Industrial Revolution to the internet age, show that economies adjust, often through painful transitions and rising inequality. If AGI becomes reality, the economic stakes will multiply. New social contracts, such as universal basic income or shorter workweeks, may be necessary. Training workers for jobs that do not yet exist and ensuring AI augments rather than replaces human potential will be critical.
Inequality, inclusion and the fabric of trust
AI could serve as a great equalizer. AI tutors might democratize education. AI diagnostics could provide healthcare to remote areas. AI assistants may help underserved populations navigate bureaucracy, learn new skills and access vital services.
But these outcomes are not guaranteed. Without proactive policies, the opposite is more likely to occur. First, at present, access to AI tools is uneven, limiting most benefits to tech-savvy and wealthier individuals.
Second, data biases can reinforce discrimination. Language models often fail to reflect the diversity of human cultures, voices and needs. Because these models mirror human biases, overreliance on them – combined with the psychological reluctance of humans to challenge a machine’s judgment – can amplify those biases.
Third, AI systems deployed by for-profit companies may further erode trust. In the absence of transparency, accountability and inclusivity, public distrust in AI systems – and in the entities that deploy them – may lead to backlash.
As AI assumes roles traditionally held by teachers, doctors, judges and bureaucrats, we must ask: Who is included in the design of these systems? Whose values are encoded in the algorithms? And how do we ensure that technology strengthens, rather than fractures, our social fabric?
From hype to responsibility
We are moving beyond the initial hype cycle of AI and entering a more complex and consequential phase. The five tensions outlined above are not merely abstract dilemmas; they represent the key battlegrounds that will determine the future of the AI age. While the world is not yet prepared for this transition, readiness is not a static condition. It is a choice.
Governments need to adopt a long-term vision that extends beyond the next election cycle. Tech companies should prioritize social responsibility over quarterly earnings. Citizens must stay engaged, rather than passively consuming information. These are all big asks. The international community should view AI not merely as a new technology, but as a defining force shaping the 21st century.
The question is not whether the change will come. It is whether we will shape it, or be shaped by it.
Scenarios
More likely: Global AI race escalates, prioritizing power over equity
In this scenario, the global AI race intensifies to the point where technological dominance becomes the primary strategic objective. The U.S. and China, engaged in intense competition, prioritize speed and power over ethics, collaboration or domestic impacts. Regulations are patchwork, reactive and mostly symbolic. Private companies face few restrictions, and governments invest heavily in AI for military and commercial purposes.
As a result, the societal, economic and environmental applications of AI lack coordinated oversight. Inequality worsens, job displacement accelerates without adequate safety nets, and environmental costs mount. The world undergoes significant AI-driven change but without a clear plan to ensure stability or fairness.
Less likely: AI technology is used democratically for global progress
Here, Western democracies lead in integrating AI development within inclusive, citizen-focused frameworks. Inspired by economist Daron Acemoglu’s vision, governments pursue “democratic people power” – using public policy, education, labor protections and governance to guide AI toward widespread social benefits. International cooperation grows stronger, with organizations such as the OECD, the UN and the EU establishing global standards. AI is used not just for profit or power, but to improve public services, increase worker productivity and tackle issues like climate change and health equity. Although challenges remain, this coordinated effort helps harness AI for the many, not just the few, ensuring democracy, not technological oligarchy, shapes the future.