Davos Newcomers: The Hidden Agenda of Using Technological Leverage to Shift Global Power

21/01/2026

The town of Davos, high in the Swiss Alps, has long served as a global stage where elites showcase power. This January, however, when Microsoft's Satya Nadella, Anthropic's Dario Amodei, and Google DeepMind's Demis Hassabis walked side by side onto the stage of the World Economic Forum, a new form of power was quietly taking shape. This is neither a traditional geopolitical game nor a business competition among multinational corporations, but a global power restructuring driven by artificial intelligence. The message these Silicon Valley leaders brought was clear and firm: Slow down? Absolutely not. The proliferation of artificial intelligence must accelerate, even if it means the job market will face shocks in the coming years.

From the fervent pursuit of AI agents last year to the pragmatic discussions on large-scale implementation this year, the discourse at Davos reveals a deeper reality: AI leaders are substantively reshaping global economic rules, labor structures, and even national competitive landscapes through the speed and scale of technological deployment. The code in their hands is becoming a more effective tool of power than diplomatic rhetoric.

Speed Is Power: The Global Agenda of Silicon Valley's Overlords

At the main venue in Davos, Microsoft CEO Satya Nadella's speech set the tone for the entire AI discussion. Nadella, who presides over the world's largest software ecosystem and a company holding roughly 27% of OpenAI, painted a picture of AI capabilities leaping forward exponentially: from coding on command to natural conversation, from delegating small tasks to round-the-clock autonomous agents, the pace of progress is astonishing. He emphasized that although long-term consistency still requires refinement, the systems are continuously improving under human supervision.

This obsession with speed is not simple technological optimism but strategic positioning. When Nadella claims that global society must reach a point where AI is practically used to change outcomes for people, communities, nations, and industries, he is essentially setting a timetable for global technology adoption. Microsoft's dual-pronged approach, embedding Copilot services across its entire product line while shaping the development path of foundational models through OpenAI, lets it simultaneously control the evolutionary pace of both the application layer and the foundational layer.

Another dimension of the speed agenda is reflected in the discourse of geopolitical competition. Anthropic co-founder and CEO Dario Amodei candidly stated during a discussion on AI governance: "Not selling chips to China is one of the biggest things we can do to ensure we have time to address this issue." He was referring to preventing the risk of AI spiraling out of control. This statement elevates technology export controls directly to the level of human survival security, skillfully framing commercial competition as a battle for the defense of civilization. When Amodei warned that selling Nvidia H200 chips to China would have serious consequences for U.S. leadership in AI, he was essentially advocating for a new containment policy based on technological monopoly.

The pressure of this speed race has already reached the corporate execution level. Srinivas Tallapragada, Chief Engineering and Customer Success Officer at Salesforce, revealed at Davos that the company is deploying forward-deployed engineers to shorten the feedback loop between customers and product teams, while rolling out pre-built agents, workflows, and playbooks to help customers redesign their business processes and avoid falling into "pilot purgatory." These initiatives indicate that scaled deployment has become the new benchmark for measuring AI value, and the first companies to achieve scale will gain the power to define industry standards.

Reshaping Work and Value: Technological Unemployment Among White-Collar Workers

If last year's Davos was still marveling at the creativity of AI, this year's discussions have focused more on its structural impact on the labor market. Peter Körte, Chief Technology and Strategy Officer at Siemens, offered a precise analogy: AI is doing to knowledge workers—that is, white-collar workers—what robots did to blue-collar workers. This analogy reveals the continuity of technological substitution: from muscle to brain, the wave of automation is spreading upward along the value chain.

Amodei's prediction is more specific: AI may eliminate half of all entry-level white-collar jobs. Although he acknowledges that the labor market has not yet experienced large-scale impacts, changes in the coding industry are already visible. The power of this prediction lies not in its accuracy but in how it shapes the expectations of businesses and policymakers. When the CEOs of top AI companies openly discuss large-scale job displacement, business owners accelerate investments in automation, educational institutions adjust their curricula, and governments reconsider social security systems: the prophecy accelerates its own fulfillment.

The displacement narrative, however, is only half the story. Demis Hassabis of Google DeepMind offers a relatively optimistic counterpoint, anticipating that new and more meaningful jobs will be created. For undergraduate students, he suggests that mastering these tools may be a better choice than conventional internships, because "you are leaping ahead for the next five years." Hassabis's advice essentially redefines the direction of human capital investment: from accumulating industry experience to mastering AI tools, a shift that will reshape career development paths and the value of higher education.

Entrepreneurs in the automation field offer a more granular measurement framework. Neeti Mehta Shukla, co-founder and Chief Impact Officer of Automation Anywhere, argues that enterprises must go beyond measuring automation's impact through labor savings alone. She cites customer cases in which improved data quality, higher customer satisfaction, or shifting more employees onto new tasks proved better indicators than simply examining unit output costs. This shift in perspective matters: when companies begin to evaluate AI with multidimensional metrics, the focus of deployment moves from cost reduction to value creation, though this demands more sophisticated evaluation capabilities from management.

Energy, Geopolitics, and Governance: The Game of Expanding Infrastructure

AI's computing power thirst is reshaping the global energy landscape. At Davos, Mahesh Kolli, President of India's renewable energy company Greenko Group, introduced the concept of electrostates—countries transitioning to electricity and clean energy sources like solar or wind. He pointed out that India is undergoing this electrostate revolution, where clean energy is shifting from a source of household electricity to a source for manufacturing materials, molecules, and AI. This transformation is driving India's competitive position in the global market, as is the case with other advanced electrostates like China.

The coupling of AI and the energy transition creates a new geopolitical logic. Countries with cheap, clean energy may become natural hosts for AI computing power, much like oil resources once determined geopolitical influence in the industrial era. Vaishali Nigam Sinha, co-founder of ReNew Energy, emphasizes that addressing climate change requires cooperation between nations, as climate truly knows no borders. However, when AI's energy demands intertwine with the climate agenda, the line between cooperation and competition blurs—countries must collaborate on grid upgrades and clean energy deployment while fiercely competing to attract AI data centers and computing power investments.

The infrastructure gap may exacerbate inequality in global AI development. Nadella warns that AI deployment will be unevenly distributed worldwide, constrained primarily by access to capital and infrastructure. Realizing AI's potential requires essential preconditions, chiefly attracting investment and building supportive infrastructure. Key infrastructure such as the power grid is fundamentally government-driven, and private companies can only operate effectively once basic systems like energy and telecommunications networks are in place. This framing partially shifts responsibility to governments while setting prerequisites for tech companies' market entry.

The absence of a governance framework presents another challenge. Historian Yuval Noah Harari warns that we lack experience in building a hybrid human-machine society and calls for fostering humility and establishing correction mechanisms. He sharply points out that the most intelligent entities on Earth may also be the most deluded. This philosophical concern stands in stark contrast to the optimism of technology leaders, revealing a fundamental divergence regarding the nature of AI: Is it merely a tool, or is it a new type of intelligent agent?

Scaling Dilemma and Europe's Anxiety

Moving from proof of concept to large-scale deployment, enterprise AI implementation faces a series of practical challenges. At a side event in Davos, Christina Kosmowski, CEO of LogicMonitor, argued that to achieve scalable success with AI, companies should adopt a top-down approach in which the CEO and management identify the highest-value use cases and drive the entire organization to align around them. This emphasis on top-level leadership reflects the shift of AI deployment from a technical-department matter to the core of corporate strategy.

Bastian Nominacher, co-founder and co-CEO of process mining software company Celonis, offers a more specific success formula: achieving returns on AI investment typically requires three things: strong leadership commitment, establishing a Center of Excellence within the enterprise (which he says yields returns eight times higher than at companies without one), and having sufficient real-time data connected to the AI platform. These insights reveal the organizational and managerial challenges behind scaling: the technology itself may be ready, but the enterprise's absorption capacity becomes the bottleneck.

Europe's anxiety in this competition is particularly evident. In a side event called the European Compass, the focus of discussion was on how to restore the continent's declining competitiveness. Lila Tretikov, the AI strategy lead at NEA, bluntly stated that Europe has enough talent and funding to build world-class AI companies—what is lacking is ambition and the willingness to take big risks. This self-criticism reflects Europe's marginalization crisis in the global AI race: despite possessing research strength and regulatory influence, it has been stumbling in translating innovation into large-scale commercial success.

This anxiety partly stems from Europe's cautious attitude toward risk. While American and Chinese companies push AI deployment forward in a "move fast and break things" culture, European enterprises are often constrained by stricter regulatory frameworks and a risk-averse culture. Kosmowski's top-down approach and Nominacher's Center of Excellence model essentially offer European companies a pathway to scale AI while controlling risk, but this requires stronger technical leadership from management.

The Unknown Frontiers of Hybrid Intelligent Societies

When AI begins to permeate society's cognitive layer, a more profound transformation takes place. Noubar Afeyan, co-founder and chairman of Moderna and CEO of Flagship Pioneering, presents a disruptive viewpoint: by applying artificial intelligence to nature, we are on the verge of discovering that nature is a vast collection of intelligent forms, something we have never realized before. Every tree, every virus, every immune cell is a form of intelligence. He believes that the challenge to human safety or security (or, more accurately, the challenge of insecurity) will be the need to adjust our self-image, realizing that with the aid of machine intelligence and natural intelligence we can improve the way we manage nature, the way we extract value from food... new medicines, and the way we prevent diseases.

This perspective of intelligent generalization may fundamentally alter humanity's self-positioning in the universe. If trees, viruses, and immune cells are all regarded as forms of intelligence, then the uniqueness of human intelligence diminishes, and AI becomes merely another node on the continuum of intelligence. Afeyan warns that people may not yet be prepared for the impact such insights will have on humanity and our self-image. This cognitive shock could be more profound than economic disruptions, as it shakes the anthropocentric worldview.

Yoshua Bengio, the Canadian computer scientist known as one of the "godfathers of AI," expressed concern from another angle: today's systems are trained to be too human-like. Many people mistakenly believe these AI systems are like us when interacting with them; the smarter we make them, the more this will happen, and some even make them appear as if they want to resemble us... but it's unclear whether that would be a good thing. He added that humans have developed norms and psychology for interacting with other people, but AI is not truly human. This risk of anthropomorphism may lead humans to place inappropriate trust in AI or develop emotional dependence on it, resulting in catastrophic errors in critical decision-making.

Demis Hassabis expresses caution regarding the job market following the arrival of Artificial General Intelligence (AGI), which he believes could occur within five to ten years and may lead to insufficient work opportunities for people. This raises broader questions about meaning and purpose, not just salaries. He also points out that geopolitical factors and competition among AI companies mean safety standards are being hastily developed. He calls for international consensus, such as establishing minimum safety standards, to proceed at a slightly slower pace so that we can properly prepare society for this. This nuanced call for adjusting the pace reflects a division within the tech elite: finding a balance between competitive pressures and safety needs.


The AI discussions at Davos reveal an emerging new architecture of power: the speed of technology deployment has become the core of competitive advantage, the ability to scale determines the fate of nations and enterprises, and philosophical debates about the nature of intelligence are influencing the formation of governance frameworks. Silicon Valley leaders, by setting technological agendas, predicting societal impacts, and defining success metrics, are essentially exercising a new form of global power—one that does not rely on territory or military force, but is based on control over the evolutionary path of technology and the pace of its adoption.

When Nadella discusses changing national outcomes, when Amodei links chip exports to human survival, and when Hassabis redefines career development, they are all participating in shaping global rules. This power is decentralized, networked, and embedded in technical standards, making it more difficult to counterbalance through traditional diplomatic or regulatory means.

The key issue in the coming years is no longer whether AI is powerful, but who controls the pace of its deployment, who defines its success criteria, and who bears its social costs. Discussions at Davos indicate that technology companies are vying for the right to define these issues, while governments, civil society, and academia must develop more sophisticated technological governance capabilities to maintain democratic accountability and the dominance of human values in this era of hybrid intelligence. The lever of technology is already in the hands of a few, and global society needs to find a balancing force to ensure that this power serves humanity as a whole, rather than merely amplifying existing inequalities.

Reference materials

http://www.euronews.com/next/2026/01/20/ai-at-davos-2026-from-work-to-useful-and-safe-ai-heres-what-the-tech-leaders-have-said

https://arabic.euronews.com/2026/01/20/ai-at-davos-2026-from-work-to-useful-and-safe-ai-heres-what-the-tech-leaders-have-said

https://www.letemps.ch/cyber/a-davos-les-geants-de-l-intelligence-artificielle-affirment-que-leurs-technologies-doivent-etre-adoptees-plus-vite-encore

https://fortune.com/2026/01/20/wef-davos-ai-how-to-scale/

https://time.com/7346588/davos-ai-potential-perils/