New Delhi Summit: Restructuring the Five-Year Roadmap and Global Governance Mandate
19/02/2026
New Delhi Summit: Discussing the Future of Artificial General Intelligence
On February 18, Demis Hassabis, CEO of Google DeepMind, delivered a speech at the AI Impact Summit 2026 held in New Delhi, India. Addressing policymakers, technology leaders, and scholars from 110 countries, he said that Artificial General Intelligence (AGI) could become a reality within 5 to 8 years, with an impact that may surpass that of the Industrial Revolution. Hassabis previously led the team that developed AlphaFold, which solved the protein structure prediction challenge. During the five-day AI governance summit, the first of its kind held in the Global South, he elaborated on the technological prospects, potential risks, and social transformations that AGI may bring.
Technology Path: From the "Einstein Test" to Autonomous Scientific Discovery
Hassabis proposed an evaluation concept named the Einstein Test at a technology forum. This test sets the training data cutoff for artificial intelligence at 1911, then observes whether the system can autonomously derive the general theory of relativity proposed by Einstein in 1915. Hassabis explained to over 500 technology industry leaders present that the key lies in innovation capability, rather than repeating existing knowledge. He believes that current large language models are more like high-speed encyclopedias, capable of solving existing problems but struggling to propose equally significant new scientific hypotheses.
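To make the idea concrete, the sketch below illustrates only the data-cutoff mechanism behind such a test: an archive is filtered so that nothing published after 1911 reaches the model, while the 1915 result is held out as the evaluation target. The mini-corpus, the date on the placeholder entry, and the document schema are illustrative, not DeepMind's benchmark.

```python
from datetime import date

# Illustrative mini-corpus; a real test would draw on a large scientific archive.
corpus = [
    {"title": "On the Electrodynamics of Moving Bodies", "published": date(1905, 9, 26)},
    {"title": "Observations of Mercury's perihelion",    "published": date(1910, 3, 1)},  # placeholder entry
    {"title": "The Field Equations of Gravitation",      "published": date(1915, 11, 25)},
]

CUTOFF = date(1911, 12, 31)  # the model may only see material available by 1911

training_set = [doc for doc in corpus if doc["published"] <= CUTOFF]
held_out     = [doc for doc in corpus if doc["published"] >  CUTOFF]

print("train on:     ", [d["title"] for d in training_set])
print("evaluate with:", [d["title"] for d in held_out])
# Passing the test would mean a system trained only on training_set
# independently arrives at the physics captured in the held-out 1915 paper.
```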
According to DeepMind's internal technology roadmap, achieving this breakthrough requires integrating several capabilities: AlphaGo-style long-term planning, the large-scale data processing capacity of modern foundation models, and world-model construction techniques of the kind used in Gemini, ultimately yielding a system capable of continuous learning. Hassabis revealed that the team is developing a new generation of architecture that lets AI learn from real-world experience, adapt to different scenarios, and be tuned for specific tasks, capabilities that today's mainstream train-then-freeze paradigm lacks.
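As a purely schematic illustration (not DeepMind's architecture; all class and method names below are hypothetical), the sketch shows how those ingredients, search-based planning, a learned world model, and continual updates from real experience, might be composed in a single agent loop:

```python
# Schematic agent loop combining planning over a learned world model with
# continual updates from real experience. Interfaces are hypothetical.

class ContinualAgent:
    def __init__(self, world_model, planner, memory):
        self.world_model = world_model  # predicts outcomes of candidate actions
        self.planner = planner          # searches over action sequences (AlphaGo-style)
        self.memory = memory            # stores real-world experience for later updates

    def act(self, observation):
        # Plan by simulating candidate action sequences inside the world model,
        # rather than reacting with a single frozen forward pass.
        return self.planner.search(self.world_model, observation)

    def update(self, observation, action, outcome):
        # Unlike a train-then-freeze model, keep learning after deployment:
        # refine the world model on what actually happened.
        self.memory.store(observation, action, outcome)
        self.world_model.fit(self.memory.sample_batch())
```

The contrast with the frozen paradigm sits in update: the agent keeps refining its world model after deployment instead of relying on fixed weights.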
Long-term planning is another key challenge. Hassabis pointed out that while existing AI systems can make coherent decisions in the short term, they cannot formulate strategic plans spanning months or even years the way humans can. He used climate change research as an example: what is needed are systems capable of simulating carbon-reduction pathways over fifty years, not merely predicting next week's weather. DeepMind's experiments in materials science and drug discovery have shown that when AI is given a longer planning horizon, the probability that it proposes innovative solutions roughly triples.
Security Challenges: From System Instability to Biosecurity and Cyber Risks
Hassabis warned at the cybersecurity symposium that as AI capabilities advance, biosecurity and cybersecurity risks are becoming increasingly urgent. He cited data from DeepMind's internal red team exercises: the latest large language models can autonomously discover and exploit seven common enterprise software vulnerabilities in simulated cyberattack tests, whereas two years ago this number was zero.
He used the term "jagged intelligence" to describe the instability of current AI systems: the same model can win a gold medal at the International Mathematical Olympiad, yet make mistakes on basic arithmetic. This inconsistency could have serious consequences in safety-critical fields such as autonomous driving or medical diagnosis. DeepMind's safety team found that across 1,000 stress tests, top AI models had a 12% probability of suddenly failing on seemingly simple reasoning tasks. This unpredictability is more concerning than systematic errors.
The situation in biosecurity is equally noteworthy. Hassabis revealed that some open-source protein folding models have been adapted to design novel biomolecules, although this is still at an early stage. He called for the establishment of a global biological AI monitoring network: "We need to ensure that defense always stays one step ahead of offense, which requires cross-border technology sharing and ethical constraints." The "Global AI Governance Framework Draft" released by India's Ministry of Electronics and Information Technology during the summit already includes provisions for biological AI safety assessments, indicating that this concern is becoming a consensus among policymakers.
Global Governance: The New Delhi Summit and the International Order
The India AI Impact Summit 2026 is the first large-scale AI governance conference hosted in the Global South, with participation from over 20 heads of state, 45 ministerial-level officials, and representatives of 30 international organizations. At the opening ceremony, Indian Prime Minister Modi said: "Some people fear AI, but India sees the future in it." This stance complements the AI Safety Summit series led by Europe and the United States, placing greater emphasis on AI development and application rather than solely on risk control.
At the closed-door policy roundtable, Hassabis proposed a dual governance pathway: robustness standards and alignment mechanisms at the technical level, and a minimum set of global norms at the societal level. He specifically mentioned India's MANAV vision, an AI-based real-time sign language translation system that interpreted Modi's speech live at the summit. Reported data show that the system's vocabulary recognition accuracy has reached 94%, an improvement of 11 percentage points over pre-summit testing.
The summit also revealed some governance divergences. French President Macron emphasized human-centered technology via video link, while multiple African representatives called for data sovereignty to be included in the core agenda. Hassabis noted that delegates from the Global South were more focused on how AI can address localized challenges—such as flood forecasting in Sri Lanka or crop disease diagnosis in Kenya—rather than abstract existential risk discussions. This divergence was reflected in the final version of the New Delhi Principles, which dedicated one-third of its content to outlining specific pathways for AI to advance the Sustainable Development Goals.
Social Impact: Beyond Economic Growth
When asked about the historical positioning of AGI, Hassabis offered a quantitative comparison: the Industrial Revolution lifted global per capita GDP growth from near stagnation to around 1.5% per year, while preliminary simulations suggest that AGI-driven scientific discovery could push global economic growth into the 4-7% range. He emphasized, however, that this is not merely a matter of economic statistics, but an expansion of the boundaries of human cognition.
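A back-of-the-envelope compounding calculation, using only the growth rates quoted above and assuming they were sustained, shows why the gap matters far more than the headline percentages suggest:

```python
# Compare cumulative output under sustained annual growth rates,
# using the figures quoted above (1.5% vs. 4-7%); purely illustrative.

def cumulative_growth(annual_rate: float, years: int) -> float:
    """Output multiple after compounding annual_rate for the given number of years."""
    return (1 + annual_rate) ** years

for years in (10, 25, 50):
    industrial = cumulative_growth(0.015, years)  # Industrial Revolution era rate
    agi_low = cumulative_growth(0.04, years)      # low end of the quoted AGI range
    agi_high = cumulative_growth(0.07, years)     # high end of the quoted AGI range
    print(f"{years:>2} years: 1.5% -> x{industrial:.2f}, "
          f"4% -> x{agi_low:.2f}, 7% -> x{agi_high:.2f}")
```

At a sustained 7%, output roughly doubles every decade; at 1.5%, a doubling takes closer to half a century.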
DeepMind's internal research report highlights three dimensions of transformation. In science, interdisciplinary research stands to gain the most: AI can identify connections between fields that humans find hard to detect, and cross-disciplinary studies in materials science and genetics have already generated 17 new patents. In healthcare, continuously learning AI systems can track disease progression on a personalized basis, with early trials showing an average 60% reduction in diagnosis time for rare diseases. In climate action, autonomously optimized carbon capture material design has compressed the laboratory R&D cycle from five years to eight months.
Hassabis also cautioned that this acceleration may bring intense pains of social adaptation. The Industrial Revolution took three generations to complete its adjustment of social structures, whereas the impact of AGI could fully manifest within a decade. He cited the International Labour Organization's 2025 prediction that 40% of global work content will be restructured by 2030, with demand in sectors such as education and healthcare potentially increasing by 35%, while roles in administration and data entry may shrink by 20%. This structural change calls for an unprecedented lifelong learning system and social safety net, and only a few countries, such as Finland and Singapore, have begun systematic preparations.
At the conclusion of the summit, Hassabis joined a working dinner with the India AI Mission team. The menu featured a Sanskrit motto, Sarvajana Hitaya, Sarvajana Sukhaya: for the welfare of all, for the happiness of all. The phrase was also engraved on the walls of the summit's main venue. Perhaps more than any technical roadmap, it captures the collective sentiment of humanity standing at the threshold of AGI: anticipation of transformation, carried alongside the search for balance.