Originally published in Spanish by Derechos Digitales
The India AI Impact Summit 2026, held on 16-20 February, was heralded as an opportunity to strengthen the perspective of the Global South in discussions on artificial intelligence (AI) governance. The principle guiding the event was “Welfare for all, happiness of all,” a motto printed on hundreds of signs announcing the Summit across New Delhi, next to a photo of Prime Minister Narendra Modi, who was clearly using the meeting to project his own image.
Those of us who were able to participate in the discussions were left wondering what is actually meant by “welfare” and “happiness”, and who will effectively benefit from the advancement of AI. Despite invoking “people”, “planet” and “progress” as guiding principles (or sutras) for the event, the discussions were marked by a techno-optimistic narrative and a lack of diversity – of gender, sectors and perspectives – in an agenda carefully curated by the host government. That this tone prevailed throughout the summit is concerning enough, but more troubling still is the kind of outcomes it shaped: a model of international cooperation that promotes rapid deployment while deliberately sidestepping the question of limits, responsibilities and remedies.
While a number of principles for international cooperation on AI were proposed at the Summit, India secured investments from Big Tech companies. More importantly, in a geopolitical context marked by the securitisation of the technological race, the event opened a space for the signing of South-South agreements in the field of technology, as evidenced most notably by the visit of the president of Brazil and his ministers to India.
Despite the importance of such moves in the current scenario, the results appear limited to voluntary commitments and non-binding statements, in line with the model of previous summits. There was no mention of human rights and no room for regulatory discussion in spaces where coordinated state action in that direction could have carried greater weight. What prevailed instead was a language of “innovation”, “trust” and “capacities” that may sound neutral and positive, but which in practice obscures the debate on transparency, due diligence and obligations of reparation. As usual, the justice and rights agenda was left to civil society, which had little space and scarce visibility in the official event.
An unchecked race
This summit was the fourth edition of a series that began in England in 2023, with a mandate that has shifted with each passing year. But in spite of the expectations and the nationalist rhetoric surrounding it – including references to elements of Hinduism – this year’s edition, the first to be held in the Global South, was not at all different from the ones that came before it, as its Final Declaration shows. The venue and the narrative changed, but the formula remained the same: abstract commitments, accompanied by a marketplace of agreements and investments presented as inevitable. Beyond serving as a showcase for global Big Tech companies, which dominated the news coverage of the event, its leading outcome favours those companies’ agendas and businesses.
According to the declaration, “The choices that we make today will shape the AI-enabled world that future generations will inherit.” This statement leaves little room for discussion on the role that both today’s and tomorrow’s generations would like AI to have, despite growing evidence of the harm it can cause and the concerns over its social effects.
The intention here is not to combat technology or deny its potential benefits. On the contrary, we travelled from Latin America to India, along with dozens of other organisations from the Global South, with the aim of forging stronger coalitions and laying the groundwork for a just AI. In different parallel events, we stressed the importance of multilateral dialogues on AI governance and the urgent need to develop global mechanisms to help guarantee that AI systems meet human rights standards – in other words, that they serve people, the planet and progress. However, the distance between those debates and the Summit’s final declaration, adopted by 89 countries, is evident and alarming.
The text, organised according to the thematic working groups that guided discussions, reinforces the notion of the inevitability of AI and focuses on promoting it according to the Silicon Valley model, thus restricting the space for any discussion on the necessary limits to its development and deployment. An example of this is the defence of energy-efficient AI systems, which completely erases concerns over the social and environmental impacts of data centres.
As regards the democratisation of AI resources, the declaration mentions the Charter for the Democratic Diffusion of AI, also adopted at the Summit, but stresses its non-binding and voluntary nature. The Charter includes important guidance for international cooperation in areas such as digital inclusion, the representation of languages and contexts underrepresented in AI models, the promotion of openness through open and interoperable standards, and AI skills development, among others. However, its first goal stresses the enhancement of AI capacity building without any consideration of the environmental, labour and human rights impacts of AI.
The approach reflects national contexts where the technological sovereignty discourse is presented in a way that is contradictory and highly dependent on proprietary and foreign technological developments. The announcements of investments by large companies in India made during the Summit illustrate that narrative. Examples of the agreements reached at the event are the establishment of alliances with universities to offer AI models and assistants and the provision of training in the use of commercial tools and investments for government uptake of such technology. Paradoxically, sovereignty is invoked while at the same time furthering the adoption of models, assistants and tools whose design, control and conditions of use remain out of reach for the countries that incorporate them.
The idea of a technological sovereignty uncritical of the established model, and devoid of sustainability and a rights perspective, was also reflected in an exhibition space where local companies were placed alongside Big Tech to promote their businesses. AI systems for military use and surveillance were displayed next to “educational” versions of commercial generative AI models. Dissenting voices, or those advocating for limits to the use of such systems in these contexts, were excluded.
The road ahead
Despite these limitations, we must acknowledge the significance of the Final Declaration’s references to ongoing multilateral processes and to respect for national sovereignty. Both aspects are key in a text that managed to have both China and the United States as signatories.
On the one hand, this is a validation of a process furthered by the United Nations, with the mandate of moving forward in defining limits to AI uses that do not respect human rights. It should be highlighted that the agenda included different spaces for discussion with representatives of the Independent International Scientific Panel on AI and that it was attended by the Secretary General of the United Nations and its Office for Digital and Emerging Technologies (ODET). On the other hand, a margin is rightly allowed for national discussions that can better reflect the contexts and priorities of each country and where other sectors can act as a counterweight. That margin will be decisive wherever there are institutions capable of resisting the combined pressure of the discourse of securitisation, the national interests of certain countries and commercial interests that seek to present AI as an obligation with no other alternative.
Such gains, it must be said, are also currently constrained by an international arena marked by the rise of authoritarianism and the dismantling of multilateral institutions, particularly in the field of human rights. However, they signal that there is still no settled agreement on the role AI will play in the future of coming generations.
Equally important is the recognition of multistakeholder participation in debates on the subject. It is crucial that this not translate into giving even more space to Big Tech operators, but instead be channelled towards meaningful engagement of civil society, in line with what was agreed under the São Paulo Declaration.
In this sense, the Summit provided an opportunity for coordination among different academic and civil society actors from the Global South, with a view to the coming debate. In a context in which conversations are still mostly restricted to a few actors from the North, such coordination is critical for advancing towards standards that will enable a future in which AI is effectively at the service of justice, equity and the enjoyment of rights.
If the AI race continues unchecked, traffic rules will need to be defined urgently, before the impact sends us crashing into a wall.