Joint statement: Finding the good in the first UN General Assembly resolution on artificial intelligence

Photo: Ars Electronica, used under CC BY-NC-ND 2.0 licence (https://flic.kr/p/WMNtZC)
Author: Various

We, the undersigned civil society organizations, welcome the United States-led United Nations (UN) General Assembly resolution “Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development.” Stakeholders far from UN grounds benefit when states clarify their positions on new and emerging technologies and on how international law, including international human rights law, and sustainable development commitments apply to fields like artificial intelligence. This is especially true amid all the hype, murky definitions, and self-interested boosters surrounding artificial intelligence.

We strongly commend the resolution’s operative language calling on states and, importantly, other stakeholders to “refrain from or cease the use of artificial intelligence systems that are impossible to operate in compliance with international human rights law or that pose undue risks to the enjoyment of human rights, especially of those who are in vulnerable situations.” Civil society notes that such messaging has been “an uphill battle,” and this unanimous resolution should therefore be viewed as a positive step in the right direction. We urge states to operationalize this recommendation and to take heed of this consensus call in upcoming negotiations, particularly the UN Global Digital Compact (GDC), whose zero draft was released just last week with an objective to foster digital cooperation and improve governance of emerging technologies, including artificial intelligence, as well as other standard-setting initiatives. We further urge states to press forward with such calls, including bans on certain technologies, such as so-called emotion recognition and gender detection technologies, which by design fail to respect human dignity and infringe human rights.

We were pleased to see language on trustworthy artificial intelligence and human rights referenced consistently throughout the text. The long list of technical, regulatory, and educational measures promoted offers a useful menu of options for states and companies as they clamor for ways to prevent harm across the lifecycle of artificial intelligence systems, ranging from design-stage risk and impact assessments to post-deployment feedback mechanisms. Moreover, the acknowledgment of the need to engage and enable the participation of all communities, particularly those from developing countries, should be carried forward into practical implementation.

Nonetheless, we have concerns with the resolution:

  • First, the paragraphs calling for closing the digital divide are framed simply as requiring “stronger partnerships,” when further commitments, including on funding, are urgent. The details of implementation will be key; unfortunately, they are not elaborated in the resolution and are not currently addressed in the interim report of the UN High-Level Advisory Body on Artificial Intelligence.

  • Second, we are deeply concerned with how the resolution differentiates between military and non-military/civil domains, for several reasons: (1) blanket national security and military exemptions are not consistent with international law; (2) language on safe, secure, and trustworthy artificial intelligence should apply to military applications as much as, if not more than, civilian uses; and (3) there is no clear standard for drawing the distinction, especially for dual-use AI systems. This is not hypothetical: the EU AI Act creates dangerous loopholes for the use of AI by law enforcement, migration control, and national security authorities, and the UN needs to set higher standards.

  • Third, the framing of AI governance proposed does not reflect a true multistakeholder model and could be stronger on meaningful participation and inclusivity, particularly for civil society, vulnerable and marginalized groups, and local and indigenous communities in decisions related to AI that affect them. For instance, operative paragraph six uses language that hedges against the call for “inclusive and equitable” participation.

  • Fourth, the resolution echoes other global conversations on artificial intelligence by leaning heavily on techno-solutionism to address the UN Sustainable Development Goals (SDGs) in situations where a lack of political will and cooperation is the real barrier to progress. Without sustained and powerful global cooperation to address challenges like climate change, artificial intelligence cannot achieve the goals set out in the resolution.

  • Finally, while the resolution focuses on the impact of some AI technologies on socio-economic and environmental progress towards meeting the SDGs, it regrettably does not sufficiently address another integral dimension of the 2030 Agenda for Sustainable Development: the protection of human rights. While human rights implications are briefly mentioned in other sections, it is crucial that they be emphasized in relation to the SDGs.

The challenge of future negotiations on AI governance will be to advance sustainable development together with rights-respecting digital transformation, while remaining diligent in asserting the centrality of human rights and security, norms that all states have agreed to uphold. We encourage states to work with civil society and other stakeholders to develop an AI resolution in the context of the UN human rights system, complementing this sustainable development-focused text. Further, the resolution was developed not through the General Assembly’s usual committee-based and collaborative processes but in plenary only; we strongly encourage any future iterations of this resolution to depart from this go-it-alone approach.

Commitments will need to be made regarding the accountability of private business enterprises that operate at the global level and often evade responsibility in certain countries. Additional standards and mechanisms will also be necessary to prevent the circulation of technology that does not comply with human rights. Labor and environmental considerations should be a key part of such efforts and of the evaluation of compliance by such business actors.

Civil society notes that while the resolution is non-binding and does not include an enforcement mechanism, it is nonetheless noteworthy for threading the needle: its unanimous adoption gives states a text they can use to work towards establishing global guardrails on artificial intelligence. We therefore call on all stakeholders, particularly states, to use this resolution in conjunction with other relevant resolutions and UN initiatives that home in on the human rights impacts of artificial intelligence in upcoming discussions, especially negotiations regarding the UN GDC. A good starting point is to consult existing UN resources, including: (1) the recommendations of the UN Special Rapporteurs who have focused on the human rights implications of artificial intelligence from their respective mandates; (2) resolutions such as “Promotion and protection of human rights in the context of digital technologies,” first introduced at the end of last year, and the biennial “Right to privacy in the digital age” resolution; and (3) the ongoing work of the UN High Commissioner for Human Rights, including his statements on generative AI in the context of his Silicon Valley visit and at the Office of the High Commissioner for Human Rights B-Tech Generative AI and Human Rights Summit, and the B-Tech generative AI project’s continuing work.

Signatories 

Access Now

ARTICLE 19

Association for Progressive Communications (APC)

Derechos Digitales

Digital Action

European Center for Not-For-Profit Law Stichting

Global Partners Digital

International Center for Not-for-Profit Law

Privacy International

 
