In the latest edition of poliTICS, a publication produced by APC member Instituto Nupef, Carlos Afonso explores the "curious contradiction" between the fear of an AI-caused extinction and the uncontainable desire to gain fame and money from its spectacular and frightening advances.
This edition of GenderIT.org came together at a time of daily breaking news around artificial intelligence and the risks it poses. In the MENA region, these problems are compounded by a litany of daily struggles, the most devastating of these being occupation, war, conflict and displacement.
Technology is constantly evolving, with new advances in areas like artificial intelligence, which, on the one hand, promises to make tasks easier for humans but, on the other, reinforces harmful societal stereotypes. This piece explores the portrayal of gender through AI in pop culture and chatbots.
Content moderators and AI trainers spend hours making the internet safer for others, while constantly struggling with the serious repercussions of being exposed to disturbing content without any support or fair compensation from the tech companies they work for.
For decades, science fiction has alarmed us with the idea that AI will become much smarter than us and take control. Our columnist unpacks the issues of AI's uncertainty, the common good, regulation and governance.
APC believes it is imperative to place human rights, social justice and sustainable development at the centre of all stages of AI systems, including their creation, development, implementation and governance, and that potential risks should be continually assessed and managed.
New applications like ChatGPT, based on AI and large language models, are likely to be transformative: a step change in technology like the internet was 30 years ago, but much faster. The technology is now out of the bottle and cannot be uninvented, and we should move swiftly to figure out its implications, deployment and governance.
This submission was produced in response to the call for contributions to the thematic report of the Office of the High Commissioner for Human Rights, and comprises inputs collated from the experiences of APC staff and members, organised according to the guiding questions proposed in the call.
The design and development of digital language technologies, especially those relying on large language models like ChatGPT, call for a deep power analysis of who is building this technology, who will benefit from it and who will decide its future.
A Google engineer claimed that an AI bot had become sentient. Is that a possibility? Can a program designed and trained by a human develop the ability to feel human-like feelings? Priyadarshini John explores.