The socio-ethical challenges of Generative AI

Dr Atoosa Kasirzadeh examines the opportunities, issues and ethics of generative AI.

Dr Atoosa Kasirzadeh, Chancellor’s Fellow, School of Philosophy, Psychology and Language Sciences, Centre for Technomoral Futures, Edinburgh Futures Institute, The University of Edinburgh


Generative artificial intelligence refers to machine-learning-powered technologies – trained on billions of images or text documents, often scraped from the internet – that can produce impressive new images or texts in response to human prompts. One example is Stable Diffusion, an open-source generative system that turns written phrases into novel images. You can input a short text prompt, such as “a gentleman otter in a 19th century portrait”, and it takes only a few seconds for Stable Diffusion to process your input and output a novel image.

Another example is GPT-3, a proprietary generative language model. GPT-3 can write an essay from scratch if you ask it to “convince us robots come in peace”; the result is remarkable.

The opportunities that generative AI introduces are enormous. For example, text-to-image generative AI can help designers and architects envision surreal and experimental ideas for new furniture, websites, or buildings. These technologies can empower those in the media and entertainment industries to improve content creation or restoration. Anyone can use open-source language-generation technologies to produce all kinds of text outputs: essays, blog posts, op-eds, and even computer code.

Stable Diffusion image generated by the input command: “a gentleman otter in a 19th century portrait”

Some AI research labs are pushing the capacities of these technologies further by developing text-to-video generative systems. You might type “make a video of a cat riding a unicorn in an ocean”, and generative AI might be able to create a video based on this text. As some say, generative AI unleashes the power of creativity for everyone; it democratises creativity. However, awe at this supercharged creative potential should not override the complicated ethical, social, and legal issues surrounding these systems.

Generative AI is trained on the creative works of millions of artists, writers, and creators, many of which are copyrighted. Hence, the novel mimicry capacities of generative AI are obtained by laundering the outputs of human creativity at scale. How, then, can legislative mechanisms protect artists’ copyrighted creative works in the presence of open-source and commercialised generative AI? How should pre-generative-AI copyright laws be amended? Will human artists ever be compensated for the inclusion of their works in the training of generative AI systems? These are thorny questions that have puzzled artists, legislators, legal scholars, and policymakers across the globe.

Beyond copycatting concerns, deep philosophical questions arise from the embedding of these technologies in our lives. In the presence of generative AI, what would it mean for a creative human to play a unique artistic role in expressing dimensions of the human condition? Would these technologies generate works that undermine human artists’ creative capacity in a thriving area of commercial art?


Furthermore, serious epistemological and political concerns stem from the potential negative impacts of these technologies on human knowledge and democratic discourse. The capability of generative AI to produce hyper-realistic images, texts, or videos can be misused by misinformation campaigns. Various reports document the role of generative AI on social media platforms in amplifying false information. Previous research has shown that misleading or incorrect information can spread up to six times faster than correct information and can reach more people. Online misinformation can deepen political polarisation and contribute to distrust in legitimate governments. Exploring socio-technical solutions to minimise these potential harms of generative technologies remains work in progress.

At the Centre for Technomoral Futures, I write and think about these questions in collaboration with my students and colleagues across different schools. Tackling these questions requires transdisciplinary research. I am excited to see how our research can contribute to the social and moral flourishing of human experience with generative AI.


Dr Atoosa Kasirzadeh is a Chancellor’s Fellow in the School of Philosophy, Psychology and Language Sciences, Centre for Technomoral Futures, Edinburgh Futures Institute, The University of Edinburgh.

This article originally appeared in ReSourcE Winter 2022.

The RSE’s blog series offers personal views on a variety of issues. These views are not those of the RSE and are intended to offer different perspectives on a range of current issues.
