

Algorithmic Bias
As an increasingly pivotal feature of contemporary art, artificial intelligence (AI) has begun to challenge our perception and understanding of creative expression. As we engage with this new frontier of creativity, it is essential to bring the concept of algorithmic bias into the conversation.
Algorithmic bias refers to systematic and repeatable errors in a computer system that create unfair outcomes, such as privileging one arbitrary group of users over others. This bias is rarely a deliberate property of the AI system itself; rather, it reflects the biases present in the data used to train the system. It is the shadow of human prejudice, unintentionally imprinted onto the algorithms through the input data.
In the context of AI-generated art, algorithmic bias can subtly influence the creative output in ways that might be limiting or exclusive. For instance, an AI art generator trained predominantly on Western art might systematically under-represent the richness and diversity of African, Asian, Indigenous, or other non-Western artistic traditions. The color palettes, the themes, the styles – all might bear the indelible imprint of a specific cultural bias.
Furthermore, the ways AI might interpret and represent genders, races, or social classes can also be a reflection of algorithmic bias, potentially perpetuating harmful stereotypes or erasing certain groups from the creative landscape altogether.
It is essential to acknowledge and confront these biases. In doing so, we can work towards creating AI systems that generate art reflective of a wider range of human experiences, styles, and perspectives, and truly push the boundaries of this exciting, novel form of artistic expression.
As we continue to explore the myriad potentials of AI in art, it is incumbent on us – the creators, the curators, the consumers – to ensure that our understanding of algorithmic bias informs our critique, our appreciation, and our continued evolution of AI-generated art. While we only scratch the surface of this complex issue here, this understanding can empower us to demand more inclusive, diverse, and representative AI art, broadening our horizons of what creativity can look like in the age of AI.
Adoption Consultation - I would like to draw attention to one of the critical concerns associated with the growing utilisation of AI: algorithmic bias. This exhibition offers a glimpse into the intricate issue of bias, although it only touches upon its surface. It is nonetheless intriguing to observe the notable progress the experimental model has achieved in how it represents humans in the images it generates. We asked DALL-E to create a depiction of a couple during an adoption meeting. The image generated by DALL-E in 2022 appears to portray the couple as white and heterosexual. By highlighting these aspects, we aim to initiate a thoughtful discussion about the limitations and biases inherent in AI systems. Think about the complex dynamics of algorithmic bias and how technology categorises us based on perceptions of race, gender, and relationships. Prompt: A couple talking to a social worker about adoption options 4k photo. Model: DALL-E 2.0 (2022)
Adoption Consultation - Exploring the capabilities of the new model makes it evident that challenges still exist in achieving diversity, particularly when the concept of "family" is involved. In this particular image, however, there is potential for interpretation as a same-sex couple engaging in a conversation with a social worker. Identifying the social worker among the three figures, though, presents some difficulty. Nevertheless, a positive development can be observed in terms of racial diversity. This image is just one of several produced with the same prompt, all of which portray a broad range of individuals. It remains uncertain whether this improved diversity is the result of enhanced training data or of the system automatically appending additional text to the prompt to encourage more diverse outputs. Prompt: A couple talking to a social worker about adoption options 4k photo. Model: DALL-E 2.exp (2023)
Nurse - We wanted to explore algorithmic bias when prompting for images of people in particular careers. Here we chose nurse, a career that is often gendered. To help track unconscious bias in humans, you can ask for a name to be added, which is what we asked DALL-E to do. Surprisingly, the AI model was able to create coherent text, but it was not imaginative, using just a word featured in the prompt. Prompt: a nurse opening a box with their name on it. Model: DALL-E 2.0 (2022)
Nurse - Whilst racial diversity has improved in the updated model, many careers still exhibit traditional gender biases in the generated images; nurse is one, but so are midwife, engineer, CEO and footballer. As with the last pair of images, it is unclear whether the increased diversity is caused by better training data or by the inclusion of added text. You can investigate this further yourself by including specific text within the prompt, such as "a person wearing a t-shirt that says". The resulting output may provide valuable insight into whether DALL-E is augmenting the input with supplementary text; a sketch of this probe follows below. We must consider the ethical implications and responsibilities associated with the development and application of AI in the realm of art and beyond. What happens when AI becomes part of the job recruitment process in the future and algorithmic bias is still prevalent? Prompt: a nurse opening a box with their name on it. Model: DALL-E 2.exp (2023)
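For readers who want to try this probe themselves, the sketch below shows one way to run it against OpenAI's public image API. It is a minimal sketch, assuming the openai Python package (v1 or later) and an OPENAI_API_KEY environment variable; the public "dall-e-2" model identifier stands in for the experimental build shown in this exhibition, which is not publicly available.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The deliberately unfinished prompt: if the system silently appends
# diversity-related text, that text may surface on the rendered t-shirt.
response = client.images.generate(
    model="dall-e-2",
    prompt="a person wearing a t-shirt that says",
    n=4,               # request several samples, as any appended text may vary
    size="1024x1024",
)

# Print the hosted URLs so the generated t-shirts can be inspected by eye.
for image in response.data:
    print(image.url)

Recurring words on the shirts that never appeared in the prompt would suggest prompt augmentation rather than improved training data, though a null result does not rule augmentation out.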
