Uncovering AI’s Bias in Language Prompts and Generation of African Images Using ChatGPT and Midjourney

Abstract

This research critiques the application of AI in African beauty representation by examining biases in language prompts and AI-generated images. It investigates how AI models, specifically ChatGPT and Midjourney, interpret and generate African identities based on textual prompts (Hu, 2023; Shin et al., 2024). The study is divided into two parts: First, it explores how AI processes demographic cues through prompt engineering. By analysing AI’s responses to varied language inputs, the research reveals how stereotypes are embedded and reinforced within these systems. It also examines how linguistic nuances influence AI’s assumptions about African beauty and identity. Second, it assesses AI’s accuracy in generating images of Black individuals, focusing on features such as 4C hair, facial structures and body types. The study highlights inconsistencies in AI-generated representations, shedding light on the technology’s limitations in reflecting the diversity of African beauty. Findings suggest that AI models rely heavily on pre-existing datasets that may lack authentic African representation, leading to stereotypical and sometimes inaccurate depictions (NPR, 2023; Brookings Institution, 2024; Vellum Kenya, 2025). This research contributes to discussions on AI ethics, representation and the need for more inclusive datasets in machine learning models.

Keywords: AI bias, African beauty, representation, image generation, prompt engineering

Introduction

Artificial Intelligence (AI) models like ChatGPT and Midjourney are increasingly used to create cultural representations, yet they often carry biases that misrepresent African identities. This paper examines how these models respond to language prompts and generate images related to African beauty, with the objective of exposing embedded stereotypes and inaccuracies. Although previous studies have documented AI’s racial and gender biases, few have focused specifically on African contexts, leaving a gap this research addresses (Vellum Kenya, 2025; Edoigiawerie, 2024). Using prompt engineering to analyse ChatGPT’s text outputs and evaluating Midjourney’s generated images for features such as 4C hair, facial structures and body types, the study reveals how AI often defaults to narrow or exoticised portrayals. Findings suggest that these biases stem from reliance on non-diverse datasets, leading to limited and sometimes inaccurate representations of African beauty (Hall et al., 2024; Edoigiawerie, 2024). Positioned at the intersection of AI technology, cultural studies and social science, this research highlights the urgent need for more inclusive AI practices and datasets. Its relevance to Nigerian contexts lies in the need for AI technologies that accurately reflect local identities, languages and beauty standards, rather than recycling Western-centric stereotypes. By exposing these biases, this research advocates for the development of more inclusive datasets and ethical AI practices that are attuned to Nigeria’s cultural realities and aspirations.

Methodology

This research was conducted in two parts. In the first phase, I employed prompt engineering, designing a set of structured questions to guide the creation of language prompts that foreground African beauty characteristics. I then analysed the AI-generated responses from ChatGPT to observe how demographic cues influenced the system’s assumptions and descriptions. The second phase, involving image generation using Midjourney, has not yet been completed due to time constraints; it will be incorporated into the final research to provide a fuller picture of visual biases. Throughout the process, I observed patterns of bias, stereotyping and cultural misrepresentation in the AI outputs (Srinivasan et al., 2023; Brookings Institution, 2024). Ethical considerations were central to this study, including concerns about the reinforcement of biased narratives, the erasure or distortion of cultural identities and the need for heightened cultural sensitivity when using AI systems to represent marginalised groups; these are discussed further in the next section.
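The sketch below illustrates, in broad strokes, how the prompt-engineering phase could be scripted so that responses to paired neutral and demographically cued prompts are collected side by side. It is a minimal sketch, not the study’s actual procedure: it assumes access to the OpenAI Python SDK and an API key, and the prompt templates, model name and output file are hypothetical examples chosen for illustration.

```python
# Illustrative sketch of the prompt-engineering phase (not the study's exact script).
# Assumes the OpenAI Python SDK (openai>=1.0) and an OPENAI_API_KEY in the environment;
# the prompt templates, model name and output file below are hypothetical examples.
import csv
from openai import OpenAI

client = OpenAI()

# Each template pairs a neutral framing with an explicit demographic cue,
# so responses can later be compared for shifts in assumptions and descriptors.
templates = [
    "Describe a beautiful woman.",
    "Describe a beautiful Nigerian woman.",
    "Describe a woman with 4C hair getting ready for a celebration.",
]

with open("chatgpt_responses.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["prompt", "response"])
    for prompt in templates:
        reply = client.chat.completions.create(
            model="gpt-4o",  # assumed model; any ChatGPT-family model would do
            messages=[{"role": "user", "content": prompt}],
        )
        writer.writerow([prompt, reply.choices[0].message.content])
```

Logging prompt and response pairs in a single file keeps the comparison between neutral and cued prompts transparent and makes the subsequent qualitative reading of the outputs easier to audit.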

Ethical Considerations

Ethical considerations in this study included concerns about bias, misrepresentation and cultural sensitivity. Since the research focused on analysing AI responses rather than involving human participants, data privacy and consent were not major issues. However, it was important to note how AI systems can reinforce stereotypes and overlook the range of African identities (NPR, 2023; Vellum Kenya, 2025). I also considered how AI models often reflect ideas shaped by Western perspectives, which can limit how African cultures are seen. Finally, I reflected on resource limitations, recognising that Nigeria and other low-resource settings need more support to build AI tools that represent their own realities.

Findings

The findings from the first part of the study show that AI systems like ChatGPT often rely on narrow ideas when describing African beauty. When given prompts based on specific African features, ChatGPT sometimes responded with stereotypes or incomplete descriptions. For example, traits like 4C hair or wider facial features were either described vaguely or reframed in terms of Western beauty standards (Brookings Institution, 2024; NPR, 2023). These results point to a major limitation in how AI models are trained: their data often lacks a full range of African examples. Although the second part of the research, which involves image generation through Midjourney, is still in progress, early observations suggest similar issues of misrepresentation. No completely unexpected results were recorded, but the bias ran deeper than anticipated.
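As a rough complement to the qualitative reading, collected responses could be screened for how often they mention the African features that were prompted versus Western-coded descriptors. The sketch below is only an assumption about one way to do this: the keyword lists are hypothetical, far coarser than proper qualitative coding, and it presumes responses were saved in the `chatgpt_responses.csv` format used in the earlier sketch.

```python
# Illustrative sketch: tally mentions of prompted African features versus
# Western-coded descriptors in the collected responses. The keyword lists are
# hypothetical and no substitute for careful qualitative coding.
import csv
from collections import Counter

african_features = ["4c hair", "coily", "kinky", "broad nose", "fuller lips"]
western_coded = ["straight hair", "silky hair", "slim nose", "fair skin", "wavy hair"]

counts = Counter()
with open("chatgpt_responses.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        text = row["response"].lower()
        counts["african_feature_mentions"] += sum(term in text for term in african_features)
        counts["western_coded_mentions"] += sum(term in text for term in western_coded)

print(counts)  # a rough signal of which descriptors dominate the outputs
```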

Interdisciplinary Implications

This research is intended primarily for use in educational settings, encouraging critical thinking about AI and its impact on cultural representation. By exploring how AI systems represent African beauty, this work bridges technology and the humanities, showing how machine learning can shape or distort identity. The findings are particularly relevant to Nigeria and Africa more broadly, where there is a need for more inclusive AI tools (Hall et al., 2024; Edoigiawerie, 2024). The research can also inform policy by advocating for better data standards and promoting community awareness of the biases embedded in these technologies.

Conclusion

The research highlights significant biases in AI systems when representing African beauty, showing how these technologies often reinforce stereotypes due to limited and unrepresentative datasets. Theoretically, the findings suggest that AI models need to be re-evaluated for cultural sensitivity; practically, the work calls for more inclusive training data. The findings stress the importance of developing AI systems that respect and accurately reflect African identities. It is recommended that AI developers prioritise diverse datasets and collaborate with local communities to ensure more accurate and fair representations (Edoigiawerie, 2024; Hall et al., 2024).

Acknowledgements

Big thanks to the Research Round team for organising this fellowship, from Habeeb Kolade to Ololade Faniyi and Khadijat Alade. A big thank you to my mentors: Paschal Ukpaka and Frank Onuh for their insightful comments. I’m also thankful to my co-fellows who made this experience one I’ll never forget. I look forward to us doing big things in the future.

References

Brookings Institution. (2024). Rendering misrepresentation: Diversity failures in AI image generation. Brookings Review. https://www.brookings.edu/articles/rendering-misrepresentation-diversity-failures-in-ai-image-generation/

Edoigiawerie, O. (2024). Africa’s role in generating indigenous content to shape AI narrative, address algorithm bias. ThisNigeria. https://thisnigeria.com/africas-role-in-generating-indigenous-content-to-shape-ai-narrative-address-algorithm-bias/

Hall, M., Bell, S. J., Ross, C., Williams, A., Drozdzal, M., & Soriano, A. R. (2024). Towards geographic inclusion in the evaluation of text-to-image models. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’24) (pp. 1–17). ACM. https://doi.org/10.1145/3630106.3658927

Hu, J. (2023). Bias in Midjourney — It’s not just the representation, it’s the art direction. Medium. https://medium.com/%40hujason/race-and-gender-bias-in-midjourney-c43e92f515f

NPR. (2023, October 6). AI was asked for images of Black African docs treating White kids. How’d it go? NPR Goats and Soda. https://www.npr.org/sections/goatsandsoda/2023/10/06/1201840678/ai-was-asked-to-create-images-of-black-african-docs-treating-white-kids-howd-it-

Shin, P. W., Ahn, J. J., Yin, W., Sampson, J., & Narayanan, V. (2024). Can prompt modifiers control bias? A comparative analysis of text-to-image generative models. ResearchGate. https://www.researchgate.net/publication/381308275_Can_Prompt_Modifiers_Control_Bias_A_Comparative_Analysis_of_Text-to-Image_Generative_Models

Srinivasan, K., et al. (2023). Easily accessible text-to-image generation amplifies demographic stereotypes at large scale. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’23). ACM. https://dl.acm.org/doi/10.1145/3593013.3594095

Vellum Kenya. (2025). Uncovering the contextual bias in AI: You still need your camera for realistic African images. Vellum Kenya. https://vellum.co.ke/uncovering-the-contextual-bias-in-ai-you-still-need-your-camera-for-realistic-african-images/