ABSTRACT:
This article explores how contemporary text-to-image (T2I) systems routinely minimise or “correct” aquiline noses in AI-generated images, a phenomenon the authors term “non-consensual rhinoplasty”. Despite explicit prompts for pronounced nasal features, many models systematically smooth out dorsal humps, with 92% of generated images displaying a non-convex (straight or concave) profile. Situating these findings in a broader cultural and historical context, the article examines how entrenched beauty standards and physiognomic biases shape both AI training data and societal perceptions. It highlights how content moderation, algorithmic “beautification”, and dataset limitations further erase natural variation. To address this bias, the article proposes solutions such as community-led awareness campaigns, petitions for greater transparency in AI development, and technical refinements like prompt sliders for nasal prominence. By outlining these strategies, it advocates for AI innovation that prioritises cultural sensitivity and equitable representation.
KEY WORDS:
AI aesthetics, algorithmic bias, artificial intelligence, aquiline noses, data diversity, facial representation
DOI: https://doi.org/10.34135/communicationtoday.2025.Vol.16.No.1.7