Twitter's image-cropping algorithm, which drew controversy for favoring fair-skinned faces, has been found to harbor further biases. The algorithm estimated which part of a photo a viewer would most want to see first, so the image could be cropped to an appropriate size on Twitter.
Twitter sought to surface further harms in the model by launching the industry's first algorithmic bias bounty contest. The contest winners, announced on Monday, uncovered a slew of additional problems.
Twitter's algorithmic biases
Bogdan Kulynych, who took home the $3,500 first-place prize, showed that the algorithm can amplify real-world biases and social expectations of beauty. Kulynych, a graduate student at EPFL, a technical university in Switzerland, investigated how the algorithm predicts which area of an image people will look at first.
The researcher used a computer-vision model to generate realistic images of people with different physical characteristics, then compared which versions the cropping model preferred.
Kulynych said the model favored faces that look thin and young, with fair or warm skin tones, smooth skin texture, and stereotypically feminine features. These internal biases translate into harms of underrepresentation when the algorithm is deployed, excluding people who do not match its preferences for body weight, age, and skin color. Applied across thousands of images, this bias can sideline minority populations and perpetuate stereotyped beauty standards.
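The method described above can be pictured as a pairwise comparison: generate near-identical faces that differ in a single attribute, score each with the saliency model, and check which one the model would crop toward. The minimal sketch below is illustrative only; `preferred_image` is a hypothetical helper, and the per-pixel saliency scores are assumed to come from an external cropping model (not reimplemented here), supplied as plain lists of floats.

```python
def preferred_image(saliency_scores):
    """Return the index of the image the cropping model would favor.

    saliency_scores: one list of per-pixel saliency values per candidate
    image (assumed to come from an external saliency model). The image
    whose peak saliency is highest is the one a max-saliency crop would
    center on -- a toy stand-in for the contest's pairwise comparisons.
    """
    return max(range(len(saliency_scores)),
               key=lambda i: max(saliency_scores[i]))


# Two synthetic images differing in one attribute: the first has a
# higher peak saliency, so a max-saliency crop would favor it.
variant_a = [0.2, 0.9, 0.4]
variant_b = [0.3, 0.5, 0.1]
print(preferred_image([variant_a, variant_b]))  # prints 0
```

Repeating this comparison over many attribute-controlled pairs is what lets a systematic preference (for example, toward lighter skin) show up as a statistic rather than an anecdote.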
Other entrants in the contest revealed more potential harms. Runner-up HALT AI found that the algorithm sometimes cropped out gray-haired or dark-skinned people, while third-place winner Roya Pakzad showed that the model favored Latin script over Arabic script.
The algorithm also shows a racial preference when analyzing emojis: software engineer Vincenzo di Cicco found that emojis with lighter skin tones were more likely to be kept in the crop.