Young, Beautiful, White? Swiss Student Exposes Twitter Algorithm Bias
A student at Switzerland's EPFL has won Twitter's algorithmic bias bounty program, proving that the algorithm has certain preferences when cropping images.
A Swiss university student has won Twitter's first algorithmic bias bounty competition, revealing that the image cropping algorithm is flawed and biased toward young, ‘beautiful,’ and light-skinned faces.
Bogdan Kulynych, a graduate student at Switzerland's EPFL, demonstrated the bias in Twitter's saliency cropping algorithm, which crops images for previews by focusing on the parts of the picture it predicts are most interesting.
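Twitter's actual model is a neural network trained to predict saliency, but the general mechanics of saliency-based cropping can be illustrated with a short, hypothetical sketch: given a saliency map for an image, pick a fixed-size preview window centered on the most salient point. The function name and the stand-in saliency map below are illustrative assumptions, not Twitter's implementation.

```python
import numpy as np

def crop_around_max_saliency(image: np.ndarray, saliency: np.ndarray,
                             crop_h: int, crop_w: int) -> np.ndarray:
    """Crop a fixed-size preview window centered on the most salient pixel."""
    # Locate the single most "interesting" pixel according to the saliency map.
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    # Clamp the window so it stays fully inside the image bounds.
    top = min(max(y - crop_h // 2, 0), image.shape[0] - crop_h)
    left = min(max(x - crop_w // 2, 0), image.shape[1] - crop_w)
    return image[top:top + crop_h, left:left + crop_w]

# Toy example: a random "photo" and a saliency map with a hot spot at (300, 620).
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(480, 960, 3), dtype=np.uint8)
saliency = np.zeros((480, 960))
saliency[300, 620] = 1.0
preview = crop_around_max_saliency(image, saliency, crop_h=240, crop_w=480)
print(preview.shape)  # (240, 480, 3)
```

Because the preview is always anchored to whatever the model scores highest, any bias in those scores decides which face makes it into the crop.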
In 2020, Twitter came under fire for its image cropping algorithm when users discovered that it was favoring white faces over black ones in image previews, as in this tweet:
Trying a horrible experiment...

Which will the Twitter algorithm pick: Mitch McConnell or Barack Obama? pic.twitter.com/bR1GRyCkia

— Tony “Abolish ICE” Arcieri 🦀 (@bascule) September 19, 2020
The algorithm even preferred white dogs over black ones, regularly cropping previews to show the white dog even when both appeared in the same image.
I tried it with dogs. Let's see. pic.twitter.com/xktmrNPtid
— 🅜🅐🅡🅚 (@MarkEMarkAU) September 20, 2020
The wave of criticism prompted the tech giant to apologize, saying its team had tested the model extensively without finding evidence of racial or gender bias, and later to launch a bias bounty competition.
"Our team did test for bias before shipping the model and did not find evidence of racial or gender bias in our testing. But it’s clear from these examples that we’ve got more analysis to do. We’ll continue to share what we learn, what actions we take, and will open-source our analysis so others can review and replicate," said Twitter then.
The competition concluded with Bogdan Kulynych taking the $3,500 top prize for proving Twitter's algorithmic bias. He did so by artificially generating slightly varied versions of faces, altering attributes such as age, facial slimness, skin complexion, and stereotypically feminine facial traits, and observing how the model's preferences shifted.
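In broad strokes, that kind of audit amounts to scoring attribute-edited variants of a face with the cropping model and checking which edits raise its score. The sketch below is a hypothetical illustration of the idea, not Kulynych's code: the saliency_score placeholder stands in for the real model Twitter released to bounty participants, and the "edits" are crude pixel tweaks rather than generated faces.

```python
import numpy as np

# Hypothetical stand-in for the saliency model; it only needs to map an
# image to a single "interestingness" score for this illustration.
def saliency_score(image: np.ndarray) -> float:
    return float(image.astype(np.float32).mean() / 255.0)  # placeholder

def rank_attribute_edits(original: np.ndarray,
                         edits: dict[str, np.ndarray]) -> list[tuple[str, float]]:
    """Score each edited variant against the original face and sort edits
    by how much they raise the model's saliency score."""
    base = saliency_score(original)
    deltas = {name: saliency_score(img) - base for name, img in edits.items()}
    return sorted(deltas.items(), key=lambda kv: kv[1], reverse=True)

# Toy usage: uniform brightening/darkening stands in for attribute edits.
rng = np.random.default_rng(1)
face = rng.integers(0, 256, size=(224, 224, 3), dtype=np.uint8)
variants = {
    "lighter_skin": np.clip(face.astype(np.int16) + 30, 0, 255).astype(np.uint8),
    "darker_skin": np.clip(face.astype(np.int16) - 30, 0, 255).astype(np.uint8),
}
for name, delta in rank_attribute_edits(face, variants):
    print(f"{name}: {delta:+.3f}")
```

Edits that consistently push the score upward point to the traits the model prefers, which is how the bias toward young, slim, light-skinned faces was quantified.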
Twitter has explained that the algorithm was "trained on human eye-tracking data," which may itself carry the biases the contest surfaced. Now that the reasons behind the ‘bias’ have been demonstrated, the social media giant should be better placed to take the appropriate measures and fix the problem.