Does your profile picture look gay?: Talking with Stanford Data Scientist Michal Kosinski about his controversial AI


“AI-Laugh” by Dan Buck is licensed under CC BY 2.0

You may have heard of Michal Kosinski and Yilun Wang’s provocative study, which asserts that a facial recognition algorithm can accurately predict sexual orientation. Their paper draws a connection between facial features and intimate traits like sexuality. The work has sparked backlash from LGBTQ+ rights groups like the Human Rights Campaign (HRC) and GLAAD, which called it “‘junk science’ that could be used to out gay people across the globe.”[1] Jim Halloran, GLAAD’s Chief Digital Officer, rejected the study’s validity outright, saying in a statement that “technology cannot identify someone’s sexual orientation.”[1] I sat down with Professor Kosinski in early January to get his side of the story.

Professor Kosinski has long been fascinated by the privacy implications of the digital footprints people leave behind on social media. His past research used data points other than facial characteristics, such as Facebook likes, to predict intimate traits like political leaning; Facebook restricted the public availability of likes almost immediately afterward, a change he attributes to the publication of that research. Sharing personal information on social media is nearly always a privacy trade-off, yet many people (myself included) still choose to post a carefully curated selection of intimate moments for their networks. Kosinski warns that faces put people at particular risk because we cannot control them the way we control other social media content. He is a strong advocate of safeguarding human rights through legislation and education in order to limit the dangers of posting pictures of yourself online.


Stanford researcher and coauthor of the study, Michal Kosinski. Photo courtesy of Lauren Bamford

After completing the research more than a year before publication, Kosinski struggled with whether to release the findings: he, too, was surprised by the accuracy of the results and feared what could happen if the algorithm landed in the wrong hands. In the end, he published the paper in the hope that it would spur new legislation governing how a person’s face can be used without violating their privacy.

Kosinski believes we will inevitably progress to a post-privacy paradigm. A well-motivated third party can likely already find out whatever it wants to know about you, and personal information will only become more accessible. However, Kosinski’s version of a post-privacy world is far more optimistic than the world controlled by Big Brother in George Orwell’s 1984. He believes that visibility increases tolerance, and that complete transparency will actually decrease the potential for wrongdoing.

Though there is much to discuss about the future of privacy, the reality is that no one can be certain of what is to come. We do not live in a post-privacy world just yet, so it is important to consider the implications of technology that pushes in that direction today. Though Kosinski did not develop new technology, he shed light on a new application of AI by combining off-the-shelf tools with publicly available data. Because the classifier is so accurate, he was careful to minimize risk by excluding from the paper anything that might help bad actors, such as oppressive regimes, replicate it to discriminate against gay people. Nevertheless, his paper has sparked interest in the unexplored possibilities of AI. He warns that “there is an urgent need for making policymakers, the general public, and gay communities aware of the risks that they might be facing already.”[4]


Features such as the facial landmarks denoted by the colorful dots in Figure A and the head orientations in Figure B were extracted from pictures and used to develop the deep neural network-based sexual orientation classifier.[4]

Given multiple images of the same person, the deep neural network-based sexual orientation classifier reached up to 91% accuracy for men, and Kosinski is not convinced that this result represents the upper bound of what is possible. While some might argue that similarities among gay men and lesbian women are largely driven by social presentation, such as clothing, hairstyle, or makeup, the classifier used both transient and fixed facial traits, those you can and cannot change, respectively. Physiognomy, the divination of character from facial features, is clearly a pseudoscience, but Kosinski and Wang’s research suggests that, with AI, faces can tell us much more about a person than previously thought.
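To make that pipeline concrete, here is a minimal sketch of the kind of classifier the paper describes: embeddings from a pretrained face-recognition network (the paper used VGG-Face), compressed with singular value decomposition and fed to a logistic regression that outputs a probability per image. The random data below merely stands in for real embeddings, and every name and number here is an illustrative assumption, not the authors’ code.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Stand-in for real face descriptors: a pretrained network such as
# VGG-Face yields one ~4096-dimensional vector per photo.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4096))   # one embedding per image
y = rng.integers(0, 2, size=1000)   # placeholder binary labels

# Compress the embeddings before classification (the paper reports
# reducing them with singular value decomposition).
X_reduced = TruncatedSVD(n_components=500, random_state=0).fit_transform(X)

X_train, X_test, y_train, y_test = train_test_split(
    X_reduced, y, test_size=0.2, random_state=0)

# A linear probe: logistic regression outputs a probability for each
# face rather than a hard yes/no.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
scores = clf.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, scores))  # ~0.5 on random data
```

Accuracy climbs when several photos of the same person are available because the per-image probabilities can be averaged, smoothing out the noise in any single picture.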

Artificial intelligence can predict sexual orientation more accurately than humans because it picks up on subtle, gender-atypical differences in physical features. Kosinski asserts that this finding is consistent with prenatal hormone theory, the leading theory of the biological basis of sexual orientation, which holds that fetal exposure to certain hormones influences sexual orientation as well as physical and mental gender characteristics. In other words, Kosinski and Wang’s research supports the notion that sexual orientation is determined before birth.

This research on the connection between AI and sexual orientation is not perfect. Many have pointed out that all the faces in the study were white, and that the failure to include people of color skews the results and further entrenches a system of racial oppression. Critics have also slammed the study for treating sexual orientation as a binary between gay and straight without including, or even acknowledging, people who identify as bisexual, pansexual, or transgender. I gave Dr. Kosinski the opportunity to respond to the latter criticism by asking whether the AI could predict how gay someone is; in other words, if you are 90% likely to be gay, are you 90% gay? His response: “Treat this AI as a Kinsey scale… It ranges from 0% to 100% and no one actually reaches 0 or 100%.”[3] Like the Kinsey scale, which defines a spectrum from homosexual to heterosexual, the classifier assigns each face a probability of belonging to someone who is queer. The results are then simplified into a binary label, but the underlying scores attempt to reflect the nuance of the human experience.
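To illustrate that last point with a sketch of my own (not the authors’ code): the binary label is nothing more than a cutoff applied to the classifier’s continuous score.

```python
# The classifier emits a continuous score in (0, 1); a binary
# "gay"/"straight" label only appears once we choose a cutoff.
def label(score: float, threshold: float = 0.5) -> str:
    """Collapse a continuous prediction into a binary label."""
    return "gay" if score >= threshold else "straight"

for score in (0.12, 0.48, 0.51, 0.90):
    print(f"score {score:.2f} -> {label(score)}")
```

Moving the threshold simply trades one kind of error for another; the nuance lives in the score and vanishes the moment a cutoff is applied.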

Despite the obvious privacy issues that widespread proliferation of this technology would bring, many people already attempt to guess others’ sexuality; this AI is simply better at it. I would love to live in a world where there were no negative connotations attached to finding out someone’s sexual orientation, and the technology could even be useful for dating. But the reality is that many people still live in fear that their true sexual orientation will be uncovered, and the HRC and GLAAD have made it their mission to protect these individuals and communities. While their intentions are good, I still find it contradictory for these organizations to dismiss his work as inaccurate while simultaneously accusing him of publishing “reckless findings [that] could serve as a weapon.”[1]

Regardless of what you believe about the validity of his findings, one thing is clear: outright dismissal of potentially paradigm-shifting results does nothing to advance the discussion about, or action toward, protecting the queer community. We need a respectful and constructive dialogue about the implications of such research and the community’s concerns.


References:


  1. Anderson, Drew. “GLAAD and HRC Call on Stanford University & Responsible Media to Debunk Dangerous & Flawed Report Claiming to Identify LGBTQ People through Facial Recognition Technology.” GLAAD, 8 Sept. 2017. Web. 5 Feb. 2018.
  2. Hawkins, Derek. “Researchers Use Facial Recognition Tools to Predict Sexual Orientation. LGBT Groups Aren’t Happy.” The Washington Post, 12 Sept. 2017. Web.
  3. Kosinski, Michal. Personal interview. 9 Jan. 2018.
  4. Wang, Yilun, and Michal Kosinski. “Deep Neural Networks Are More Accurate than Humans at Detecting Sexual Orientation from Facial Images.” Open Science Framework, 16 Oct. 2017. Web.