Outlandish Stanford facial recognition study claims links between facial features and political orientation

An article by controversial Stanford researcher Michal Kosinski, published today in the journal Scientific Reports, claims that facial recognition algorithms can reveal people’s political views from their social media profiles. Using a dataset of over 1 million Facebook and dating-site profiles from users in Canada, the U.S., and the U.K., Kosinski and his colleagues trained an algorithm that correctly classified political orientation in 72% of “liberal-conservative” face pairs.
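
For context, the 72% figure refers to accuracy on pairs rather than individuals: the model is shown one liberal and one conservative face and is counted as correct when it gives the liberal face the higher “liberal” score. The sketch below shows how such a pairwise accuracy can be computed from a classifier’s scores; the data, features, and model are hypothetical stand-ins, not the study’s actual pipeline.

```python
# Hypothetical sketch of pairwise ("liberal vs. conservative") accuracy.
# The descriptors, labels, and classifier are invented stand-ins; the study
# itself used face descriptors extracted from real profile photos.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Fake face descriptors: 2,000 samples x 128 dimensions, binary labels
# (1 = liberal, 0 = conservative), with a faint synthetic signal injected
# so the example is not pure chance.
X = rng.normal(size=(2000, 128))
y = rng.integers(0, 2, size=2000)
X[:, 0] += 0.5 * y

clf = LogisticRegression(max_iter=1000).fit(X[:1500], y[:1500])
scores = clf.predict_proba(X[1500:])[:, 1]   # P(liberal) on held-out faces
labels = y[1500:]

# Pair every held-out liberal with every held-out conservative and count how
# often the liberal face receives the higher "liberal" score.
lib, con = scores[labels == 1], scores[labels == 0]
pairwise_acc = (lib[:, None] > con[None, :]).mean()
print(f"pairwise accuracy: {pairwise_acc:.2f}")  # the study's headline figure was 0.72 on its real data
```

Computed this way, pairwise accuracy is the same quantity as the area under the ROC curve (AUC): a 72% result means a randomly chosen liberal face outranks a randomly chosen conservative face 72% of the time, not that 72% of individuals are labeled correctly.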

The work as a whole evokes the pseudoscientific concept of physiognomy, the idea that a person’s character or personality can be judged from their appearance. The Italian anthropologist Cesare Lombroso published a taxonomy in 1911 in which he stated that “nearly all criminals” have “jug ears, thick hair, thin beards, pronounced sinuses, protruding chins, and broad cheekbones.” Thieves were notable for their “small wandering eyes,” he said, and rapists for their “puffy lips and eyelids,” while murderers had a nose that was “often hawklike and always large.”

Phrenology, a related field, involves measuring bumps on the skull to predict mental traits. Authors representing the Institute of Electrical and Electronics Engineers (IEEE) have said that this type of facial recognition “is necessarily doomed” and that its strong claims stem from poor experimental design.

Princeton professor Alexander Todorov, a critic of Kosinski’s work, also argues that methods like those used in the facial recognition paper are technically flawed. He says the patterns picked up by an algorithm comparing millions of photos may have little to do with facial features themselves. For example, self-posted photos on dating websites carry a number of non-facial cues.

In addition, current psychological research shows that personality in adulthood is shaped largely by the environment. “While it may be possible to predict personality from a photo, in the case of humans it is at best slightly better than chance,” said Daniel Preotiuc-Pietro, a postdoctoral researcher at the University of Pennsylvania who has worked on predicting personality from profile pictures, in a recent interview with Business Insider.

Defending pseudoscience

Kosinski and his co-authors, who preemptively respond to criticism in the paper, try to distance their research from phrenology and physiognomy. But they don’t dismiss them entirely. “Physiognomy was based on unscientific studies, superstition, anecdotal evidence, and racist pseudo-theories. The fact that its claims were unsupported, however, does not automatically mean that they are all false,” they wrote in notes published alongside the paper. “Some of the claims made by physiognomists may have been correct, perhaps by accident.”

According to the co-authors, a number of facial features – but not all – signal political affiliation, including head orientation, emotional expression, age, gender, and ethnicity. While facial hair and glasses predict political affiliation with only “minimal accuracy,” liberals tend to look more directly at the camera and are more likely to express surprise (and less likely to express disgust), they say.

“While we tend to think of facial features as relatively stable, there are many factors that influence them in both the short and long term,” the researchers wrote. “Liberals, for example, tend to smile more intensely and genuinely, which over time leads to the development of different expression lines. Conservatives tend to be healthier, consume less alcohol and tobacco, and have a different diet – which over time is reflected in differences in skin health and in the distribution and amount of facial fat.”

The researchers believe that facial appearance predicts life outcomes such as the length of a prison sentence, occupational success, educational attainment, the chances of winning an election, and income, and that these outcomes in turn likely influence political orientation. But they also suspect that facial appearance and political orientation may be linked through genes, hormones, and prenatal exposure to substances.

“Negative first impressions could reduce a person’s earning potential and status over their lifetime, increasing their support for wealth redistribution and their sensitivity to social injustice and shifting them toward the liberal end of the political spectrum,” the researchers wrote. “Prenatal and postnatal testosterone levels influence the shape of the face and correlate with political orientation. Moreover, prenatal exposure to nicotine and alcohol affects facial morphology and cognitive development (which has been linked to political orientation).”

Stanford facial recognition study

Kosinski and his co-authors declined to release the project’s source code or dataset, citing privacy implications. But this also makes it impossible to vet the work for bias and experimental flaws. Science in general has a reproducibility problem – a 2016 poll of 1,500 scientists found that 70% of them had tried but failed to reproduce at least one other scientist’s experiment – and it is particularly acute in AI. A recent report found that 60% to 70% of the answers given by natural language processing models were embedded somewhere in the benchmark training sets, indicating that the models were often simply memorizing answers.
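
When training data is available, this kind of memorization can at least be probed by checking whether benchmark answers literally appear in the training corpus. The function below is a deliberately crude illustration based on word n-gram overlap; the corpus, answers, and n-gram length are invented for the example, and real contamination audits are far more thorough.

```python
# Crude, hypothetical contamination check: flag benchmark answers whose word
# n-grams also appear in the training corpus. Real audits normalize text more
# aggressively and work over much larger corpora and indices.
from typing import Iterable, Set


def ngrams(text: str, n: int = 8) -> Set[tuple]:
    # Lowercased word n-grams; answers shorter than n words yield an empty set.
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}


def contamination_rate(train_docs: Iterable[str], test_answers: Iterable[str], n: int = 8) -> float:
    # Fraction of test answers sharing at least one n-gram with the training docs.
    train_grams: Set[tuple] = set()
    for doc in train_docs:
        train_grams |= ngrams(doc, n)
    answers = list(test_answers)
    flagged = sum(1 for answer in answers if ngrams(answer, n) & train_grams)
    return flagged / max(len(answers), 1)


# Toy usage with made-up strings:
train = ["the quick brown fox jumps over the lazy dog near the river bank today"]
tests = ["quick brown fox jumps over the lazy dog near the river", "an unrelated answer"]
print(contamination_rate(train, tests, n=8))  # 0.5: one of the two answers overlaps
```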

Numerous studies – including the landmark Gender Shades work by Joy Buolamwini, Dr. Timnit Gebru, Dr. Helen Raynham, and Deborah Raji – as well as VentureBeat’s own analyses of public benchmark data have shown that facial recognition algorithms are susceptible to various biases. One common confounder is technologies and techniques that favor lighter skin, from sepia-tinted film to low-contrast digital cameras. These biases can be encoded into algorithms in such a way that their performance on darker-skinned people lags behind their performance on people with lighter skin.
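
Disparities of this kind are usually surfaced by disaggregating a model’s error rate by subgroup rather than reporting a single aggregate number, which is the approach audits like Gender Shades take. The snippet below is a minimal sketch of that idea using invented predictions and group labels, not any real benchmark.

```python
# Hypothetical sketch: compare error rates across subgroups, in the spirit of
# disaggregated evaluations such as Gender Shades. All data here is invented.
from collections import defaultdict

# (predicted_label, true_label, subgroup) triples from a toy classifier.
results = [
    (1, 1, "lighter"), (0, 0, "lighter"), (1, 1, "lighter"), (1, 0, "lighter"),
    (0, 1, "darker"),  (1, 0, "darker"),  (0, 0, "darker"),  (0, 1, "darker"),
]

errors, totals = defaultdict(int), defaultdict(int)
for pred, true, group in results:
    totals[group] += 1
    errors[group] += int(pred != true)

for group in sorted(totals):
    rate = errors[group] / totals[group]
    print(f"{group:>7}: error rate {rate:.2f} on {totals[group]} samples")

# A large gap between the per-group rates is the disparity such audits report;
# a single accuracy number averaged over all samples would hide it.
```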

Bias is pervasive in machine learning algorithms well beyond facial recognition systems. An investigation by ProPublica found that crime-prediction software is biased against Black people. Another study found that women are shown fewer online ads for high-paying jobs. An AI beauty contest was biased in favor of white contestants. And an algorithm Twitter used to decide how photos are cropped in people’s timelines automatically chose to display the faces of white people over people with darker skin.

Ethically questionable

Kosinski, whose work analyzing the relationship between personality traits and Facebook activity inspired the founding of the political consultancy Cambridge Analytica, is no stranger to controversy. In a paper published in 2017, he and Stanford computer scientist Yilun Wang reported that a commercially available AI system was able to distinguish between photos of gay and straight people with a high degree of accuracy. Advocacy groups such as the Gay and Lesbian Alliance Against Defamation (GLAAD) and the Human Rights Campaign said the study “threatens the safety and privacy of LGBTQ and non-LGBTQ people alike” and noted that it drew on the controversial prenatal hormone theory of sexual orientation, which predicts the existence of links between facial appearance and sexual orientation determined by early hormone exposure.

Todorov believes Kosinski’s research is “incredibly ethically questionable,” as it could lend credibility to governments and companies that might want to use such technologies. He and scholars like cognitive scientist Abeba Birhane argue that those who create AI models must take social, political, and historical contexts into account. In her paper “Algorithmic Injustices: Towards a Relational Ethics,” which won the Best Paper Award at NeurIPS 2019, Birhane wrote that concerns about algorithmic decision-making and algorithmic injustice require a fundamental rethinking that goes beyond technical solutions.

In a 2018 interview with Vox, Kosinski said his overall goal is to understand people, social processes, and behavior through the lens of “digital footprints.” Industries and governments are already using facial recognition algorithms similar to the ones he has developed, he argues, which underscores the need to warn stakeholders about the erosion of privacy.

“The widespread use of facial recognition technology poses dramatic risks to privacy and civil liberties,” Kosinski and his co-authors wrote in the latest study. “While many other digital footprints also reveal political orientation and other intimate traits, facial recognition can be used without the subject’s consent or knowledge. Facial images can be easily (and covertly) captured by law enforcement agencies or obtained from digital or traditional archives such as social networks, dating platforms, photo-sharing websites, and government databases. They are often easily accessible; anyone can access Facebook and LinkedIn profile pictures, for example, without a person’s consent or knowledge. Thus, the privacy threats posed by facial recognition technology are in many ways unprecedented.”

Indeed, companies like Faception claim to be able to detect terrorists, pedophiles, and more using facial recognition. And the Chinese government has deployed facial recognition to identify photos of hundreds of criminal suspects, allegedly with over 90% accuracy.

Experts like Os Keyes, a Ph.D. candidate and AI researcher at the University of Washington, agree that it is important to draw attention to the misuses of and flaws in facial recognition. But Keyes argues that studies like Kosinski’s advance what is fundamentally junk science. “They draw on a lot of (frankly creepy) studies in evolutionary biology and sexology that treat queerness [for example] as originating in ‘too much’ or ‘not enough’ testosterone in the womb,” they told VentureBeat in an email. “Depending on and endorsing them in a study … is utterly baffling.”
