What’s wrong with a gay facial recognition program

Image: Stanford University

Last September, there was a brief flurry of activity over the idea of a program that can distinguish homosexual from heterosexual faces based on online dating profile photos (see this, for instance, from the BBC). It started shortly before the scientific article by Stanford’s Yilun Wang and Michal Kosinski was posted on the Open Science Framework. The link to the whole paper is here (and thanks, by the way, for making it open access).

I don’t normally offer an opinion on science that isn’t in my field (I’m a bird ecologist, after all), but when it intersects with politics it’s hard to resist, especially when it raises timely questions about the nature of science, about inherent bias in study design, and about deeper concerns over social equality and data privacy. A number of LGBTQ+ groups published statements condemning the work as potentially fueling further segregation and inequality. I didn’t read them, but I was interested in the response of the researchers:

Our findings could be wrong… however, scientific findings can only be debunked by scientific data and replication, not by well-meaning lawyers and communication officers lacking scientific training (See the full response at this link)

Ouch. Personally, I think it was a carefully conducted study that would stand up to replication. My concern is more basic – it’s with the paradigm they’re testing, which could have inherent bias, meaning that repeatability might not actually come into it.

The biggest question I have is around the inherent assumption of this study – and presumably many others – that homosexuality is binary, and that a population of study subjects could come down clearly on one side of the fence or the other. Only then would it be possible to give the computer that sort of baseline information with any accuracy. Those who have waded through Kinsey et al.’s (1948) 800-page tome Sexual Behavior in the Human Male will know that sexuality is far more complex, and that many men can be “a little bit gay” or “a little bit straight” without it affecting their fundamental sexual expression. (Presumably the same is true for women, but I haven’t made it through the companion volume, Sexual Behavior in the Human Female.) The computer program has no idea about these things, because it’s being fed information from a binary paradigm whose design, one could argue, is culturally mediated. The authors do address this point:
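To make that concern concrete, here is a minimal, hypothetical sketch (in Python) of what a binary ground truth does to a graded trait. The Kinsey-style 0–6 scale and the cut-off are my assumptions for illustration; as I read it, the study itself simply worked from the binary labels implied by the dating profiles.

```python
# A minimal, hypothetical sketch of the labelling problem: collapsing a graded
# Kinsey-style rating (0 = exclusively heterosexual, 6 = exclusively homosexual)
# into the binary label a classifier is trained on. The cut-off of 3 is an
# arbitrary assumption for illustration, not anything from the study.

def binary_label(kinsey_score: int, cutoff: int = 3) -> str:
    """Collapse a 0-6 Kinsey-style rating into a binary training label."""
    return "gay" if kinsey_score > cutoff else "straight"

for score in range(7):
    print(f"Kinsey {score} -> {binary_label(score)}")

# Everyone between the poles (scores 1-5) is forced onto one side of the fence
# or the other, so the classifier never sees that the underlying trait is graded.
```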

Another issue pertains to the quality of the ground truth: it is possible that some of the users categorized as heterosexual were, in fact, gay or bisexual (or vice versa). However, we believe that people voluntarily seeking partners on the dating website have little incentive to misrepresent their sexual orientation (page 33).

However, the social complexity here is enormous. A 10-second Google search turned up almost 100,000 hits like this one: “Honey, I’ve got a secret: When gay men come out to their wives”. This means that people advertising themselves on a dating website might have many different reasons for doing so, which may fit very badly into a black-and-white, data-driven paradigm. The question, then, is whether a computer can be better at knowing the sexual identity of somebody who doesn’t necessarily know it for him- or herself, or who may feel differently at different times. If, as seems to be the case, sexual orientation is at least partially biological in nature, then perhaps it can.
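To put a rough number on how much mislabelling could matter, here is a hedged little simulation. The 8% mislabelling rate is an arbitrary assumption of mine, not a figure from the paper; the point is only that noisy profile-derived labels put a ceiling on the accuracy any study can measure, even for a classifier that recovers the hidden truth perfectly.

```python
# A rough, hypothetical simulation of the ground-truth problem: if some fraction
# of profile-derived labels are wrong (e.g. closeted users presenting as
# heterosexual), the *measured* accuracy of any classifier is capped, even one
# that recovers the hidden truth perfectly. The 8% noise rate is an arbitrary
# assumption for illustration, not a figure from the study.

import random

random.seed(42)
n = 100_000
label_noise = 0.08  # assumed fraction of mislabelled profiles

true_orientation = [random.random() < 0.5 for _ in range(n)]    # the hidden truth
observed_label = [t if random.random() > label_noise else not t
                  for t in true_orientation]                    # noisy ground truth

# Suppose a classifier somehow recovered the hidden truth exactly.
measured_accuracy = sum(p == o for p, o in zip(true_orientation, observed_label)) / n
print(f"Measured accuracy of a 'perfect' classifier: {measured_accuracy:.3f}")
# Prints roughly 0.92 - the noisy labels alone set a ceiling on reported accuracy.
```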

…the deep neural network used here was specifically trained to focus on fixed facial features that cannot be easily altered, such as structural facial elements. This reduced the risk of the classifier discovering a superficial and not face-related difference between facial images of gay and straight individuals used in this study. (from their response to GLAAD and HRC)

Fascinating stuff.
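For readers curious what “fixed facial features fed to a classifier” looks like in practice, here is a hedged sketch of that general kind of pipeline: a pre-trained deep network is used only as a frozen feature extractor, and a simple classifier is then fit on the resulting embeddings. The file names, the logistic regression, and the cross-validation settings are my assumptions for illustration, not the authors’ actual code.

```python
# A hedged sketch of the general approach the quote describes: facial images are
# reduced to fixed, structure-focused embeddings by a pre-trained deep network
# (not shown here), and a simple classifier is then fit on those embeddings.
# The file names and model choices below are assumptions for illustration only.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical inputs: one row of pre-extracted facial features per image, plus
# the binary label derived from the dating profile.
X = np.load("face_embeddings.npy")   # shape (n_images, n_features) -- assumed
y = np.load("labels.npy")            # 0 = heterosexual, 1 = gay    -- assumed

clf = LogisticRegression(max_iter=1000)
auc_scores = cross_val_score(clf, X, y, cv=10, scoring="roc_auc")
print(f"Cross-validated AUC: {auc_scores.mean():.2f} +/- {auc_scores.std():.2f}")

# Because the embeddings come from a network focused on facial structure rather
# than grooming, pose, or photo style, the downstream classifier is (in theory)
# less able to latch onto superficial, non-facial cues.
```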

So I’m left asking myself whether I think that the fundamental findings of this study are wrong. Is there potentially sufficient sample bias to confound the data set? There are plenty of examples of identical twins who do not share the same sexual orientation, but who – presumably – have similar facial characteristics. There’s no easy answer to this.

Maybe a more important question is whether we (society – science – whatever) should be addressing this question at all. I would like to think it doesn’t matter. And yet, in this dystopian world of fake news (I’m reluctant even to use that term, at the risk of sounding glib), profiling (ditto the last comment), and almost daily breaches of internet privacy, it might be better to face up to the reality that these things are happening anyway. Worse yet, there is the concern – and perhaps not without cause – that if this technology is available, it would be used to ‘filter out the unfit’. It’s difficult to imagine anything more abhorrent. (Remember Gattaca? In 2011, NASA named it Hollywood’s most scientifically plausible film.)

In the end, if the reality of this potential is brought to the public’s attention by a couple of sympathetic scientists from Northern California, rather than discovered only once it has been put to use, maybe that’s a pretty good alternative.

 
