The Problem of (Gender) Binaries

In their simplest form, computers work in binary. There are ones and there are zeros, and all the rest is context and combination, building more and more complex functions on top of that binary. So it is maybe unsurprising that at .txtLAB, when we are dealing with complex entities like characters in a novel, we want to boil them down into binaries, too: they are easy to analyze. And while I love the statistical and computational elegance of this sort of reductiveness, I worry about its implications.

The gender binary is perhaps the most common of these simplifications we fall into in our questions and models. Do women and men write differently? Are gendered pronouns or aliases positioned and patterned differently in literary texts? How do protagonists who are women function differently within their character networks than protagonists who are men? These are research questions we can ask to understand how women and men produce, and are produced within, cultural objects. But in doing so, this research falls into two potentially harmful traps. First, we buy into the gender binary, and second, we assume all bodies that fall within the category of “woman” or “man” experience that categorization in identical ways.

To the former, gender is not binary. Our models that measure gender do not account for trans/non-binary folks. This is a problem of data and representation. In terms of data, we simply do not have enough bodies in our corpus who fall outside the categories of “woman” and “man” to do rigorous statistical analysis. It is not that we do not want to ask these difficult questions of our own methods; we want to challenge the binary as much as we’ve challenged the patriarchy (see: JustReview.org), but we do not have enough data to do it. It’s a condition of the larger culture that we are trying to analyze. Why are there so few trans/non-binary authors and characters in our data set? The systemic oppression and invisibility of these bodies are part of the issue. We cannot study a facet of a cultural object that does not exist en masse. To challenge the binary computationally, we must first support the elevation of these voices culturally.
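
To see how that limitation gets baked in at the level of code, here is a minimal, hypothetical sketch, not our actual pipeline, of the kind of pronoun-count heuristic such measurements often rely on: every character is forced into one of two labels, and a character referred to with they/them is either misclassified or silently dropped. The function name and pronoun lists are illustrative assumptions.

```python
import re
from collections import Counter

# Hypothetical illustration (not .txtLAB's actual pipeline): a naive
# pronoun-count heuristic that assigns every character exactly one of
# two labels. The binary is hard-coded into the data structure itself.
FEMININE = {"she", "her", "hers", "herself"}
MASCULINE = {"he", "him", "his", "himself"}

def infer_binary_gender(context_windows):
    """Label a character 'woman' or 'man' from pronouns appearing near
    mentions of that character. Characters referred to with they/them,
    or otherwise outside the binary, are misclassified or dropped."""
    counts = Counter()
    for window in context_windows:
        for token in re.findall(r"[a-z']+", window.lower()):
            if token in FEMININE:
                counts["woman"] += 1
            elif token in MASCULINE:
                counts["man"] += 1
    if not counts:
        return None  # the character vanishes from the analysis
    return counts.most_common(1)[0][0]

# A character referred to only with "they" returns None and is lost
print(infer_binary_gender(["they walked into the room and sat down"]))
```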

To the latter question of individual experiences within the binary, we are faced with another set of reductions. When we measure men and compare them to women, we are not taking into account any of the intersecting identities an individual within a category may carry. Factors like class, race, sexuality, and ability are critical controls that nuance the way gender oppression operates. Upper- and middle-class cis white women, for example, hold immense privileges that other women do not. We know this, and we have, in some of our research, worked to analyze how these multiple identities interact (forthcoming research, “Racial Lines”). But more needs to be done to articulate these intersectional identities. We’re trying to find ways to evolve our tools to get at both the issues within the binaries and the issues of the binaries themselves.

In the absence of both representation of non-binary folks in our data set and computational methods able to parse out differences within the binaries we use, should we still do this kind of gender research? It’s a question without an easy answer, because with our current methods we consistently find troubling patterns of men being overrepresented relative to women, patterns that themselves need dismantling, too. So how do we reconcile the benefits of continuing to use the gender binary to measure these biases with the cost of normalizing “men” and “women” as unique and uniform categories?

1,000 Words

Lab member Fedor Karmanov has created a beautiful new project that combines machine vision, machine learning, and poetry. It is called “1,000 Words,” and it takes Van Gogh’s self-portraits and generates poems based on the colours and items in each portrait. Each poem consists of 10 lines randomly drawn from an archive of about 70,000 twentieth-century poems.
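
For readers curious about the mechanics, here is a minimal sketch of the general recipe that description implies, not Fedor Karmanov’s actual code: assume a vision model has already produced tags for the colours and items in a portrait, and that the poem archive has been indexed as a mapping from keyword to lines; the poem is then a random sample of lines matching those tags. All names and the tiny example index below are hypothetical.

```python
import random

# Minimal sketch of the general recipe behind "1,000 Words" (not the
# project's actual code). Assumes a vision model has already tagged the
# portrait with colours/items, and that the ~70,000-poem archive has
# been indexed as a mapping from keyword -> list of poem lines.
def compose_poem(image_tags, line_index, n_lines=10, seed=None):
    """Draw up to n_lines at random from archive lines that match any
    of the tags detected in the portrait."""
    rng = random.Random(seed)
    candidates = []
    for tag in image_tags:
        candidates.extend(line_index.get(tag, []))
    if not candidates:
        return []
    return rng.sample(candidates, min(n_lines, len(candidates)))

# Hypothetical usage with made-up tags and a tiny stand-in index
tags = ["blue", "beard", "hat"]
index = {
    "blue": ["the blue hour settles over the field",
             "a coat of blue against the winter glass"],
    "hat": ["he left his hat upon the chair"],
}
print("\n".join(compose_poem(tags, index, n_lines=3, seed=1)))
```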

As we ask in the introduction to the piece: “Would this process of machines learning to see also help us as human beings see differently, and think about seeing differently?” It’s an example of the more creative use of algorithms, something that is equally important to our lab. While we often focus on the analytical functions of algorithms (identifying things like gender bias in book reviews, prestige bias in academic publishing, or nostalgia bias in prizewinning literature), it is important to think about the ways in which machines change our understanding of language and, in this case, vision. These kinds of projects can tap into the chance encounter with words, but also into our curiosity about how machines focus on an image.

It is all part of a much bigger effort to better understand how we think with machines, rather than have them think for us.

AI across the Generations

I gave a talk today with Paul Yachnin to the McGill Community for Lifelong Learning on “Conscientious AI.” The idea for the event was to give the audience some understanding of how machine learning works and what you might do with it. We then asked the tables to brainstorm ideas about what kinds of AI they would like to see: what would help them with day-to-day tasks as they age?

It was an amazing event, not only to see how engaged they were with the topic but also to see the topics they cared about. Many of the ideas related to meeting up with people, either new people or people from different generations. Some related to facilitating learning in class, especially around hearing, which is one of the biggest impediments to learning: older people have a really hard time hearing each other, and that makes for a less than satisfying educational experience. Finally, people suggested a need for a system that might create course material more appropriate to their interests and needs.

Besides hearing some fascinating ideas, what the event really showed me is how important it is that we engineer with people in mind. Most algorithms are designed to serve powerful interests (corporations, school boards), but we have not yet taken the plunge into user-driven AI. What do different communities need, and how can we help them? We should stop thinking in terms of hockey-stick curves of consumer growth and start thinking more about people.

More important was an issue that came up during the Q&A. Most people are very afraid of AI. They see how it seems to drive things like polarization or unemployment. Why, and how, could it be a force for good? The main point I tried to bring home, the point I always try to bring home, is that AI is a political good that can be used to serve our interests if we treat it as something open and communal. If seniors participate in the design of algorithms meant to serve seniors, and if teachers and students participate in the design of algorithms meant to serve education, then we will have AI that is responsive to human needs rather than humans constantly responding to AI.

Mainly it was just fun to be there with so many curious, conscientious learners.