z, p, t, d, and counting

I made a list the other day of all of the letters, names, and new terms I have had to learn to undertake the computational study of literature and culture. It was very long. It made me realize that when researchers speak of the “bilingualism” of interdisciplinary work, we should take this idea very literally. I feel like I’m learning German all over again. It started as a novelty (Ich is so funny sounding!), then a frustration (I have no idea what you’re saying), and then magically you could do something with it (ich hätte gern ein Bier, “I’d like a beer”). And then you waited, and waited, and waited until you stopped noticing you were thinking in this other thing.


Rethinking the Table of Contents

I wanted to share an experiment that I worked on with Mark Algee-Hewitt to reconstruct the table of contents of our new collaboratively authored book, Interacting with Print. The book was written by 22 co-authors around the theme of interactivity. Mark and I thought it would be great to do a digital intervention into the print convention of the TOC.

Below you can see two network graphs of relationships between chapters. The first is a network of links between the “renvois” inside of chapters: we used a system of cross-references, as in the French Encyclopédie, to point to other chapters within the book that dealt with related themes (e.g., [Paper]). The second shows relationships derived from topic-modelling each of the chapters and drawing connections based on the presence of shared topics. The first represents authors’ explicit beliefs about which chapters are most related, while the second represents latent connections derived through shared language. In each case, we move past the linear table towards the more reticular network.
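The latent-topic network can be sketched in a few lines. The following is a minimal illustration, not our actual pipeline: the chapter texts, the number of topics, and the threshold are all toy values chosen for demonstration. Each chapter gets a topic distribution from a small LDA model, and two chapters are linked whenever a topic is sufficiently prominent in both.

```python
import numpy as np
import networkx as nx
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy stand-ins for the chapter texts (hypothetical data).
chapters = {
    "Binding": "sewing boards leather spine covers thread boards sewing",
    "Thickening": "pages added leaves inserted pages extra leaves paper",
    "Paper": "paper sheets pulp fibre watermark paper sheets",
}

# Turn each chapter into a word-count vector.
vec = CountVectorizer()
X = vec.fit_transform(chapters.values())

# Fit a small topic model; n_components is arbitrary here.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)  # rows: chapters, cols: topic weights

# Link two chapters when some topic is "present" (above a
# threshold) in both of their topic distributions.
threshold = 0.3
names = list(chapters)
G = nx.Graph()
G.add_nodes_from(names)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        shared = np.minimum(doc_topics[i], doc_topics[j]) > threshold
        if shared.any():
            G.add_edge(names[i], names[j])

print(G.number_of_nodes(), "chapters,", G.number_of_edges(), "links")
```

From a graph like this one, centrality measures can then identify the most connected chapters, as in the networks above.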

These networks tell us different things about the relationships within our book. The renvoi network shows that Binding, a chapter about constructing books, is the most centrally connected, followed by Thickening, which is about adding pages to books. One can see how visual chapters like Frontispieces, Engraving, and Stages mark out one pole while non-print spaces mark out another. You can also follow the directionality and move from Letters to Manuscripts, or from Advertising to Catalogs, or from Spacing to Disruption to Ephemerality in a suggestive causal sequence.

Where the first network privileges gerunds, the network of latent topics is more centrally organized around qualities like Paper and Ephemerality. That these are the two most linguistically central chapters suggests an interesting medial centre point of our history (paper), as well as a temporal framework (ephemerality) that has been far less central to print studies in the past. Print has most often been associated with notions of permanence and reproducibility. Focusing on interacting with print seems to move that focus more towards the fleeting and contingent aspects of print media.

This is obviously just a beginning in experimenting with ways that computational methods can interact with print conventions and change the way we organize and structure information. Surprisingly, we still remain in a very print-centric universe when it comes to sharing and archiving information. We hope experiments like this one will nudge us towards trying out more alternatives.



1000 Words

Lab member Fedor Karmanov has created a beautiful new project that combines machine vision, machine learning, and poetry. It is called “1,000 Words,” and takes the self-portraits of Van Gogh and generates poems based on the colours and items in each portrait. The poems consist of 10 lines randomly drawn from an archive of about 70,000 twentieth-century poems.
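The generative step, as described, can be sketched simply. This is a toy illustration, not Karmanov’s actual code: the line archive and the detected colour labels are hypothetical stand-ins. Lines mentioning a colour the vision model reports for the portrait are filtered out, and ten of them are sampled at random.

```python
import random

# Hypothetical stand-in for the ~70,000-poem line archive.
archive = [f"a {w} line of verse, number {i}"
           for i, w in enumerate(["blue", "green", "red", "grey"] * 30)]

# Colours a machine-vision model might report for a self-portrait
# (hypothetical labels).
detected_colours = {"blue", "green"}

# Keep only lines that mention a detected colour, then sample
# ten distinct lines to form the poem.
candidates = [line for line in archive
              if any(c in line for c in detected_colours)]
random.seed(0)  # fixed seed for a reproducible demonstration
poem = random.sample(candidates, 10)
print("\n".join(poem))
```

The chance juxtapositions come from the sampling step; the machine-vision step supplies the constraint that ties the lines back to the image.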

As we ask in the introduction to the piece: “Would this process of machines learning to see also help us as human beings see differently, and think about seeing differently?” It’s an example of the more creative use of algorithms, something that is equally important to our lab. While we often focus on the analytical functions of algorithms (identifying things like gender bias in book reviews, prestige bias in academic publishing, or nostalgia bias in prizewinning literature), it is important to think about the ways in which machines change our understanding of language, and, in this case, vision. These kinds of projects can tap into the chance encounter with words, but also the curiosity of how machines focus on an image.

It is all part of a much bigger effort to better understand how we think with machines, rather than have them think for us.

The Danger of the Single Story – Why Quantity Matters

I listened to a beautiful podcast the other day by Chimamanda Ngozi Adichie on “the danger of the single story.” Her point was that when we only tell one kind of story about a person or a place we cheapen our understanding. She began with her experience as an African writer, one who all too often only hears one kind of story about a whole continent. As she remarked, it’s not that stereotypes aren’t true, it’s that they make one story the only story. Having more than one story gives us a richer understanding of the world.

How does this connect to the lab? Well, our aim is to use quantity to better understand literature and creativity. Adichie’s point is that when we focus on single things we get locked into single versions of them. We need quantity to help envision and imagine alternatives. Often when we use quantity we do so to reduce diversity into a single summary-like assessment: fan fiction tends to look this way, or the nineteenth-century novel behaves that way. I’m hoping that as we move forward we can begin to locate the diversity within quantity, the different kinds of stories that are available to us within the large quantities of stories that we have been telling ourselves for centuries. The goal is to make our generalizations more flexible, while still being based on something more than our personal beliefs or single pieces of evidence.

AI across the Generations

I gave a talk today with Paul Yachnin to the McGill Community for Lifelong Learning on “Conscientious AI.” The idea for the event was to give the audience some understanding of how machine learning works and what you might do with it. We then asked the tables to brainstorm ideas about what kinds of AI they would like to see — what would help them with day-to-day tasks as they age?

It was an amazing event, not only to see how into the topic they were but also to see the topics they cared about: many of the ideas related to meeting up with people, either new people or those from different generations. Some were about facilitating learning in class, especially around hearing. That’s one of the biggest impediments to learning — older people have a really hard time hearing each other, and that makes for a less than satisfying educational experience. Finally, people suggested a need to develop a system that might create more appropriate course material for their interests and needs.

Besides hearing some fascinating ideas, what it really showed me is how important it is that we engineer with people in mind. Most algorithms are designed to serve powerful interests (corporations, school boards), but we have not yet taken the plunge into user-driven AI. What do different communities need and how can we help them? We need to stop thinking in terms of hockey-stick curves of consumer growth and start thinking more about people.

More important was an issue that came up during Q&A. Most people are very afraid of AI. They see how it seems to drive things like polarization or unemployment. Why, or how, could it be a force for good? The main point I tried to bring home — the point I always try to bring home — is that AI is a political good that can be used to serve our interests if we treat it as something open and communal. If seniors participate in algorithms designed to serve seniors, and if teachers and students participate in algorithms designed to serve education, then we will have AI that is responsive to human needs rather than humans constantly responding to AI.

Mainly it was just fun to be there with so many curious, conscientious learners.

Think Small: On Literary Modelling

This is the name of a new piece I have out in PMLA in a section called “Franco Moretti’s Distant Reading.” The first point I try to make is that calling it “Moretti’s Distant Reading” is indicative of literary studies’ continued penchant for great men. It is ironic, or telling, that even in an issue devoted to methods that aim to use larger samples of evidence we return to the male proper name.

The second point I try to make is to nudge us away from thinking in terms of distance or bigness and towards “representation.” As philosophers of science have pointed out, models are crucially representations of the worlds they intend to make claims about. Representation emphasizes the perspectival nature of knowledge, its partialness, but also the creativity behind the endeavour. Computational criticism isn’t cold and impersonal. It can be very much about personal conviction and inventiveness. I encourage people to read Richard So’s contribution that addresses many of these same issues.

Lastly, I want people to better understand that modelling isn’t entirely about measurement. There are many steps to creating a model that have nothing to do with numbers. There is much more conceptual richness to the process of modelling that thinking only in terms of numbers leaves out.

Beginning to acknowledge that the field is broad, creative, and conceptually demanding will hopefully encourage its wider acceptance within the umbrella of literary studies. As I write in the piece, “Instead of book love, I hope we can also begin to find model enjoyment.”

Why are non-data-driven representations of data-driven research in the humanities so bad?

One of the more frustrating aspects of working in data-driven research today is the representation of such research by people who do not use data. Why? Because it is not subject to the same rules of evidence. If you don’t like data, it turns out you can say whatever you want about people who do use data.

Take for example this sentence, from a recent special issue in Genre:

At the heart of much data-driven literary criticism lies the hope that the voice of data, speaking through its visualizations, can break the hermeneutic circle.

Where is the evidence for this claim? If you’re wondering who has been cited so far in the piece, you can guess: it’s Moretti. That’s it. Does it matter that others have made the exact opposite claim? For example, in this piece:

In particular, I want us to see the necessary integration of qualitative and quantitative reasoning, which, as I will try to show, has a fundamentally circular and therefore hermeneutic nature.

But does a single piece of counter-evidence really matter? Wouldn’t the responsible thing be to try to account for some summary judgment of all “data-driven literary criticism” and its relationship to interpretive practices?

To be concerned about the hegemony of data and data science today is absolutely reasonable and warranted. Data-driven research has a powerful multiplier effect in its ability to be covered by the press and circulate as social certainty. Projects like “Calling Bullshit” by Carl Bergstrom and Jevin West are all the more urgent for this reason.

But there is another dimension of calling bullshit that we shouldn’t overlook. It’s when people invent statements to confirm their prior belief systems. To suggest that data is omnipotent in its ability to shape public opinion misses one of the great tragedies of facticity of our time: climate collapse (a phrase I prefer to climate “change,” which is too wishy-washy a word for where we’re headed — “change is good!”).

In other words, calling bullshit is a multidimensional problem. It’s not just about data certainty. It’s also about certainty in the absence of data. It’s about rhetorical tactics that are used to represent phenomena without adequate evidence, something that happens all too often in the humanities these days when it comes to understanding things as disparate as the novel or our own discipline.

As authors, journal editors, peer-reviewers, researchers and teachers we need to wake up to this problem and stop allowing it to pass with a mild nod of the head. We need to start asking that hard question: Where’s your evidence for that?