Gender Trouble: Literary Studies’ He/She Problem

Pronouns have become a hot topic of late, and I thought it would be interesting to explore their use in the new JSTOR data set I have been working with, which represents 60 years of literary studies articles.

Previous work has shown how men and women use personal pronouns at differing rates (you can guess how). I wanted to see whether over the past 60 years an assumed bias towards masculine pronouns in the field might have subsided with the rise of gender studies and the entry of more women into the profession.

Unfortunately not.
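The underlying measurement is simple: count masculine and feminine personal pronouns per article and normalize by article length. Here is a minimal Python sketch of one way to do it (the word lists and per-1,000-token normalization are my assumptions, not necessarily those used in the original analysis):

```python
import re
from collections import Counter

# Hypothetical word lists; the original analysis may have used a different set.
MASCULINE = {"he", "him", "his", "himself"}
FEMININE = {"she", "her", "hers", "herself"}

def pronoun_rates(text):
    """Return (masculine, feminine) pronoun counts per 1,000 tokens."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    n = len(tokens)
    masc = sum(counts[w] for w in MASCULINE)
    fem = sum(counts[w] for w in FEMININE)
    return 1000 * masc / n, 1000 * fem / n
```

Applied per article and grouped by publication year, rates like these are what let you track whether the he/she gap narrows over the 60-year window.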


Topic Stability, Part 2

In my previous post I tried to illustrate how different runs of the same topic modelling process can produce topics that appear to be slightly semantically different from one another. If you keep k and all other parameters constant, but change your initial seed, you’ll see the kind of variation that I showed.

The question that I want to address here is whether we can put a number to that variation, so that we can understand which topics are subject to more semantic variability than others.

I’ve gone ahead and written a script in R that calculates, for each topic, the average distance between it and its closest match in each of the other runs. You can download it on GitHub.
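The script itself is in R, but the idea is compact enough to sketch in Python. I use Jensen-Shannon divergence between topic-word distributions as the distance here; that choice is mine, and the R script may use a different measure:

```python
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence between two topic-word distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0  # 0 * log(0) terms contribute nothing
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def topic_stability(runs):
    """For each topic in the first run, average its distance to the
    closest topic in every other run.  runs: list of (k x vocab) arrays,
    rows summing to 1.  Lower scores = more stable across runs."""
    reference, others = runs[0], runs[1:]
    scores = []
    for topic in reference:
        dists = [min(js_divergence(topic, t) for t in other)
                 for other in others]
        scores.append(sum(dists) / len(dists))
    return scores
```

A topic that reappears nearly unchanged in every run scores close to zero; a topic whose nearest match drifts from run to run scores higher, which is one way to put a number to the semantic variability described above.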


The Replication Crisis I: Restoring confidence in research through replication clusters

Much has been written about the so-called “replication crisis” going on across the sciences today. These issues bear on literary and cultural studies too, though not always in straightforward ways. “Replication” fits awkwardly with more interpretive disciplines, and its implications warrant careful thought. In the next few weeks I’ll be writing some posts on this to try to generate a conversation around the place of replication in the humanities.


Gender and Equity in Publishing

The Just Review team held an inspiring event last night. It was a roundtable of six women discussing their experiences with academic and literary publishing. It was an amazing conversation covering many different perspectives. We had two academics, one editor, one publisher, a novelist and a poet. Here are some of the themes they touched on.

Cultivating Confidence

Putting oneself forward was a theme that kept recurring. Whether it was the confidence to send off a manuscript, speak up at a literary festival, or reach out to a mentor, many of the panelists discussed how they consistently had to work against their own inner inhibitions. Based on their individual success, you would never guess this was something they wrestled with. But they strongly emphasized cultivating the confidence, at as early an age as possible, to take risks, speak out, and put oneself forward.

Prioritizing Carework and Generosity

Another key theme was avoiding the myth of scarcity, by which they meant treating gender equity and professional opportunity as a zero-sum game. Instead, they encouraged all of us to think about how to cultivate the work of others and how, in the words of one participant, “to take up less space.” This might seem to contradict the first point about putting oneself out there, but it offers another way to think of literary work: not only find your place, but do the work to make it possible for others, especially others who may have less privilege than you, to find theirs. Generosity and empathy were two states of mind that were strongly emphasized.

Creating Parastructures

Finally, a core theme that kept emerging was the importance of creating peer networks and “collectives.” Inevitably, as a woman, you will be subject to some kind of bias or discrimination in your career. These extra-institutional structures can be an important way of finding more rewarding spaces in which to work and create, and more open feedback loops to help improve your work. Creating these networks takes time. But the participants emphasized just how valuable such spaces have been in their lives and careers, whether independent presses, writing groups, or women-led gaming communities.

Much more was discussed over the hour-and-a-half event than I can cover here. But I think it was a really crucial conversation to have, and one that I hope inspired the many students who were present. I know I learned an incredible amount.

The Legibility Project: Reversing the dark economy of academic labor

Here is an example of the kind of registry I am thinking of, using my own activity as a starting point.

Ongoing duties include: Undergraduate Advisor, European Studies Minor; Editor, Cultural Analytics; Board Member, Centre for Social and Cultural Data Science.

Table of Review Commitments since September 2017.
| Activity | Request Date | Due Date | Accepted/Denied | Name | Institution |
|---|---|---|---|---|---|
| Grant Proposal | 12/28/2017 | | Denied | ###### | ###### |
| Faculty Recommendation | 12/19/2017 | 01/10/2018 | Accepted | ###### | ###### |
| Book MS | 12/13/2017 | | Denied | ###### | ###### |
| Grant Proposal | 12/13/2017 | 01/30/2018 | Accepted | ###### | ###### |
| Faculty Recommendation | 12/07/2017 | 12/18/2018 | Accepted | ###### | ###### |
| Book MS | 11/27/2017 | | Denied | ###### | ###### |
| Book MS | 10/09/2017 | 01/20/2018 | Accepted | ###### | ###### |
| Grant Committee | 09/22/2017 | 11/27/2017 | Accepted | ###### | ###### |
| Faculty Recommendation | 09/01/2017 | 10/15/2017 | Accepted | ###### | ###### |
| University Committee | 01/01/2017 | 01/01/2018 | Accepted | ###### | ###### |

Over the years I have become aware that a significant portion of my time is spent on tasks for which I am not directly paid, either in the form of money or public credit, and about which no one outside of my chair or dean is aware. I am talking about work known as “peer review.” Typically we associate this term with the reviewing of scientific articles. However, the scope of “peer review” is considerably larger than that understanding implies.

Peer review can encompass:

  • Scholarly articles, the most familiar category.
  • Entire book manuscripts, especially in the humanities. If the average scholarly article is between five and seven thousand words, then the average academic book is anywhere between ten and twenty-four times as long (and correspondingly time-consuming to review). Sometimes I am asked to review a book proposal instead, which can be considerably shorter, between 20 and 75 pages.
  • Promotion dossiers, whether for tenure or for promotion to full professor. These cover publications produced over the course of a career. If someone has published several books and dozens of articles, then the time commitment is potentially 100x that of reviewing a single academic article.
  • Faculty recommendation letters. These entail knowledge of a candidate’s entire scholarly output, which in some cases may exceed what a promotion dossier covers if the candidate has already been a full professor for a while.
  • Grant or prize committees. A book prize committee can mean reading upwards of 100 scholarly monographs (i.e. the equivalent of 2,400 academic articles), while a grant committee can mean reading anything from a single proposal (about as long as an article and just as dense) to adjudicating 20-30 proposals at a time.

I should note that I have not included writing letters of recommendation for undergraduate and graduate students because I consider those to be part of my teaching and supervision, for which I am directly paid. Nor have I included my own research writing, again because my assumption is that I am directly paid for this work.

These tasks will be familiar to anyone in the profession. They are almost entirely unknown to those outside of it.

Some might say, ah, come on, that’s part of your job, too. You may not have known about it when you started out, but in addition to teaching, advising, mentoring, researching, writing, and sitting on committees, it was implied that you would also be doing a lot of reviewing of other people’s work. After all, how do you think your articles and books get published? Someone’s got to do it.

Absolutely. But the bigger issue for me is that these activities are almost all recorded confidentially, which means that no one knows you are doing them except the chair or dean to whom you might report your yearly activities, or the individual parties that made the request. That is also why you never anticipate this work: you don’t see your advisors doing it when you’re training in grad school. It just suddenly appears, and keeps on appearing. I am not opposed to the work. I am opposed to the way we hide it.

Why does this lack of transparency matter?

I think for two reasons. First, it means there is all this work going on — work which has serious consequences in the lives of real people — which is totally inscrutable. How many books or articles were or were not published last year because I, or someone else, reviewed them for a press or journal editor? What kind of biases do I bring to my judgments and do we have any way of assessing that? What individuals, and now more importantly, what social networks are making things happen in the field?

All of these questions are currently unanswerable because of this dark economy of labor. While we have a tremendous amount of freedom in the classroom, I still have to make course proposals, still have to get approvals for new classes, still have to have my performance evaluated by my students, etc. There is an important degree of accountability for what and how I teach. That is totally missing from peer review.

The second reason this matters is purely practical. I am totally exhausted by these requests. If I said yes to every one I would have no time for anything else. Adjudicating everything people have asked me to read could literally be a full-time job. Period. No teaching, no research, no writing (other than “reports”), no recommendation letters for students, no advocacy on campus for things I believe in, no advising duties. Just “peer review.”

So inevitably I say no, and then yes, and then no, and maybe a few more no’s with some guilt-ridden yes’s dropped in for good measure. I try to create some rationale, but really it’s random. That’s not a good way to make decisions, it’s not a good way for me to apportion my work time, and it’s not a good way for the field to be relying on people.

I also don’t think I’m special. My working assumption is that many, many in the field experience the same thing. I hear this anecdotally all the time when it becomes my turn to ask people to review something for the journal I edit. But it’s hard to know because everything is so invisible. And as the tenure labor market continues to shrink, the problem will only worsen, as fewer people are called upon to do more and more things.

So here is what I suggest: We need a peer-review registry.

We need a place where this work is recorded and made visible. But it’s confidential, you’ll say! That’s a fair concern. But we can create a registry that contains minimal information for public consumption, alongside more detailed confidential information for auditing purposes. For example, I can list that I am doing a “promotion review” right now. You don’t need to know whom I’m doing it for. But it is important for people to be aware of who is doing this kind of work. Who are the gatekeepers? I guarantee you will start to see biases and unintended networks appear. It will also help me in my decision-making to be able to say to a requestor: look, I’m doing six different reviews right now, I really can’t say yes. One of the main reasons we say yes is to maintain social bonds. “No” communicates a lot of ill will. “No, and I have a very good reason” is very different. Right now, it’s hard to know if someone is just dodging work or is legitimately swamped.

But we also need a confidential section for auditing purposes. If all an academic has to do to look busy is check a box, s/he will. We need some way of validating that the public-facing representation is accurate, and some way of delving further into the data. The point wouldn’t be to disclose embarrassing information (did you know that Prof. X was the reviewer for these 20 articles!) but to work with stakeholders to help them understand where problems might lie: “We’re seeing a strong network effect around this group of people; Editor Y, you might consider expanding your pool a bit.” Or: “Grant Agency Z, you have traditionally relied on reviewers with these gender, institutional, ethnic, or disciplinary backgrounds. You might want to take steps to address that.”
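To make the public/confidential split concrete, here is a sketch of what a single registry record might look like. All field names are hypothetical and illustrative, not a proposed standard:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Hypothetical schema: one record per review commitment, mirroring the
# columns of the table above (activity, request date, due date, outcome).
@dataclass
class ReviewEntry:
    activity: str                  # e.g. "Book MS", "Grant Proposal"
    request_date: date
    accepted: bool
    due_date: Optional[date] = None
    # Confidential fields, visible only to auditors:
    requester_name: Optional[str] = None
    requester_institution: Optional[str] = None

    def public_view(self) -> dict:
        """The minimal information disclosed in the public registry."""
        return {
            "activity": self.activity,
            "request_date": self.request_date.isoformat(),
            "accepted": self.accepted,
        }
```

The public view is enough to answer “how many reviews is this person carrying right now?” without disclosing who asked; the confidential fields stay available for the kind of auditing described above.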

On Prestige Bias in the Chronicle of Higher Ed

The Chronicle of Higher Education ran a version of our essay on the concentration of institutional prestige as its cover story this week. In it we expand our reflections about how to change the current system. The essay is based on our original piece that appeared in Critical Inquiry. Here is an excerpt from the new essay:

The current system of double-blind peer review that underlies most academic publications is essentially an invention of the second half of the 20th century. Its failings have been well documented and numerous projects in the sciences as well as the humanities are now underway to change it. Almost all of these fixes, however, continue to rely on two basic principles: First, that communities of scholars still make intuitive judgments about quality (judgments which are rarely, if ever, made explicit); and second, that they largely rely on established publishing practices that essentially transfer content from one place (the lab or the desk) to another (the library).

What we are imagining, by contrast, is a new form of algorithmic openness, in which computation is used not as an afterthought or means of searching for things that have already been selected and sorted, but instead as a form of forethought, as a means of generating more diverse ecosystems of knowledge. What values do we care about in terms of human knowledge and how can we use the tools of data science to capture and more adequately represent those values in our system of scholarly communication? Instead of subject indexes and citation rankings, imagine filtering by institutional diversity, citational novelty, matters of public concern, or any number of other priorities. How might we encode these values to create smarter, more adaptable, and more open platforms and practices?

It is clear from our study and others like it that elite institutions continue to be the locus of the practices, techniques, virtues, and values that have come to define modern academic knowledge. They diffuse it, whether in the form of academic labor (personnel) or ideas (publication), from a concentrated center to a broader periphery. Using digital technologies to guide the circulation of knowledge does not inherently make one complicit in the “neoliberalization and corporatization” of higher education or a practitioner of “weapons of math destruction,” to use the data scientist Cathy O’Neil’s well-turned phrase. Wisely and openly used, such technologies can help us not only reveal, but potentially undo, longstanding disparities of institutional concentration. It is time we built a scholarly infrastructure that is more inclusive and more responsive to a broader range of voices, including those outside of the academy.

Over the course of the 19th century, universities adopted many of the norms of print culture and in so doing transformed themselves into modern research universities. We need a similar reinvention for our own universities as they enter a new age.

Addressing epistemic inequality, and not simply publication inequities, will require us to rethink what universities do and what they are for in a digital age. “Digitization” means more than just transferring print practices to digital formats. We need to integrate data science, knowledge of our past practices, and contemporary understandings of institutional norms to reinvigorate the intellectual openness of the university. We need to use all of our analytical and interpretive capabilities to rethink who and what counts. The university is a technology. Let’s treat it like one.