The following are my remarks for the DH 2020 "Tool Criticism 3.0" workshop hosted by the Digital Literary Stylistics SIG. If you'd like to enjoy it in video form (it's fast-paced and colorful and fairly fun for this kind of talk), you can watch the video here.

Hi, I’m Quinn Dombrowski, Academic Technology Specialist in the Division of Literatures, Cultures, and Languages, and the Library at Stanford University in California, the United States. And today I’m happy to share some thoughts on DH tools and failure, which I’m calling… “IT DOESN’T WORK.”

Before coming to Stanford in 2018, I worked for 10 years in central IT at the University of Chicago and UC Berkeley. Much of that time was in groups that would be on the receiving end of support tickets when things broke… or, more specifically, when things didn’t work the way people were expecting. Especially when you’re new, it’s not always clear what the situation is when your inbox is filling up with support requests, all with some variation of the message “IT DOESN’T WORK”. Maybe the server is down. Maybe user data has been corrupted. Or maybe it’s September and lots of people are having their first encounter with a system with known UI or UX issues, and actually, everything’s FINE… or at least, operating as expected.

In 2010, Bethany Nowviskie wrote a blog post, “Eternal September of the Digital Humanities”, where she grapples with, as she puts it, “a tension between goodwill and exhaustion — outreach and retreat. I’m sympathetic to the weariness of these people, treading water, always “on.” I feel it, too. But it’s their voicing of frustration and possible disengagement that alarms me.”

I’ll get back to that frustration and disengagement.

But let’s first talk about DH tools, and the ways they “don’t work”.

One thing I’ve learned over my years of doing direct end-user support is that what, exactly, “doesn’t work” often depends on your perspective. In college, I worked in a computer lab, sitting at a special desk for multimedia support. Unfortunately, that desk was also right next to the printer and printing payment kiosk. You can imagine how many questions I got about video editing, compared to the number of questions I got because people had failed to read the giant sign about how to use the printer. The printer was almost always fine — so you could argue that the problem was actually the users. Or, from another perspective, maybe what was broken was the software procurement process that led my boss to purchase pay-for-printing software that so many people found hard to understand.

I see similar dynamics around DH tools.

So much work goes into building DH tools. And overall, that work is absolutely a gift to a field that can’t assume its new practitioners have any exposure to — let alone proficiency with — programming. Still, there’s an important point to make here: tools are a stopgap for the fact that institutional structures to teach humanities graduate students to code are still few and far between in most places in the world. And the people most likely to have formal training in computational methods and languages before pursuing graduate study in the humanities fall within a fairly narrow set of demographics. Tools, in such a context, provide a way to bridge a gap and enable more people to apply digital humanities methods to the kinds of research questions that interest them. But how big is that gap, and how far can tools take you across it?

Let’s take the example of Scott Enderle's Topic Modeling Tool — and I choose this not as a bad example, but as an excellent one. It has thorough, readable documentation, lots of beginner-friendly functionality (like having a graphical user interface, to start with), and it’s one that I’ve used in the classroom myself, making mistakes along the way.

The Topic Modeling Tool is an easy-to-use tool for… well... doing topic modeling. It’s simpler to get started with than installing and running Mallet from the command line, and the output it provides — readable HTML files of topics and their distributions across documents — is much easier to work with for interpretation than what you get out of Mallet.
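
For the curious, here’s a rough sketch of what a comparable do-it-yourself workflow looks like in Python, using the gensim library rather than Mallet (which is what the Topic Modeling Tool actually wraps). This is an illustration of the steps the GUI hides (reading files, tokenizing, building a corpus, fitting a model), not the tool’s own code, and the folder name and the choice of 20 topics are just placeholders.

```python
# Illustrative topic-modeling workflow in Python with gensim. This is NOT how
# the Topic Modeling Tool works internally (it wraps Mallet); it just shows the
# steps a GUI tool handles for you.
from pathlib import Path
from gensim import corpora, models

# Assume a folder of plain-text files, one document per file (placeholder name).
docs = [p.read_text(encoding="utf-8") for p in Path("texts").glob("*.txt")]

# Very naive tokenization; a real workflow would also strip stopwords, punctuation, etc.
tokenized = [doc.lower().split() for doc in docs]

# Map words to IDs and represent each document as a bag of words.
dictionary = corpora.Dictionary(tokenized)
corpus = [dictionary.doc2bow(tokens) for tokens in tokenized]

# Fit an LDA model; 20 topics is an arbitrary choice for illustration.
lda = models.LdaModel(corpus, num_topics=20, id2word=dictionary, passes=10)

# Print the top words in each topic.
for topic_id, words in lda.print_topics(num_words=10):
    print(topic_id, words)
```

Every one of those steps is a place where something can quietly go sideways, which is exactly the work the GUI takes off your plate.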

Let’s see what happens when you give it something that makes no sense at all… like a folder full of pictures. Friends, let’s topic model some cats!

And, surely, you might think, no one would do this.

But step back for a moment and think about the student who doesn’t have access to formal DH pedagogy as part of their educational program. If it works for cat JPGs… it’ll also give us some kind of result for PDFs with no OCR layer — and LOTS OF PEOPLE have texts they might like to topic model in the form of PDFs with no OCR.

Or take a less-wrong example: what happens when someone feeds the Topic Modeling Tool one single document? That’s the thing with topic modeling: to get a meaningful result, you have to give the algorithm multiple documents. You can split the “document” you want to look at into multiple pieces for purposes of analysis, but you have to give it multiple things, or else you’ll get a basically random assignment of words to topics. But if you give the Topic Modeling Tool a single document… the experience is indistinguishable from giving it a set of documents that it CAN produce some meaningful result from. Even though the results are useless.
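
If what you have really is one long text, the fix is mechanical: split it into pieces before you hand it over. Here’s a minimal sketch of that splitting step in Python; the file name is made up, and the 1,000-word chunk size is an arbitrary choice for illustration.

```python
# Split one long text into many smaller "documents" so a topic model has
# multiple things to compare. File name and chunk size are placeholders.
from pathlib import Path

CHUNK_SIZE = 1000  # words per chunk; pick whatever unit makes sense for your text

words = Path("long_novel.txt").read_text(encoding="utf-8").split()

out_dir = Path("chunks")
out_dir.mkdir(exist_ok=True)

for i in range(0, len(words), CHUNK_SIZE):
    chunk = " ".join(words[i:i + CHUNK_SIZE])
    (out_dir / f"chunk_{i // CHUNK_SIZE:04d}.txt").write_text(chunk, encoding="utf-8")
```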

There are a few different ways this could be addressed — you could start with some very emphatic messaging in the Quickstart guide or documentation that you NEED MULTIPLE DOCUMENTS (and an explanation that it’s okay to break up a long document into multiple pieces). But I always think back to the computer lab, and people NOT READING THE SIGN. So, slightly harder to implement, you could add a warning message when someone selects a single text file. Let them go ahead and do it, if they really want, but build in opportunities for people to confirm — when they’re doing something that’s likely to make the tool “not work” — that it’s really what they want.
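
And that kind of guardrail doesn’t have to be elaborate. Purely as a sketch of the idea (not how the Topic Modeling Tool itself is built), a pre-flight check could be as simple as this:

```python
# Sketch of a pre-flight check: warn (but don't refuse) when the input folder
# contains only one document, since the resulting topics won't be meaningful.
from pathlib import Path

def confirm_input_folder(folder):
    files = list(Path(folder).glob("*.txt"))
    if len(files) >= 2:
        return True
    print("Warning: topic modeling needs multiple documents to produce "
          "meaningful topics. Consider splitting your text into pieces first.")
    answer = input("Continue anyway? [y/N] ")
    return answer.strip().lower() == "y"
```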

Now that’s an easy case — in most other situations, problems result from the contents of the document, rather than something as simple as the number of documents provided to the tool. 

Palladio is a data visualization tool developed by the Humanities + Design Lab at Stanford, and recently my colleague Simon Wiles has been working on a new version with embeddable visualizations. 

One of the things I really appreciate about how Nicole Coleman and the Palladio team developed the interface is that there’s a link, right above the data upload form, with the question “Not sure how to structure your data?” That will take you to a page with detailed information about how to name the columns of your spreadsheets, what characters cause it to break, how to format dates, and all those other useful things that one often has to find out by trial and error… assuming you’re used to working with tools that way, and don’t throw your hands up in despair and figure the problem is you, and maybe you’re really not cut out for doing digital humanities because you can’t get any of it to work.
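
You could even imagine scripting that kind of advice as a check that runs before upload. Purely as a hypothetical sketch: the rules below (YYYY-MM-DD-style dates, no line breaks inside cells) are stand-ins for illustration, not Palladio’s actual requirements, and the file and column names are made up.

```python
# Hypothetical pre-flight check on a CSV before uploading it to a visualization
# tool. The specific rules here are illustrative assumptions, not any
# particular tool's documented requirements.
import csv
import re

# Accepts 1989, 1989-07, or 1989-07-04 (an assumed, ISO-style convention).
DATE_PATTERN = re.compile(r"^\d{4}(-\d{2}(-\d{2})?)?$")

def check_csv(path, date_columns):
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        for row_num, row in enumerate(reader, start=2):  # row 1 is the header
            for col in date_columns:
                value = (row.get(col) or "").strip()
                if value and not DATE_PATTERN.match(value):
                    print(f"Row {row_num}: '{value}' in '{col}' doesn't look like a date.")
            for col, value in row.items():
                if value and "\n" in value:
                    print(f"Row {row_num}: '{col}' contains a line break.")

check_csv("club_members.csv", date_columns=["birth_date"])
```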

Right now so much is unclear about the future — about the humanities broadly (for instance, coming under attack in new tuition models in Australia), and about higher education in general, where it’s becoming increasingly clear that at least in the United States, not all our institutions will survive the pandemic. It seems inevitable that some things will change — but there’s no reason to assume that they’ll change in a direction that will lead to coding and data skills being core to a humanities education. 

It’s widely acknowledged that a lot, if not MOST, of digital humanities training takes place through less formal contexts than a curriculum: things like workshops, and self-study. But we’re in a context where some of the longer, intensive, residential workshops (like the summer institutes) are hard to imagine being possible in the near term. Which is really unfortunate, given the valuable experience these workshops provide, though many groups of people — including parents of small children, and people without the means to take a week off and travel — had challenges accessing these opportunities even in the before-times. 

There are, as a result of all this, more people in a position where self-study or very short workshops are the only option for developing DH skills. And that’s something that tool developers need to keep in mind. You’re not building tools just for yourself and your colleagues — at least, not if you want those tools to have their greatest potential impact. You’re building them … for yourself and your colleagues… and also the grad student whose stomach drops with trepidation when they have to look at a command line… and also the librarian who took a basic Python workshop a year ago and has done a REALLY GOOD JOB pushing other tools to their limit in order to avoid having to resort to using basic Python since then. I’m not saying all tools need to have a graphical interface, or be immediately intuitive, or avoid making people write code. But rather, it’s important to keep in mind the background that your users have, or don’t have, and spell out through documentation, videos, tutorials, and other materials those things that you might just take for granted. Things like, for instance, whether it’s recommended to lemmatize a text in a language that uses inflection, or how to format file names or metadata files.
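
To pick just one of those taken-for-granted steps: lemmatizing, or reducing inflected word forms back to their dictionary forms, takes only a few lines of code once someone has told you it matters and when it’s appropriate. Here’s a quick sketch using spaCy’s small Spanish model as a stand-in for any inflected language; the sentence is made up, and you’d need to install spaCy and download the model first.

```python
# Lemmatize an inflected-language text with spaCy: the kind of preprocessing
# step that documentation often takes for granted.
# Setup (assumed): pip install spacy && python -m spacy download es_core_news_sm
import spacy

nlp = spacy.load("es_core_news_sm")
doc = nlp("Las niñeras cuidaban a los niños del vecindario.")

# Inflected forms like "cuidaban" should come back as their dictionary form ("cuidar").
lemmas = [token.lemma_ for token in doc if not token.is_punct]
print(lemmas)
```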

It doesn’t necessarily have to be the responsibility of the tool developers to bridge that gap. 

Last year, I started a project called the Data-Sitters Club, as part of the Stanford Literary Lab, where a group of friends and I try to apply various DH text analysis tools and methods to “The Baby-Sitters Club”, a series of girls’ literature from the ’80s and ’90s that was translated into a few different languages. We write up our results — but more importantly, our entire PROCESS, including the mistakes we make, the challenges we face, and our frustrations along the way — in chatty, colloquial prose. Our style doesn’t work equally well for everyone — but that’s okay! There are other resources out there, including “Programming Historian”, that cover some of the same tools in a different style, one that makes those tools accessible to a different audience than we’re going for with the Data-Sitters Club. When it comes to tutorials, especially for people early in their journey of exploring digital tools and methods, let a thousand flowers bloom! Navigating technical documentation, and instructions not written with your particular perspective in mind, are skills you develop over time — but it takes confidence to take those risks, and building the necessary confidence is easier if you’re working with materials that resonate with you.

Now, realistically, there are limits — I’m not expecting DH tools to rethink their interfaces and/or documentation to be usable by 6-year-olds like my junior coworker, Sam, here.

Sam: I’m learning to do spreadsheets!
(Side note: I was pregnant with him when I gave the now-famous Project Bamboo talk — how weird is that?)

But thinking about the original theme of this conference in Ottawa — Carrefours or Intersections — I think it is precisely at these places of intersection that DH text analysis tools have the most to offer… and the most to learn. DH-curious undergrads and grad students with limited computational background are closer in skill level to the general public than to the text analysis “in-crowd” that we often think of first. We're in a moment where, at least in North America, there’s much more attention being paid to the narratives that historically HAVEN’T been told. And as tool builders, we have an opportunity to enable other people to tell the stories that matter to them. But for that to happen, we need to build both our tools and our documentation with that audience in mind — so that our tools “work”.

Which brings me back to Bethany Nowviskie’s “Eternal September of the Digital Humanities”. Much has changed in the field of digital humanities since 2010, but recent efforts to split off “computational humanities research” feel like a manifestation of the “frustration and possible disengagement” she warned about. It’s much easier to start a new community that builds tools for people who share your background and expertise — you can be confident the tools will work for them, because they work for you.

While no one likes to hear their tools “don’t work”, or the output “doesn’t make sense”, the challenges posed by different kinds of texts and data provide valuable opportunities to question the assumptions that underpin both our theories and our methods. And we’re never going to get there if you have to be comfortable submitting papers in LaTeX to join the conversation. Narrowing the pool of people who can engage with DH tools means the tools will “work” more often, but it achieves “success” by changing the goal. Why are we doing all this anyways? I’m in it to better understand human society and culture — in all its complex, diverse manifestations. If you never hear that your tool “doesn’t work”, either you’re some kind of unprecedented genius who has managed to account for all the linguistic, structural, and historical variation we find in the human record… or the tool isn’t being pushed to its breaking point.

Feedback that your tool “doesn’t work” is a gift. It may be a gift where there’s some assembly required — figuring out exactly what went wrong, and how, and why can be frustrating. But it’s worth receiving the gift in a spirit of generosity — after all, anyone who says it is taking a risk themselves in admitting that something went wrong. Understanding the ways your tool doesn’t work for people who bring a certain set of assumptions, or a particular kind of data, is a way to enrich both the tool and your own understanding of the world’s complexity. The answer to “it doesn’t work” shouldn’t be “you’re doing it wrong”, but “tell me more”.