It's been a busy month, albeit without one particular thing that would merit its own post. So, in summary...

Digital humanities white paper

NITLE published a white paper I co-authored with Rebecca Davis, "Divided and Conquered: How Multivarious Isolation Is Suppressing Digital Humanities Scholarship" [pdf], which may be the first publication drawing extensively on the data collected during the Bamboo Planning Project (social isolation of scholars, siloed tools, difficulties involved in finding tools and content, etc.).

Bamboo data sorting done

I spent today doing the last data sorting for the Bamboo Planning Project. When I published the data and summaries on January 1st, I hadn't had time to include the Scholarly Narratives, or data from Workshops 3 and 4. Today, I finished posting the last of the Scholarly Narratives and Workshop 4, which wraps up the Bamboo Planning Project data. I'm considering trawling through the current Bamboo Technology Wiki for additional data at some point.

Slavic linguistics wiki

I gave a talk on the Slavic linguistics wiki at the Midwest Slavic Workshop last Friday with Monica Vickers, a first-year grad student at The Ohio State University who used the wiki last quarter in an MA prep class. Slides are available on Google Docs here. One of the concerns I've heard from professors about the wiki is that it provides students with a way to get out of doing their class reading. Interestingly, Monica noted that while students did try that, the wiki turned out to be a great way to refresh their memory of an article they had already read, but no substitute for doing the reading when it came to participating actively in class discussion.

Birchbark letters XML

For a few months, David Birnbaum and I have been working on a way to batch convert the birchbark letter transcriptions available online into Unicode. We've finally gotten the XML file with the PUA/Unicode correspondences right, and he's mostly done with a clever bit of XSLT to actually do the conversion.
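For the curious, here's a minimal sketch of the general approach; the mapping file name (pua-map.xml) and its structure are invented stand-ins rather than our actual correspondence file, and this simplified version assumes every correspondence is a single character on both sides, so that XPath's translate() function can do the work:

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Sketch only: copies a transcription document unchanged, except
         that PUA code points in text nodes are swapped for their real
         Unicode equivalents. Assumes a hypothetical mapping file like
         <map><char pua="&#xE000;" uni="&#x0463;"/>...</map> -->
    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

      <xsl:variable name="chars" select="document('pua-map.xml')/map/char"/>

      <!-- Concatenate the PUA characters and their Unicode targets into
           two parallel strings for translate() -->
      <xsl:variable name="pua">
        <xsl:for-each select="$chars"><xsl:value-of select="@pua"/></xsl:for-each>
      </xsl:variable>
      <xsl:variable name="uni">
        <xsl:for-each select="$chars"><xsl:value-of select="@uni"/></xsl:for-each>
      </xsl:variable>

      <!-- Identity transform for everything else -->
      <xsl:template match="@*|node()">
        <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
      </xsl:template>

      <!-- Remap characters in text nodes -->
      <xsl:template match="text()">
        <xsl:value-of select="translate(., $pua, $uni)"/>
      </xsl:template>
    </xsl:stylesheet>

A real conversion likely needs to be cleverer than this, since translate() can only handle one-to-one character substitutions.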

Relatedly, in sewing news, I turned some Spoonflower fabric scraps into giant cuddly oversized birchbark letters.

Drupal

In the last month, I've built a Drupal VRE-style site for a friend working on a project to analyze Facebook posts from Tunisia and the Tunisian diaspora, and I have some sketchy notes for a write-up. At work, I built a Drupal service catalog in less than two hours (if you exclude the 30+ hours spent cleaning up messy data, doing multiple imports, etc.). A write-up is about half done. I'm dabbling with a new site that uses Feeds to pull in weekly reports from major IT projects and display them in a way that's much more accessible than what we currently provide.

I'm also working on building a VRE for Bulgarian dialectology that I did a proof-of-concept for in February. Actually doing a batch import of all the pre-existing data from Word files (as opposed to manually entering data, as I did for the proof-of-concept) is going to be a task: over 7,000 word nodes, plus maybe a thousand sentence nodes, and a handful of others. Getting the data into a form where it can be imported has also been a challenge, between cleaning up inconsistencies and human error, and figuring out the XSLT to pull the right data out of the XHTML generated by Word2CleanHTML.
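To give a sense of what that extraction step looks like, here's a rough sketch rather than the actual stylesheet: it assumes the cleaned-up XHTML keeps each dialect form in an <em> inside a <p>, and the <words>/<word>/<form>/<gloss> elements are invented stand-ins for whatever format the Drupal import actually wants.

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Sketch only: pulls hypothetical word entries out of the XHTML
         that Word2CleanHTML produces. Assumes each entry is a <p> with
         the dialect form in an <em>; the output elements are invented.
         If the XHTML declares the XHTML namespace, the paths below
         would need a namespace prefix. -->
    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

      <xsl:output method="xml" indent="yes"/>

      <xsl:template match="/">
        <words>
          <xsl:for-each select="//p[em][normalize-space()]">
            <word>
              <form><xsl:value-of select="normalize-space(em[1])"/></form>
              <!-- Crude: treat whatever follows the form as the gloss -->
              <gloss>
                <xsl:value-of select="normalize-space(substring-after(., em[1]))"/>
              </gloss>
            </word>
          </xsl:for-each>
        </words>
      </xsl:template>
    </xsl:stylesheet>

Even a sketch like this glosses over the messy part: Word documents are rarely as consistent as the markup above assumes, which is where most of the cleanup time goes.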

Cocoon running on an Ubuntu server

Thanks to Gerry Siarny, there's now a proof-of-concept of Cocoon running on an Ubuntu server at Slicehost. A blog post on how to do it is coming soon.