Project Bamboo wiki: Corpora Camp Lessons Learned Report
<p><strong>This was originally published on the Project Bamboo wiki at <a href="https://wiki.projectbamboo.org/display/BTECH/Corpora+Camp+Lessons+Learned+Report">https://wiki.projectbamboo.org/display/BTECH/Corpora+Camp+Lessons+Learne...</a> (<a href="https://wiki.projectbamboo.org/x/-AZYAQ">https://wiki.projectbamboo.org/x/-AZYAQ</a>) by Seth Denbo, last modified by Travis Brown on 17 May 2011.</strong></p> <p>I. Introduction<br /> II. Overview<br /> III. Collections<br /> III.1 Common representation and data store<br /> III.2 Hathi texts<br /> III.3 Other collections<br /> IV. Architecture<br /> IV.1 Cloud Computing<br /> IV.2 Collection Interoperability<br /> IV.3 Support for Wide Range of Interfaces<br /> V. Application<br /> V.1 Analysis<br /> V.2 Visualization design<br /> V.3 Visualization software<br /> VI. Recommendations<br /> VI.1 Methods for text exploration<br /> VI.2 Collections Interoperability<br /> VI.3 Create a Path to an Expert-level Interface<br /> VII. Conclusion<br /> <br class="atl-forced-newline" /></p> <h3 id="CorporaCampLessonsLearnedReport-hu5ueik9qebizIIntroduction"><span class="confluence-anchor-link" id="CorporaCampLessonsLearnedReport-hu5ueik9qebiz"></span><strong>I. Introduction</strong></h3> <p>From March 2 to 4, 2011, at the University of Maryland, the Bamboo Technology Project Corpora Space work group held CorporaCamp, the first of three planned workshops. These events are part of the design phase of Corpora Space, which began in January 2011 and continues for fifteen months. The primary outcome of this phase is a road map for the subsequent eighteen-month implementation of Corpora Space, so all of our activities are focused on informing this document.</p> <p>During CorporaCamp the participants designed and developed a tool for the exploration of texts from distributed, large-scale collections. The primary purpose of this exercise in tool building was to gain a greater understanding of the challenges involved in building Corpora Space infrastructure. This report will address the lessons learned in the three main areas on which CorporaCamp participants worked: the development of a platform architecture for Corpora Space, a prototype application, and interoperability among the collections to facilitate research.</p> <p>The overview section below lists the most significant lessons learned, presenting them in three groups: the trade-offs we faced, the challenges we identified, and the successes we had. The rest of this report presents a more detailed account of our process and the lessons learned.</p> <h3 id="CorporaCampLessonsLearnedReport-h97kly0uvyx1vIIOverview"><span class="confluence-anchor-link" id="CorporaCampLessonsLearnedReport-h97kly0uvyx1v"></span><strong>II. Overview</strong></h3> <ol> <li><strong>Trade-offs:</strong> CorporaCamp forced us to identify and negotiate several trade-offs involved in implementing a piece of functionality across diverse collections, including, most importantly, the following: <ol> <li><strong>Data representations:</strong> The collections we were working with had very different degrees of structure and annotation, and we had to decide whether to enrich the less structured texts or flatten the more structured ones. Given our limited resources we adopted the latter approach in most cases.</li> <li><strong>Architecture:</strong> We had to balance the advantages of a distributed architecture, in which independent agents interact through a switchboard, against those of more traditional web application frameworks, in which components are tightly coupled. 
We developed our first working prototype using the second approach, but also built a set of distributed components (using the <a href="https://wiki.projectbamboo.org/display/BTECH/Corpora+Camp+Platform+API" rel="nofollow">platform API</a> that we've code-named <em>Utukku</em>) that implement many pieces of the functionality.</li> <li><strong>Legacy systems and emerging technologies:</strong> In this case we made a design decision to be forward-looking in our choices of standards and technologies — for example by adopting HTML5 and the WebSocket protocol in the distributed version of the application. While this approach offers many advantages, it also shuts out some users.</li> <li><strong>External software libraries:</strong> At several points in the development process we had to choose between general libraries (or tools) that would allow our core functionality to be more easily extended in the future, and more limited tools that met our immediate needs with less work. In some cases — the visualization code, for example — we took the latter approach for the prototype but have partial implementations using more general tools.</li> <li><strong>User interfaces and visualization:</strong> In our initial design plans the visualizations that we intended to present to the user were very simple. Once we had a working prototype, we realized that we needed to add elements to the interface in order to allow users to navigate the data in a useful manner. This additional complexity requires more user engagement and training.</li> </ol> </li> <li><strong>Challenges:</strong> For most of these trade-offs we were able to balance the opposing concerns satisfactorily, but the issue of creating interoperable representations of data from diverse sources, in particular, posed problems that we were not able to address within the scope of the workshop: <ol> <li><strong>Metadata:</strong> Deciding on a schema for metadata that works across collections is complex, even for a relatively simple application.</li> <li><strong>Provenance and versioning:</strong> We need to be able to record and provide access to information about changes to objects in the collection: when the change occurred, who made the change (along with information about that person), and what exactly was changed.</li> </ol> </li> <li><strong>Successes:</strong> In some cases we believe that the decisions we made proved particularly successful: <ol> <li><strong>Leveraging existing tools and services:</strong> We demonstrated that it is possible to piece together a diverse set of resources quickly and effectively. 
These resources included the following: <ol> <li><strong>Cloud computing:</strong> Using Amazon's <a href="http://aws.amazon.com/ec2/" class="external-link" rel="nofollow">Elastic Compute Cloud</a> (EC2) service we were able to provide each team at the workshop with uniform development servers that could be scaled as necessary.</li> <li><strong>Data store:</strong> We were able to use <a href="http://www.elasticsearch.org/" class="external-link" rel="nofollow">ElasticSearch</a>, a Lucene-based search engine that provides a REST interface, as a lightweight and flexible data store.</li> <li><strong>Analysis:</strong> UMass's <a href="http://mallet.cs.umass.edu/" class="external-link" rel="nofollow">MALLET</a> toolkit and the <a href="http://acs.lbl.gov/software/colt/" class="external-link" rel="nofollow">Colt</a> library — developed by CERN for "High Performance Scientific and Technical Computing in Java" — allowed us to perform complex analysis of our documents efficiently.</li> </ol> </li> <li><strong>Interfacing with collections:</strong> Bamboo doesn't have to control collections in order to make gains — our use of ElasticSearch's REST API demonstrates that we can easily interface with external collections.</li> <li><strong>Extensibility:</strong> We demonstrated that our distributed approach makes it relatively easy for users to build applications — for example a JavaScript-based web search interface — that work with our architecture without requiring the user to have access to servers or other infrastructure of their own.</li> </ol> </li> </ol> <p>The rapid development process of the workshop required us constantly to balance our long-term goals — experimenting with a distributed, extensible architecture — against our desire to have a working prototype implemented by the end of the three days. In many cases we had two development threads running in parallel, with one group working on a more general solution and another on a simpler fallback. We believe that this process provided us with a better sense of the problems and decisions — and the range of consequences of those decisions — that we'll be faced with in developing future Corpora Space applications. In particular we found that we had underestimated the difficulty of collections interoperability in our preparation for the workshop, while we were much more successful in our use of a diverse set of platforms, tools, and libraries.</p> <p>The following sections of this document discuss the issues outlined above in more detail as they relate to the collections, architecture, and the functionality of the application.</p> <h3 id="CorporaCampLessonsLearnedReport-hkul7gim3lm1xIIICollections"><span class="confluence-anchor-link" id="CorporaCampLessonsLearnedReport-hkul7gim3lm1x"></span><strong>III. Collections</strong></h3> <h4 id="CorporaCampLessonsLearnedReport-h3x6b7ul0jnomIII1Commonrepresentationanddatastore"><span class="confluence-anchor-link" id="CorporaCampLessonsLearnedReport-h3x6b7ul0jnom"></span><strong>III.1 Common representation and data store</strong></h4> <p>The functionality that we had decided to implement at the workshop was designed to operate on very simple representations of the texts from our three collections. 
Because these collections used two very different formats — TEI-A in the case of TCP and Perseus, and a simple page-based plain-text format for Hathi — we had a choice between attempting to add structure to the Hathi texts, in order to bring them closer to TCP and Perseus, and removing structure from the TCP and Perseus texts. While preparing for the workshop we ran a series of experiments on the Hathi texts that suggested that the latter would be a more practical approach, and during the first sessions of the workshop we decided to use a JSON format that would include some basic metadata about each document as well as two simple representations of its content: a plain-text version for use in analysis, and an HTML version for display in the drill-down view.</p> <p>We decided to use ElasticSearch, a Lucene-based full-text search engine, to store and query our texts. ElasticSearch is properly an index, not a data store, but since the functionality we were implementing did not require writing to the collection, we decided that querying ElasticSearch through its REST API would be an appropriate approximation of the way that Corpora Space applications might interact with external collections in the future.</p>
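<p>As a concrete illustration, the sketch below shows roughly what a document in this common JSON format might look like, and how such a document could be indexed and queried through ElasticSearch's REST interface. The index name, field names, and metadata values are illustrative, and the snippet assumes a local ElasticSearch instance and the Python requests library; it is meant to suggest the shape of the interaction rather than to document the code written at the workshop.</p>
<pre>
import json
import requests

ES = "http://localhost:9200"

# A hypothetical document in the common JSON format: basic metadata plus a
# plain-text representation for analysis and an HTML representation for the
# drill-down view.
doc = {
    "collection": "hathi",
    "volume_id": "mdp.39015078564385",  # illustrative identifier
    "page": 147,
    "title": "Gulliver's Travels",       # illustrative metadata values
    "date": 1727,
    "text": "I ftayed but two Months with my Wife and Family ...",
    "html": "...",  # HTML rendering of the page for the drill-down view
}

# Index the page in a (hypothetical) "corpora" index over the REST API.
doc_id = "{0}-{1}".format(doc["volume_id"], doc["page"])
requests.put("{0}/corpora/page/{1}".format(ES, doc_id), data=json.dumps(doc))

# Query the same REST interface; a Corpora Space application could interact
# with an external collection in much the same way.
resp = requests.get("{0}/corpora/_search".format(ES),
                    params={"q": "text:travels AND date:[1700 TO 1799]"})
for hit in resp.json()["hits"]["hits"]:
    print(hit["_source"]["title"], hit["_source"]["page"])
</pre>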
<h4 id="CorporaCampLessonsLearnedReport-h3slynj11msjeIII2Hathitexts"><span class="confluence-anchor-link" id="CorporaCampLessonsLearnedReport-h3slynj11msje"></span><strong>III.2 Hathi texts</strong></h4> <p>The Hathi Trust provided us with a bulk transfer of approximately 120,000 public-domain texts that had been digitized by parties other than Google, and that therefore had no restrictions on use or redistribution. Because of the focus of our other collections we decided to work primarily with a subset of the Hathi texts published before 1837. The publication date field in the metadata was not standardized, and we decided to err on the side of excluding texts when the field couldn't be easily parsed. We also filtered out a small set of documents that were marked as being in the public domain only in the United States. This selection process left us with approximately 10,000 texts, many of which were duplicates, with the same edition having been digitized at multiple institutions. We did not have the resources at the workshop to develop a consistent method for filtering duplicates given the available metadata.</p> <p>The quality of the OCR for these texts posed a more serious challenge. The following example from <a href="http://babel.hathitrust.org/cgi/pt?id=mdp.39015078564385&start=1;size=100;page=root;seq=173;view=image;num=149;orient=0" class="external-link" rel="nofollow">an early edition of <em>Gulliver's Travels</em></a> is representative of texts published before 1800:</p> <blockquote><p>wLILLIPUT. 147 I ftayed but two Months with my Wife and Family 5 for my infatiable De- fire of feeing foreign Countries would fuffer me to continue no longer. I left fifteen hundred Pounds with my Wife, and fixed her in a good Houfe at Red- riff. My remaining Stock I carried with me, part in Money, and part in Goods, in hopes to improve my For- tunes. My eldeft Uncle John had left me an Effate in Land, near Epping, of about thirty Pounds a Years and I had along Leafe of the Black-Bull in Fet- ter-Lane, which yielded me as much more: fo that-1 was-not in any danger of leaving my Family upon the Parifh. My Son Johnny, named fo after his Uncle, was at the Grammar School, and atowardly Child. My Daughter Betty (who is now well married, and has Chil- dren) was then at her Needle-Work. I took leave of my-Wife, and Boy and Girl, with tears on both fides, and>went on</p> </blockquote> <p>Many examples were substantially worse; in our initial investigations a word error rate of 70-80% was not unusual. The texts also had no consistent layout analysis: only page breaks were captured reliably, and the variation in the OCR output made it difficult to recover paragraph breaks in an automated fashion.</p> <p>These problems put constraints on the kind of analysis that we could usefully perform on the data. We had initially decided to offer multiple feature representations of the texts, including something like n-grams as stylistic features, but the extremely high word error rates made this approach less useful. In our initial experiments we found that using Latent Dirichlet Allocation for topic modeling provided an alternative way of characterizing documents usefully despite the many errors.</p> <p>The lack of consistent structure in the Hathi texts also limited our options for selecting appropriate document units for analysis. While it might have been possible to use Hathi's OCR coordinate data to identify paragraphs by indentation or page layout, this kind of analysis was beyond the scope of the workshop, and we decided to use pages as our documents for the initial prototype. While this approach is not ideal, since pages generally do not correspond to organic divisions in the text, it seemed to produce interpretable output in our initial experiments.</p> <h4 id="CorporaCampLessonsLearnedReport-h3slynj11msjeIII3Othercollections"><span class="confluence-anchor-link" id="CorporaCampLessonsLearnedReport-h3slynj11msje"></span><strong>III.3 Other collections</strong></h4> <p>For the TCP and Perseus collections the primary challenge was dividing the texts into units that would be comparable to the page divisions of the Hathi texts. We also needed to create HTML representations of the individual documents for presentation in the drill-down view, as well as a way to create links back to the original collections.</p> <h3 id="CorporaCampLessonsLearnedReport-hqbqar8gr1l9xIVArchitecture"><span class="confluence-anchor-link" id="CorporaCampLessonsLearnedReport-hqbqar8gr1l9x"></span><strong>IV. Architecture</strong></h3> <h4 id="CorporaCampLessonsLearnedReport-hari1tba019i5IV1CloudComputing"><span class="confluence-anchor-link" id="CorporaCampLessonsLearnedReport-hari1tba019i5"></span><strong>IV.1 Cloud Computing</strong></h4> <p>In preparation for CorporaCamp we built an Amazon EC2 (Elastic Compute Cloud) virtual machine image to facilitate development by allowing the workshop participants to have access to uniform development machines with essential libraries and applications installed and configured in advance. We took as our starting point the official Ubuntu 10.04 LTS image distributed by Canonical and installed and configured Apache Tomcat, ElasticSearch, Git, the MALLET machine learning toolkit, and a number of other libraries and applications.</p> <p>On the first day of the workshop we started three instances of this machine image: two low-powered instances for specific development groups and one large instance with 7.5 GB of memory and two virtual cores for more computationally intensive processing tasks. This approach allowed us to coordinate development work and share data effectively. 
After the conclusion of the workshop we turned off all three of these instances and started two new small instances: one to host the prototype we had completed and the other to support ongoing development on the Utukku-based version.</p> <h4 id="CorporaCampLessonsLearnedReport-hty164ajfyslIV2CollectionInteroperability"><span class="confluence-anchor-link" id="CorporaCampLessonsLearnedReport-hty164ajfysl"></span><strong>IV.2 Collection Interoperability</strong></h4> <p>It is easy to think that collection interoperability is a universal advantage and that accessing all collections through the same interface is a step forward, but interoperability has costs, and there are situations in which the properties of the collection will need to be preserved. Some collections are built around a specific set of research questions. Not all humanists will want all collections to have the same interface, because they have their own questions that might depend on particular properties of the collections. While this may not be as much of an issue with the large collections we looked at (e.g., Hathi Trust, ECCO, EEBO), it will be important when we try to bring in smaller curated collections.</p> <p>The CorporaCamp platform presents functionality such as an interface into a collection as an XML namespace with a collection of functions. The platform can support multiple collection profiles by assigning a different namespace to each profile. The collections can advertise which profiles they support by having the agent export the corresponding namespaces to the ecosystem.</p> <h4 id="CorporaCampLessonsLearnedReport-h93zhtx2ay29kIV3SupportforWideRangeofInterfaces"><span class="confluence-anchor-link" id="CorporaCampLessonsLearnedReport-h93zhtx2ay29k"></span><strong>IV.3 Support for Wide Range of Interfaces</strong></h4> <p>Going into CorporaCamp, we thought that we should support a wide range of interfaces, based on our experience with systems such as <em>Mathematica</em>, which allow users to pose their research questions in whatever way best fits them, and on our awareness of how steep the learning curve for such a system can be. We also knew that as users became experts with the tools, they would want greater flexibility and control.</p> <p>The distributed Utukku architecture allowed us to create a web-based JavaScript client as well as a command-line client. The command-line client allowed us to ask questions of the collections that the web-based client was not designed to support. For example, we were able to look at the frequency of texts and modify our queries as we explored the question in a way that would be difficult to build into a graphical interface.</p> <p>We confirmed that having a low-level interface can be useful and powerful as a companion to specialized tools that are built around specific research questions.</p> <h3 id="CorporaCampLessonsLearnedReport-h5edpojly6s3qVApplication"><span class="confluence-anchor-link" id="CorporaCampLessonsLearnedReport-h5edpojly6s3q"></span><strong>V. Application</strong></h3> <h4 id="CorporaCampLessonsLearnedReport-h2xcmqvokkjuxV1Analysis"><span class="confluence-anchor-link" id="CorporaCampLessonsLearnedReport-h2xcmqvokkjux"></span><strong>V.1 Analysis</strong></h4> <p>The implementation of the methods we had chosen for analyzing and visualizing the texts was relatively straightforward. After transforming a selection of texts from our source collections into a common JSON format, we were able to use MALLET, a machine learning toolkit developed at the University of Massachusetts Amherst, to learn a topic model from a subset of the texts and to use that model to annotate the entire selection. The first of these two steps must be done in advance, since it can take up to several hours for large training sets (on the order of 100 million words, for example). The annotation step, in which every document is labeled with topic assignments, could be done dynamically if necessary, but for the sake of efficiency and convenience we also performed it as a preprocessing step.</p>
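<p>The sketch below illustrates this two-step workflow (training a topic model ahead of time, then using the trained inferencer to annotate documents) as a thin Python wrapper around MALLET's command-line interface. The install path, directory layout, file names, and parameter values are hypothetical; the MALLET commands and options shown are standard, but this is an illustrative reconstruction rather than the scripts used at the workshop.</p>
<pre>
import subprocess

MALLET = "/opt/mallet/bin/mallet"  # hypothetical install path

def mallet(*args):
    """Invoke the MALLET command-line tool and fail loudly on errors."""
    subprocess.check_call([MALLET] + list(args))

# Step 1 (done in advance): import the plain-text pages used for training and
# learn a topic model. Training can take hours for large selections, so the
# topic keys and the inferencer are saved to disk for later use.
mallet("import-dir", "--input", "pages/training",
       "--output", "training.mallet",
       "--keep-sequence", "--remove-stopwords")
mallet("train-topics", "--input", "training.mallet",
       "--num-topics", "100", "--num-iterations", "1000",
       "--output-topic-keys", "topic-keys.txt",
       "--inferencer-filename", "inferencer.mallet")

# Step 2 (could be run dynamically, but done here as preprocessing): label
# every document in the full selection with topic proportions, reusing the
# training pipeline so that both sets share the same vocabulary.
mallet("import-dir", "--input", "pages/all",
       "--output", "all.mallet",
       "--keep-sequence", "--remove-stopwords",
       "--use-pipe-from", "training.mallet")
mallet("infer-topics", "--inferencer", "inferencer.mallet",
       "--input", "all.mallet",
       "--output-doc-topics", "doc-topics.txt")
</pre>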
<p>Our approach to visualization requires principal component analysis to be performed on the user's current selection of texts for each specific experiment. We had used the R environment for PCA in our initial demonstrations, but R is a complex tool and is not primarily designed to support user-friendly web applications. We decided instead to implement this part of the analysis in a library that could run on the Java Virtual Machine, which would allow it to be used from the Ruby version of Utukku via JRuby, as well as from a wide range of web application frameworks.</p> <p>We were able to create an efficient implementation of PCA in Scala using the linear algebra packages of the <a href="http://acs.lbl.gov/software/colt/" class="external-link" rel="nofollow">Colt library</a>, which was developed by CERN for high-performance scientific computing in Java. This approach is fast and memory-efficient enough to serve dozens of concurrent users from a single Amazon EC2 "micro" instance.</p> <p>In our early experiments with the application we discovered a few recurring problems with noise in our source texts creating uninteresting artifacts in the visualizations. We were able to resolve most of these issues by adding several additional preprocessing steps: for the texts from Hathi, for example, we removed running heads from the pages, and ignored all pages with fewer than 40 characters. Apart from these minor changes, the analysis components have required very few revisions since the workshop.</p> <h4 id="CorporaCampLessonsLearnedReport-h9eh9tt3q8a5pV2Visualizationdesign"><span class="confluence-anchor-link" id="CorporaCampLessonsLearnedReport-h9eh9tt3q8a5p"></span><strong>V.2 Visualization design</strong></h4> <p>In our experiments with the application on the last day of the workshop, we found that while many features of the visualizations it produced were intuitive and interpretable, others were simply confusing. This problem was compounded by the fact that the drill-down functionality was unsatisfactory as a method for exploring the space represented in the map: examining individual pages was simply too time-consuming. We've attempted to make the map easier to interpret by adding a chart showing the "component loadings," which represent the contributions of topics to the two principal components currently shown in the map. We also added a graph showing the variance captured by each of the first eight principal components. This graph provides a visual explanation of how well the map explains the structure of the data. If the graph falls off quickly after the first two components, that is an indication that the dimensions shown in the map do a good job of summarizing the structure of the data. If the graph slopes more slowly, that indicates the opposite.</p> <p><img src="/sites/default/files/bamboo/graphs-01.png" /></p>
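<p>A minimal NumPy sketch of the analysis behind the map and the two charts follows; it is illustrative only (the PCA used in the application is implemented in Scala with the Colt library, as described above). Given a matrix of per-document topic proportions, it projects the documents onto the first two principal components for the map, and computes the component loadings and the share of variance captured by each component for the two supporting charts. The function and variable names, and the number of components kept, are assumptions for the sake of the example.</p>
<pre>
import numpy as np

def pca_map(doc_topics, n_components=8):
    """doc_topics: array of shape (n_documents, n_topics) with topic proportions."""
    X = np.asarray(doc_topics, dtype=float)
    centered = X - X.mean(axis=0)

    # A singular value decomposition of the centered matrix gives the principal
    # components without forming the covariance matrix explicitly.
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)

    # Two-dimensional coordinates for the map: the projection of each document
    # onto the first two principal components.
    coords = U[:, :2] * s[:2]

    # Component loadings: the contribution of each topic to each of the two
    # principal components shown in the map (the added loadings chart).
    loadings = Vt[:2].T

    # Share of variance captured by each of the first n_components components
    # (the added variance graph); a sharp drop after the first two components
    # suggests that the two-dimensional map summarizes the data well.
    variance = s ** 2 / (len(X) - 1)
    explained = variance[:n_components] / variance.sum()

    return coords, loadings, explained
</pre>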
<p>These additional elements make the visualization more useful for trained users, but they also complicate an interface that we had initially intended to be very simple. Balancing these concerns — power and simplicity — will be one of the central tasks for any future work on the Woodchipper tool, as well as for any other visualization or analysis application developed or deployed as part of Corpora Space.</p> <h4 id="CorporaCampLessonsLearnedReport-h5yx4cd1wzcdpV3Visualizationsoftware"><span class="confluence-anchor-link" id="CorporaCampLessonsLearnedReport-h5yx4cd1wzcdp"></span><strong>V.3 Visualization software</strong></h4> <p>We experimented with two libraries for creating the visualizations in the web browser. The current prototype version of the application uses Flot, a simple plotting library for jQuery. Flot supports a relatively limited range of visualization approaches, but these have proven sufficient for a first implementation of the Woodchipper tool. We also experimented with Raphaël, a much more general JavaScript library for creating graphics on the web, which would probably serve as a more appropriate foundation for future visualization applications.</p> <h3 id="CorporaCampLessonsLearnedReport-hprr97qkkgjasVIRecommendations"><span class="confluence-anchor-link" id="CorporaCampLessonsLearnedReport-hprr97qkkgjas"></span><strong>VI. Recommendations</strong></h3> <h4 id="CorporaCampLessonsLearnedReport-VI1Methodsfortextexploration">VI.1 Methods for text exploration</h4> <p>While the primary goal of CorporaCamp was to serve as an experiment in rapid, iterative design and development — not to build an application that would become a central piece of Corpora Space — we did develop a stronger sense of the advantages and disadvantages of the methods we had chosen for scholarly text exploration.</p> <p>LDA topic modeling provided a very powerful way of characterizing uncorrected, unstructured text, but interpreting the topics it infers often requires experience and training. A topic that shows up in our visualization with the label "church religion gospel holy authority" has a clear interpretation, for example, but the meaning of a topic with the label "voice heard man eyes see" isn't as immediately obvious. This latter topic plays a central role in many of our experiments, however — it serves as a kind of general narrative topic that often allows the system to draw important distinctions between documents. In future applications using unsupervised topic modeling we would therefore suggest having a trained human annotator add labels instead of simply using the most prominent words from the distribution.</p> <p>We also recently experimented with adapting the Woodchipper application to a very different kind of text collection: a corpus of syllabi harvested from university websites by Dan Cohen at the <a href="http://chnm.gmu.edu/" class="external-link" rel="nofollow">Center for History and New Media</a>. Topic modeling also proved useful here, since it was not possible to extract any consistent structure from the 15,000 syllabi we selected. It wasn't as clear that principal component analysis was the best way to perform the dimensionality reduction, however — most experiments did not produce the interpretable clusters that we very commonly saw when looking at literary and historical texts. We are experimenting with adding components implementing other dimensionality-reduction techniques — for example <a href="http://en.wikipedia.org/wiki/Kohonen_maps" class="external-link" rel="nofollow">self-organizing maps</a> — that could be substituted for PCA in the current system. The comparative failure of our PCA-based visualization on this collection highlights the need to tailor the methods used for visualization and exploration to the shape of the dataset.</p>
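<p>Because the reduction step sits behind a small, well-defined interface (a function from a document-by-topic matrix to two-dimensional coordinates), alternative techniques can be dropped in without touching the rest of the pipeline. The sketch below is illustrative rather than part of the current system: it pairs a PCA reducer with a deliberately minimal self-organizing map written directly in NumPy, where the grid size and training parameters are arbitrary and a real deployment would more likely use an established SOM implementation.</p>
<pre>
import numpy as np

def reduce_pca(doc_topics):
    """Project documents onto their first two principal components."""
    X = np.asarray(doc_topics, dtype=float)
    centered = X - X.mean(axis=0)
    U, s, _ = np.linalg.svd(centered, full_matrices=False)
    return U[:, :2] * s[:2]

def reduce_som(doc_topics, grid=(20, 20), iterations=5000, rate=0.5, sigma=3.0, seed=0):
    """Map each document to the grid position of its best-matching unit in a
    small self-organizing map (a deliberately minimal implementation)."""
    X = np.asarray(doc_topics, dtype=float)
    rng = np.random.RandomState(seed)
    rows, cols = grid
    weights = rng.rand(rows, cols, X.shape[1])
    grid_coords = np.dstack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"))
    for t in range(iterations):
        x = X[rng.randint(len(X))]
        # Find the best-matching unit for a randomly chosen document.
        bmu = np.unravel_index(((weights - x) ** 2).sum(axis=2).argmin(), (rows, cols))
        # Shrink the learning rate and neighborhood over time.
        decay = np.exp(-t / float(iterations))
        dist_sq = ((grid_coords - np.array(bmu)) ** 2).sum(axis=2)
        influence = np.exp(-dist_sq / (2.0 * (sigma * decay) ** 2))
        weights += rate * decay * influence[..., None] * (x - weights)
    # Assign every document to the grid coordinates of its best-matching unit.
    positions = [np.unravel_index(((weights - x) ** 2).sum(axis=2).argmin(), (rows, cols))
                 for x in X]
    return np.array(positions, dtype=float)

# Either reducer yields two-dimensional coordinates for the map view:
# coords = reduce_pca(doc_topics)  or  coords = reduce_som(doc_topics)
</pre>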
<h4 id="CorporaCampLessonsLearnedReport-VI2CollectionsInteroperability">VI.2 Collections Interoperability</h4> <p>We brought together subsets of three collections at CorporaCamp and made one tool work with all of the texts. In the process, we did some preprocessing that gave all of the texts the same metadata. Essentially, we made all of the texts fit a particular profile that was needed for the functionality to work.</p> <p>Different functionality may need different metadata or collection properties. Instead of trying to make all collections fit a single profile of metadata, structures, and markup, we may want to explore using profiles to balance the need for interoperable collections against the value of collections built with a particular set of properties.</p> <h4 id="CorporaCampLessonsLearnedReport-VI3CreateaPathtoanExpert-levelInterface">VI.3 Create a Path to an Expert-level Interface</h4> <p>Tools such as the Woodchipper are limited to exploring one particular aspect of the texts. People need tools that are simple to use, with a shallow learning curve. Based on the concept of flow, in which people enjoy tasks that have a close match between skill and challenge, we need such tools to act as stepping stones to more powerful and flexible (and usually more complex) tools. As people exercise their skills against challenges, their skills improve, requiring slightly more complex challenges to restore flow.</p> <h3 id="CorporaCampLessonsLearnedReport-VIIConclusion">VII. Conclusion</h3> <p>The trade-offs, challenges, and successes of CorporaCamp will inform the design process for Corpora Space. Our proposal for the next workshop will look to broaden out from a single piece of functionality in order to address the questions and problems involved in combining multiple tools that work across multiple distributed collections. This will lead into the process of determining the road map for the next phase of Corpora Space.</p>