"hope to find better ways to integrate media, embed images, be truly multimedia" (Ex 1, 1b-D)
"Disc space on any given computer is being filled with digital media (not text). What does the "replacement" of text as a primary area of focus mean to humanities scholarship? What tools are available for searching on, annotating, extracting from, publishing, etc. these -- these are at a less-developed stage than tools for text." (Ex 1, 1c-C)
"Multimedia: explosion in non-textual artifacts. Makes computing needs for A&H quite comparable with needs in the sciences, given high volume of digital artifacts." (Ex 1, 1c-C)
"New digital literacy, mashup culture, clips from movies and arts and that's critical MPA and RWA use of copyright and video and showing up; in evidence of learning." (Ex 1, 1d-A)
"humanities scholarship is images. Use of images more widespread. Copyright issues." (Ex 1, 1d-B)
"Biggest change is the multimedia, used to be text or image problem. Students using mixed media, we have to teach them how to do it, need machinery, way to share it and store it." (Ex 1, 1d-B)
"Images, audio, data, gen Chinese culture at [large university]. Try to archive my doc video; tools so cumbersome. Video, so complex. How do you make that available so that it doesn't anticipate or impose categories that may not be intuitive." (Ex 1, 1d-B)
"There are early adopters; some work with 1st year students in innovative ways (multimedia composition). At Dartmouth, they have a class for students w/o strong writing skills (esp. international students or Native American students, have to take a two-term course in writing). Can make films, take pictures, add music, think of transitions, think of it as a paper. Students learn to "compose" in a different way, then do a research paper dealing with their multimedia composition. Very successful program" (Ex 1, 1d-E)
"One class in East Asian - students submit all work in movies. Use of tools is a central thing. Faculty worried about students' strong knowledge of new technologies. Faculty also not worried; quality of papers not improving. Some students think "if it's not easy to find, it's not worth finding"" (Ex 1, 1d-E)
"Importance of visual and non-text media" (Ex 1, 1d-G)
"issues related to archiving information in/from multiple media; how to, do it better, access to resources that are already in place" (Ex 1, 1d-H)
"Publishing research in multimedia form. Multimedia sounds like the past. Subtasks: finding media, selecting and using them; identifying most-effective means (media, platform, format) for communicating an idea (argument or point). A lot of the terms are technology oriented? How can we come up with more distinctive humanistic terms?; Is a novel a piece of data? Identifying unit of analysis is related to methodology" (Ex 2, 1a-B)
"Annotating texts. There can be annotations at many levels; Great examples from Talmudic tradition. Now lots of multimedia. You might have a transcription, translation of source. There are many levels of collaboration, private and collaborative; social tagging; personal annotations; A book just came out that is a study of 15th century annotations." (Ex 2, 1a-B)
"Data set might be original source, secondary data, images, texts, audio files, bibliographies, stories, GIS locations" (Ex 2, 1a-F)
"publish in print and conference presentations, "non-linear multimedia presentation" - digitized media and digitized text, all essays have begun as presentations; multimedia archive projects - video archive and video annotation. believes that most digital humanities projects do form the basis of scholarly research. doesn't consider himself computer savvy, but is media savvy -- manages his own system" (Ex 2, 1d-C)
"Scholars need to expand their literacy to examine multiple sources and multiple media." (Ex 2, 1d-C)
"We use Blackboard in teaching and learning sites to collect these things. Units were generic depending on the year in cycle (this is 10 yrs), now doing in dozen different teaching fields for 4 yrs. Can invite people in to password-protected sites; want to take material out of that. Graphic references w/ hyperlinks, text. Going back 5 or 6 yrs." (Ex 2, 1d-D)
"resistance and/or capacity limitations to new media types: 6 images for a 400-page document" (Ex 4, 1d-C)
"Internet is no longer usable after a certain point for my research. Can get some kind of bibliography, but then you have to read articles/footnotes, constantly go back to originals, which aren't available on-line. These are temporary measures at this time, a first step. But what if everything has been digitized? What tools will you need for that? At that point, have to refine what you're doing, ask more interesting questions. Interesting if you develop techniques where you can sense that there's a gap of stuff that needs to be brought on-line sooner rather than later. Start down these roads learning new ways to do it; leads to machine intuition, etc." (Ex 4, 1d-E)
"Relatedly, what scholars have to do to transform between models. Enriching by looking at what else is out there. People are creating GIS in my field for their archeological project. As we get more projects, we get patchwork quilts of databases that don't talk to each other" (Ex 4, 1d-E)
"can db access be done in a standardized way? valley of the shadow: 1990s presentation, not interactive particularly, outdated, could be done better now. images etc. independent of indexes. would be nice if humanists had a space where all that was preserved, could be revisited, rebuild, recreated. amazon S3 model: content-free infrastructure that humanists could fill with content." (Ex 4, 1d-F)
"who will be responsible for all ways all these media will be joined and made accessible? library might have different baskets for jpegs, tiffs, etc. library can preserve the object, who indexes for access. in SOA that would be done on a case by case basis, library not responsible. isn't that the problem? if goal is for re-use, maintaining at faculty level... objects maintained in library or such, mashups would be handled by consortia." (Ex 4, 1d-F)
"How would I find the ability to do these things as a researcher? YouTube, commercial spaces. Some require specialized software. 3D visualization. Development of cheaper alternatives to do the same types of spectrographic visualizations. "It's not there yet. It's not as normal as television or taking a book off a shelf."" (Ex 5, 1c-B)
"I might tend to do more with images, because then I know I might be using those images for other sorts of things in a readily available form. If there's things likely to use in research > digitization. Students might want to use them, I might for PowerPoint - potential for multi-use" (Ex 5, 1c-D)
"Lots of issues re: different formats for recorded versions. With text, at some level you can move from one platform to another. Much harder for video. YouTube doesn't cut it for scholarly analysis of most performances" (Ex 5, 1c-D)
"Need to be aware of multi-media. Lots of people come in with skills in m-m. We found humanities grads were very proficient in using various forms of m-m. Younger people have less tolerance of the inertia in the existing systems. We're seeing it too. They want to immediately come in & make changes." (Ex 6a, 1a-D)
"Teaching-oriented has strong selection bias towards junior faculty comfortable w/ mixed media, etc. - want to use in their teaching" (Ex 6a, 1c-C)
"In languages, it's expected that incoming hires will incorporate technology in the classroom. They use blackboard, can digitize clips, work with library, Instructional technology. I look to my new colleagues for inspiration on how to use technology. Non-tenure track people take more advantage of grants for teaching. Standing faculty are more about just writing books" (Ex 6a, 1d-C)
"Easy multimedia authoring/publishing" (Ex 6b, 1a-G)
"Need to be aware of multi-media. Lots of people come in with skills in m-m. We found humanities grads were very proficient in using various forms of m-m." (Ex 6b, 1a-G)
"multimedia forms of writing and expression --> new model for publication" (Ex 6b, 1b-A)
"publishing platform that's multimedia that works (doesn't freeze client computers) and allow/enforce publishing of quality materials." (Ex 6b, 1b-B)
"Sustainable ways to gather baseline data about student writing practices. Gathering reservoirs of samples of student data, and finding ways to slice and dice that data. Multimedia makes that proposition even more challenging." (Ex 6b, 1b-D)
"A multi-media scholars notebook" (Ex 6b, 1d-A)
"Scale of amount of data that humanities researchers may be looking to get to grips with when we look beyond print media is staggering. Want to remind everyone that some of the scale of what we're up against is ... problems we have in dealing with storage/access to print media are miniscule compared to other media" (Ex 7, 1c-C)
"Sound archives have reached a critical point in their history marked by the simultaneous rapid deterioration of unique original materials, the development of powerful new digital technologies, and the consequent decline of analog formats and media. Motivated by these concerns, in 2005 the Indiana University Archives of Traditional Music and the Archive of World Music at Harvard University began Phase 1 of Sound Directions: Digital Preservation and Access for Global Audio Heritage - a joint technical archiving project with funding from the National Endowment for the Humanities. One major goal of the project was to test emerging standards and develop best practices for audio preservation." (SN-0010 Audiovisual Preservation Issues and the Sound Directions Project, Alan Burdette, 1/9/09)
"We have a number of interdisciplinary projects involving different collaborations across Australia, but each project has a common objective of working with Aboriginal people to help them document their cultural heritage and environmental knowledge. This involves researchers collecting new data in the field, including images, video and GPS data. It also involves digitally returning material held in museum collections to remote communities and working with them to update museum records. The projects require a spectrum of tools and services to deal with data collection, annotation, preservation, access, analysis and publication. We have developed a few prototype systems addressing some of the preservation, access and publication needs but we have not managed to develop standardised workflows to get data into these systems.
1. Capture: Digital tape-based video cameras and digital still cameras are used. GPS data may also be recorded by a PDA using 'Cybertracker' or 'ArcGIS'.
2. Download: Currently images are downloaded by different researchers to their local hard drives in different ways, e.g. using digital camera software or iPhoto. Video is digitised and logged using Final Cut Pro.
3. Documentation: Adobe Bridge has been useful. Filemaker has been used, Microsoft Excel is also used by some. We need to develop some consistent workflows that researchers can adopt so that their data will interface with a repository. Some documentation needs to be done offline.
4. Filenaming and resizing: Maybe this could be done automatically on upload to a web-based system.
5. Metadata mapping: We have done some transforming of metadata from Filemaker to a standard schema.
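The metadata-mapping step above could be scripted. A minimal Python sketch, assuming invented FileMaker field names and a Dublin Core target schema (the real FileMaker layout and target schema are not given in the narrative):

```python
# Hypothetical FileMaker column -> Dublin Core element. A real project
# would adjust FIELD_MAP to its own FileMaker export layout.
FIELD_MAP = {
    "Photographer": "dc:creator",
    "Caption": "dc:description",
    "DateTaken": "dc:date",
    "Community": "dc:coverage",
    "Filename": "dc:identifier",
}

def to_dublin_core(record):
    """Translate one FileMaker record (a dict) into Dublin Core terms,
    dropping columns that have no mapping or no value."""
    return {dc: record[fm] for fm, dc in FIELD_MAP.items() if record.get(fm)}

record = {"Photographer": "K. Hayne", "DateTaken": "2008-07-01", "Notes": "field trip"}
print(to_dublin_core(record))
# {'dc:creator': 'K. Hayne', 'dc:date': '2008-07-01'}
```

A table-driven mapping like this is easy for non-programmers to maintain, since changing the crosswalk means editing only the FIELD_MAP entries, not the code.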
The work done to date has been more at the end of setting up of repositories and web-based systems for access and discovery, e.g. Fedora, OAI-PMH and Google Maps. This is possibly the more complex end from an IT perspective, but the systems are not usable by researchers as there is no workflow in place.
Develop standardised workflows for researchers working with visual material to move the media from their local hard drives into shared databases/repositories." (SN-0011 Workflows for Contributing Visual Media, Katie Hayne, 1/9/09)
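The filenaming step (4) is exactly the kind of task a web-based system could perform automatically on upload. A minimal Python sketch, assuming a hypothetical naming convention of project code, date, and content checksum (none of these conventions come from the narrative itself):

```python
import hashlib
from pathlib import Path

def standard_name(path, project="AUS01", date="2009-01-09"):
    """Build 'PROJECT_DATE_xxxxxxxx.ext' from the file's content hash.
    Because the name is derived from the bytes themselves, the same file
    always maps to the same name, so accidental re-uploads are harmless.
    The project code and date defaults here are invented placeholders."""
    digest = hashlib.sha1(Path(path).read_bytes()).hexdigest()[:8]
    return f"{project}_{date}_{digest}{Path(path).suffix.lower()}"
```

Run at upload time, a rule like this removes the need for each researcher to rename camera files (IMG_001.JPG and the like) by hand before they reach the repository.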
"WHAT: The compilation of a series of digital morphs illustrating the problem of the ontology of a text, its relation with precedent and subsequent texts, and the blurring of the boundaries of the "work". The compilation/database is primarily of visual materials (from paintings to video games to cartoons to architecture), but also incorporates audio files. The morphing project has been the subject of several lectures at "digital resources" conferences, is used in my textual/critical courses, and has been the subject of a number of published (and to be published essays). An example of a mid-point in a low-resolution morph constructed from two different video games is attached.
HOW: The following is a caption to an illustration from the compilation; it sets out the procedures in a general way. Complex Morph storyboard, showing selection of key points and keylines in a two-sequence morph on three states. Note that once a keypoint has been selected in the opening frame of each level of a storyboard, the morphist must then make a subjective decision on what will be the appropriate analogous keypoint on the closing frame of that level (i.e. the initial digital pairing of the two keypoints is based purely on the positions of individual pixels in the graphic frame, and it is the morphist who must then drag the corresponding keypoint to the pixel that best represents the formal or ontological equivalence in the morph narrative being constructed). Other technical and critical decisions made by the morphist that will have direct effects on every frame of the total morph movie include the setting of time codes, the image resolution (in dpi), the image resizing, the chroma-keying (adjustment of colour wheel), the zoom ratio, the setting of interpolation points (transformation-control points along each keyline), degrees of rotation, the selection of crossfade protocols, the compression ratio, the relation between quality of animation-image and animation motion (in inverse proportion), the frames per second (8 is standard low-end for computer animations, 30 for NTSC US and 25 European video), and the pixel depth (i.e. the number of colours in the transition image), which will depend on the technical capacities of the playback device (8-bit, 24-bit). All of this demonstrates that, while the resulting morph may look like "free play" or "feminist fluidity", it is in fact the construct of a very complex series of technical and critical decisions made by the morphist.
HELPS: Graphics and audio editing programs that can accomplish the steps laid out above.
NEED: As above, with more sophisticated morphing software and display." (SN-0016 Support for Morphing of Audio-visual Assets, David Greetham)
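The per-frame mechanics behind the keypoints described in the caption can be illustrated with a toy calculation. A Python sketch with invented pixel coordinates: once the morphist has paired a keypoint in the opening frame with its analogue in the closing frame, the point's position in every intermediate frame follows from the frame count.

```python
def interpolate_keypoint(start, end, frames):
    """Linearly interpolate an (x, y) keypoint over `frames` frames,
    inclusive of both endpoints. Real morphing software applies this
    along keylines with adjustable interpolation points; this sketch
    shows only the straight-line case."""
    (x0, y0), (x1, y1) = start, end
    return [
        (x0 + (x1 - x0) * t / (frames - 1), y0 + (y1 - y0) * t / (frames - 1))
        for t in range(frames)
    ]

# 8 fps over one second (the low-end rate cited above) gives 8 positions:
path = interpolate_keypoint((10, 40), (80, 40), 8)
print(path[0], path[-1])   # (10.0, 40.0) (80.0, 40.0)
```

The sketch also makes the caption's point concrete: every added keypoint multiplies the positions that must be computed per frame, so the apparent "free play" of the finished morph rests on a dense grid of such calculations.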
"We have created the Global Performing Arts Database (www.glopad.org), a multimedia, multilingual, Web-accessible database containing digital images, texts, video clips, sound recordings, and complex media objects related to the performing arts from around the world, plus information about related pieces, productions, performers, and creators. In addition, a team of GloPAC scholars is building JPARC, an interactive and interpretive Web-based research and teaching environment focused on the Japanese Performing Arts. One of our more pressing technology needs for both GloPAD and JPARC is in the area of text and video annotation. We want our scholars to be able to easily annotate play scripts with multi-media objects, and to annotate videos with text subtitles and notes. We currently use an ad hoc combination of tools such as the QuickTime Pro video editor, HTML page editors, and Flash players to annotate, but none of this work is possible without a good bit of training in complicated software and computer setups. We have also been hampered by the technological decay of software. Some of the procedures we developed only a year or two ago no longer work due to the changes in the commercial software on which we had to rely. (See, for example, our subtitling how-to: http://hdl.handle.net/1813/3341). We need a reliable service that includes tools for timed text (for subtitling and captioning) and multi-media annotation that can be easily used by the scholars who are helping to build these resources." (SN-0020 The Global Performing Arts Database No. 2, Ann Ferguson)
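The timed-text need is concrete enough to sketch. One hedged approach, assuming cue data arrives as simple (start, end, text) tuples, is to emit WebVTT, a plain-text captioning format that is less vulnerable to the commercial-software decay described above than tool-specific project files. The cue content below is invented for the example.

```python
def fmt(seconds):
    """Format a time in seconds as an HH:MM:SS.mmm WebVTT timestamp."""
    h, rest = divmod(seconds, 3600)
    m, s = divmod(rest, 60)
    return f"{int(h):02d}:{int(m):02d}:{s:06.3f}"

def to_webvtt(cues):
    """cues: list of (start_sec, end_sec, text) tuples, in order.
    Returns the complete WebVTT document as a string."""
    lines = ["WEBVTT", ""]
    for start, end, text in cues:
        lines += [f"{fmt(start)} --> {fmt(end)}", text, ""]
    return "\n".join(lines)

print(to_webvtt([(1.5, 4.0, "The shite enters along the bridgeway.")]))
```

Because the output is plain text with a published specification, subtitle tracks produced this way remain readable and re-editable even when a particular player or editor stops working.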
"I can find large groups of good images to use in certain on-line databases - ARTstor, the Library Image Database, Gardner's Image Set. After that, I might turn to Flickr and Google Image, which, of course, are not search-able according to any standard metadata system, and which may be completely mis-identified, thus finding anything in particular is pure serendipity. In the process of putting all this together (a big presentation of 99 PPT slides in the end), I will experience certain frustrations related to image quality and metadata. E.g., I will want to show some ground plans - first of the Akropolis (or Acropolis depending on the source...) and then of the building itself. For this particular class, I don't want plans that include too much archaeological detail that will just be confusing for beginning students, so I may reject some of what I find on that pedagogical basis. I may reject other options because the images are too blurry (common problem with maps and plans) and/or not big/high enough in resolution. Ultimately, I may have to order/scan a plan myself or order one from an on-line vendor (Saskia/Universal)." (SN-0044, Ann Nicgorski)
"It would be nice to have a workspace where one could utilize WYSIWYG construction to build scholarly documents that utilize rich media. These documents could then be routed to a number of publication venues, DVD, CD, WWW, even article." (SN-0025 Biofutures - Owning Body Parts and Information - An Interactive DVD ROM, Phillip Thurtle, 12/22/08)
"The collections of the art museum and archives at Willamette University reflect the life and work of several Pacific Northwest artists. Currently, museum and archives staff create digital copies of artwork and artists' papers using a digital camera or scanner, submitting these in turn to a digital repository. Library staff provide additional metadata services for these collections, such as controlled vocabularies, metadata mapping, and (ideally) the application of thesauri or ontologies. In addition, museum and archives staff work with art history faculty to develop online exhibits and publications that incorporate images of artwork, personal papers, audio recordings, historical documents and interpretive essays. The individuals involved in this work have little or no previous multimedia authoring experience, so a relatively simple-to-use authoring tool is required. The tool chosen was the open source Pachyderm 2.0. In 2008, the University received NEH funding to develop a Repository Open Service Interface Definition (OSID) that integrates the Pachyderm 2.0 authoring tool with the Willamette image repository -- currently running on CONTENTdm, a software application widely used for this purpose at many colleges and universities. The artist chosen for an initial project under the NEH grant is Carl Hall, a major Pacific Northwest artist who first attracted national attention as a Magic Realist in the 1940s. An art historian has developed the conceptual organization of the exhibit and selected the works and personal papers to be included. He will also provide most of the textual commentary, which will subsequently be added to the exhibit by museum staff using Pachyderm (via cut-and-paste). Meanwhile, museum and archives staff will digitize, catalog, and upload content to the digital repository. Preliminary work on storyboarding, visualization, and the development of early prototypes will be completed.
Images and other media in the CONTENTdm repository will be accessed from within the Pachyderm 2.0 authoring work flow via the repository OSID plugin. Flash presentations will be created using standard Pachyderm design templates and then customized using an online service developed at the University of British Columbia for this purpose. Final publication involves downloading a zip file of the Flash presentation, applying customizations to the presentation directory, and copying the file to a web server. Missing currently from the Pachyderm authoring work flow is the option of cropping images to standard dimensions. Without this, images can be cropped only by downloading a copy from the repository, editing the image on a personal computer, and uploading the revised copy to Pachyderm 2.0. A more complete set of authoring tools would be helpful here. One approach to creating this editing ability is to build the tool into the Pachyderm application itself. Another approach is to use an image server not only to retrieve images from the repository, but also to scale and crop these images on demand. Similar basic editing capabilities would be useful for audio and video as well. The particulars of this story involve a team effort in which the scholar's role is largely focused on content and high-level conceptualization. Other scholars at Willamette have a keen interest in multimedia and the process of multimedia authorship, and can articulate the possible value of methodologies that give them -- and their students -- an efficient means of multimedia publication combined with archiving for preservation and reuse of primary materials." (SN-0029 Creating Online Exhibits, Michael Spalti)
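The crop-on-demand idea reduces to box arithmetic an image server could apply at delivery time, removing the download-edit-reupload round trip. A Python sketch with illustrative dimensions, computing a centered crop region for a target aspect ratio; the actual pixel cropping would be done by the server's imaging library, which is not specified here.

```python
def center_crop_box(width, height, target_w, target_h):
    """Return (left, top, right, bottom) of the largest centered region
    of a width x height image that matches the target aspect ratio."""
    target_ratio = target_w / target_h
    if width / height > target_ratio:          # too wide: trim the sides
        new_w = round(height * target_ratio)
        left = (width - new_w) // 2
        return (left, 0, left + new_w, height)
    else:                                      # too tall: trim top/bottom
        new_h = round(width / target_ratio)
        top = (height - new_h) // 2
        return (0, top, width, top + new_h)

# A 3000x2000 scan cropped to a square template slot:
print(center_crop_box(3000, 2000, 1, 1))   # (500, 0, 2500, 2000)
```

Because only the box coordinates depend on the template, the server could cache one master image per object and derive every template-specific crop from it on request.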
"Live dance, portable Tele-Immersion stations, and audio technologies juxtaposed with large and small scale projections drew the audience members into a performance in 360° that they could enter at any time. Inspired by Nine Evenings: Theater and Engineering (1966) in New York's 69th Regiment Armory by artist Robert Rauschenberg and Bell Laboratories engineer Billy Klüver, Panorama brought together a multi-disciplinary cast of dance makers, artists, scientists, and engineers to create an evening of interactive and technologically alive theater, honoring the cutting-edge collaborations and technological explorations that are the hallmark of the Merce Cunningham and John Cage legacy. At the heart of the Happening were actual dancing bodies. Twenty-two dancers within the durational event itself performed chance organized dance phrases as well as recited text organized through computational algorithms written and created by Sheldon B. Smith, using the program Isadora. In addition to the 22 dancers participating in the Happening itself, 25 volunteer performers performed movement improvisations outside the building linking Zellerbach Hall to the Pauley room as a kind of metaphorical internet connection sending a binary "code" of human movement down a long line of people. This line movement improvisation represented the virtual connection of the two large halls to one another as participants experienced events that permeated both spaces throughout the evening.
"Dancers/performers interacted with Black Cloud sensor data being streamed live from Zellerbach Hall. These sensors, created by Greg Niemeyer, were placed in Zellerbach Hall and collected sound data, CO2 changes, light changes and temperature fluctuations from the main hall, streaming this data to the performance site. Jen Wang, the composer, interpreted this data into a sound score, utilizing the constantly fluctuating data streams to compose the musical structure. Go to www.blackcloud.org to see the data streaming live. In addition to the sound score created by Jen Wang, Luc Ferrari's piece Didascalies was used during the performance.
Audience members were encouraged to step into the Tele-Immersion stations located at either end of the performance space. Tele-Immersion can best be understood as a hybrid of technologies that focus on social and physical properties. Like avatar-based social games played on the Internet, Tele-Immersion connects people through the cyber landscape, AND like tele-communication technologies such as tele-conferencing or video conferencing, the physical body is present within the interaction.
Tele-immersion does not use animation techniques usually associated with motion capture but rather uses data collected from a series of specialized digital web cameras. The data collected is rendered together quite rapidly to create a lifelike version of the user within cyber space moving in real time. So unlike video conferencing, individuals using Tele-Immersion can meet within virtual meeting locations - in real time. Furthermore, once an individual is immersed, the user can apply infinite digital manipulations to the images being rendered within the system.
Inspired by Merce Cunningham's TV Re-Run (1972), the Robot Reflex project provided spectators with a viewpoint "inside" the dance. Robotic cameras programmed to respond to dancers' motion tracked their movements and projected an image in tandem with the live performance, allowing the audience to simultaneously view the dance inside and out. In addition, software analysis of the motion shows viewers what the computer sees, and how it interprets movement.
Isadora was used to mix all of the data inputs. Large scale projections at either end of the performance space projected the mixed data as it related to each section of the performance." (SN-0049 Tele-immersion and Live Performance, Lisa Wymore)
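The sensor-to-score relationship can be illustrated with a toy sonification. A Python sketch with invented ranges (the actual Black Cloud mapping used in the performance is not documented in this narrative), linearly scaling a fluctuating reading into an audible frequency:

```python
def sensor_to_pitch(value, lo=350.0, hi=1200.0, f_min=110.0, f_max=880.0):
    """Linearly map a sensor reading in [lo, hi] (e.g. CO2 in ppm) to a
    frequency in Hz, clamping out-of-range readings. All four range
    parameters are invented placeholders for illustration."""
    frac = max(0.0, min(1.0, (value - lo) / (hi - lo)))
    return f_min + frac * (f_max - f_min)

print(sensor_to_pitch(350.0))   # 110.0 Hz at the low end of the range
print(sensor_to_pitch(1200.0))  # 880.0 Hz at the high end
```

A score built this way is structurally coupled to the room: as CO2, light, or temperature drifts during the evening, the musical material drifts with it.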
"Professor Q, a professor of folklore, is collecting oral histories of local storytellers. Using a digital voice recorder, she has multiple interviews with each of her subjects. She uploads the files to the server. She has several graduate assistants transcribe the notes. These, too, are stored on the server. When the data is ready, she begins to analyze the data using the timeline tool to note the beginnings and endings of stories, the commentary that the subject provided for each story, the different phases of the subject's life, etc. Professor Q and her GAs can use the synchronization tool to coordinate the audio to the transcription. Using the annotation tools, Professor Q can add her own analysis and observations. The timelines and other products can be exported to an interactive Web page that can be sent out for peer review." (SN-0054 Variations - a Tool Set for Music Research and Pedagogy, Stacy Kowalczyk)
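The synchronization step in Professor Q's scenario can be sketched as a simple data model. The structure below is an assumption for illustration, not the Variations schema: each transcript segment records the audio interval it covers, so a click on the timeline can retrieve the matching text and any scholarly annotations. The segment content is invented.

```python
# Each entry: (start_sec, end_sec, transcript, annotations)
segments = [
    (0.0, 42.5, "Storyteller introduces her grandmother's farm.",
     ["phase: childhood"]),
    (42.5, 130.0, "First full story: the flood of 1937.",
     ["story boundary", "motif: water"]),
]

def at_time(segments, t):
    """Return (transcript, annotations) for the segment active at audio
    time t (seconds), or None if t falls outside every segment."""
    for start, end, text, notes in segments:
        if start <= t < end:
            return text, notes
    return None

print(at_time(segments, 60.0))
# ('First full story: the flood of 1937.', ['story boundary', 'motif: water'])
```

Because the model is just intervals plus text, the same records can drive both the timeline display and the exported interactive Web page mentioned in the scenario.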