Providing resources and training in the practices and tools of the digital humanities
Community-Curated Content Published by the Roy Rosenzweig Center for History and New Media

Editors’ Choice: The Form of Digital Projects

Thu, 07/13/2017 - 11:00

Unlike in print, the form of digital projects has a direct bearing on the ideas they convey.

Not too long ago we used word processors to write documents on computers. The act of writing itself was called “word processing.” The excitement around the revolutionary new technology (first electric typewriters, then computer applications) inspired a new name for writing, defined by the instrument with which we produced it. Now the technology has become commonplace and we just write documents, whether electronic or on paper. The term “word processing” has fallen out of use.

So, in another decade, will the long-form, peer-reviewed digital humanities projects, or interactive scholarly works, produced today be known as just books? Is it our excitement about the new technological instruments of production that has us searching for a new name? Time will tell. What we know for certain is that this new form of scholarly publication has significant implications for the practices and processes of authoring, publishing, archiving and preservation.

 

Read the full post here.

Announcement: Zotero 5.0 Release

Tue, 07/11/2017 - 13:30

From the post:

We’re delighted to announce the release of Zotero 5.0, the next major version of Zotero and the biggest upgrade in Zotero’s history. Zotero 5.0 brings many new features, as well as a huge number of changes under the hood to improve Zotero’s responsiveness and stability and lay the groundwork for other big developments coming soon. We’ll be highlighting some of the new features in upcoming posts, but for now see the changelog for more details on all that’s new.

Read more here.

Announcement: Neatline 2.5.2 Release

Tue, 07/11/2017 - 13:00

From the announcement:

New release!

First, a huge thank you to Jamie Folsom and Andy Stuhl from Performant Software Solutions LLC, who did the heavy lifting on the coding for this release. We couldn’t have done it without them. We’re grateful, as well, to Neatline community member Adam Doan (@doana on GitHub) from the University of Guelph, whose code contributions made Neatline’s first accessibility functionality possible.

Read the full announcement here.

Resource: USAboundaries v0.3.0 Released

Tue, 07/11/2017 - 12:30

From the post:

I’ve recently published version 0.3.0 of my USAboundaries R package to CRAN. USAboundaries provides access to spatial data for U.S. counties, states, cities, congressional districts, and zip codes. Of course you can easily get contemporary boundaries from lots of places, but this package lets you specify dates and get historical county and state boundaries, as well as city locations, for those dates.
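The package itself is written in R, but the core idea is simple to sketch: each boundary carries a validity interval, and a query date selects whichever boundaries were in force on that date. Below is a minimal illustration of that idea in Python with geopandas; the column names (`start_date`, `end_date`) and the file name are hypothetical, not the package's own API.

```python
import geopandas as gpd
import pandas as pd

def boundaries_on(gdf: gpd.GeoDataFrame, date: str) -> gpd.GeoDataFrame:
    """Return the boundary polygons in force on a given date.

    Assumes each row carries hypothetical datetime columns `start_date` and
    `end_date` marking when that boundary definition was valid.
    """
    d = pd.Timestamp(date)
    in_force = (gdf["start_date"] <= d) & (d <= gdf["end_date"])
    return gdf[in_force]

# Hypothetical usage: county shapes as they existed on July 4, 1850.
# counties = gpd.read_file("historical_counties.shp")
# counties_1850 = boundaries_on(counties, "1850-07-04")
```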

Read the full post here.

Resource: Ways to Compute Topics over Time, Part 4

Tue, 07/11/2017 - 12:00

From the resource:

This is the last in a series of posts which constitute a “lit review” of sorts, documenting the range of methods scholars are using to compute the distribution of topics over time. The strategies I am considering are:

• Average of topic weights per year (First Post)
• Smoothing or regression analysis (Second Post)
• Proportion of total weights per year (Third Post)
• Prevalence of the top topic per year (Final Post)

To explore a range of strategies for computing and visualizing topics over time from a standard LDA model, I am using a model I created from my dissertation materials.
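As a concrete illustration of the first strategy (this sketch is not taken from the post itself), averaging per-year topic weights reduces to a group-by once the document-topic matrix is paired with each document's publication year. Variable names below are placeholders.

```python
import numpy as np
import pandas as pd

# Assumed inputs (placeholder names): `doc_topics` is the (n_docs, n_topics)
# document-topic matrix from a trained LDA model; `years` gives each
# document's publication year, drawn from external metadata.
def average_topic_weights_by_year(doc_topics: np.ndarray, years) -> pd.DataFrame:
    """Strategy 1: mean topic weight per year (rows = years, columns = topics)."""
    df = pd.DataFrame(doc_topics,
                      columns=[f"topic_{i}" for i in range(doc_topics.shape[1])])
    df["year"] = years
    return df.groupby("year").mean()

# trends = average_topic_weights_by_year(doc_topics, years)
# trends.plot()  # one line per topic, averaged over each year
```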

Read the full resource here.

CFP: Chicago Colloquium on Digital Humanities and Computer Science 2017

Tue, 07/11/2017 - 11:30

From the CFP:

We invite submissions on any research broadly related to Digital Humanities and Computer Science from scholars, researchers, librarians, technologists, and students. We particularly encourage proposals on visualization tools, theories, methodologies and workflows to make sense of Big Data.

Read the full CFP here.

Editors’ Choice: Disrupting The Silicon Valley Department of Education

Tue, 07/11/2017 - 11:00

First, Silicon Valley entrepreneurs are seeking to disrupt education in the same way its technologies have disrupted other areas of everyday life, from hailing a cab and booking accommodation to finding a date or looking after your health. The main concept Silicon Valley uses to explain its focus in education is “personalization,” as demonstrated by startup schools like AltSchool, Khan Lab School and the Summit Schools network.

The Stanford University education technology researcher Larry Cuban has been reporting on some of these new schools based on a series of lesson observations. Though he remains skeptical of their capacity to “scale up” beyond the sites where they have started up in Silicon Valley itself, he has also reported admiration for their synthesis of progressive, inquiry-based pedagogies and efficient, administrative uses of data analytics to support personalized learning.

Second, though, Silicon Valley has set itself the challenge of educating young people to be able to live and thrive in the disrupted world — that is, “to rule the machines.” The way it hopes to achieve this aim is through teaching kids to code.

One such effort, according to Natasha Singer in the New York Times, is the learning to code organization Code.org, “a major nonprofit group financed with more than $60 million from Silicon Valley luminaries and their companies, which has the stated goal of getting every public school in the United States to teach computer science. Its argument is twofold: Students would benefit from these classes, and companies need more programmers.”

 

Read the full post here.

Job: IISH Postdoc Researcher, Digital Humanities and Global Labour History

Thu, 07/06/2017 - 14:00

From the ad:

The International Institute of Social History seeks to appoint a post-doc researcher for the multidisciplinary projects “Diamonds in Borneo: Commodities as Concepts in Context” and “Linked Open Data Gazetteers of the Americas,” funded by two CLARIAH Research Pilot grants awarded to Prof. Dr. Karin Hofmeester and Dr. Rombert Stapel. The post-doc will be part of a team of researchers from different Dutch institutions and work on both highly related projects. The aim of both projects is to contribute to the research infrastructure developed within the Common Lab Research Infrastructure for the Arts and Humanities project (CLARIAH; https://www.clariah.nl). Both projects are also explicitly intended to increase our knowledge of the history and dynamics of globalization and are part of the general research program of the IISH.

Read the full ad here.

Announcement: Archives Unleashed Project Awarded Grant from Andrew W. Mellon Foundation

Thu, 07/06/2017 - 13:30

From the announcement:

The University of Waterloo and York University have been awarded a grant from the Andrew W. Mellon Foundation to make petabytes of historical internet content accessible to scholars and others interested in researching the recent past.

The grant, valued at $610,625, supports Archives Unleashed, a project that will develop web archive search and data analysis tools to enable scholars and librarians to access, share, and investigate recent history since the early days of the World Wide Web. It is additionally supported by generous in-kind and financial contributions from Start Smart Labs, Compute Canada, York University Libraries and the University of Waterloo’s Faculty of Arts.   

Read the full announcement here.

Resource: Ways to Compute Topics over Time, Part 3

Thu, 07/06/2017 - 13:00

From the resource:

This is the third in a series of posts which constitute a “lit review” of sorts, documenting the range of methods scholars are using to compute the distribution of topics over time.

Graphs of topic prevalence over time are some of the most ubiquitous in digital humanities discussions of topic modeling. They are used as a mechanism for identifying spikes in discourse and for depicting the relationship between the various discourses in a corpus.

Topic prevalence over time is not, however, a measure that is returned with the standard modeling tools such as MALLET or Gensim. Instead, it is computed after the fact by combining the model data with external metadata and aggregating the model results. And, as it turns out, there are a number of ways that the data can be aggregated and displayed.
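To make that "computed after the fact" step concrete (a sketch, not the post's own code): export the document-topic matrix from MALLET or Gensim, attach each document's year from your metadata, and aggregate. The version below computes each topic's share of a year's total topic weight; the names are placeholders.

```python
import numpy as np
import pandas as pd

# Assumed inputs (placeholder names): `doc_topics` is the (n_docs, n_topics)
# matrix exported from MALLET or Gensim; `years` is the external metadata
# giving one publication year per document.
def topic_proportions_by_year(doc_topics: np.ndarray, years) -> pd.DataFrame:
    """Each topic's share of the total topic weight assigned within a year."""
    df = pd.DataFrame(doc_topics,
                      columns=[f"topic_{i}" for i in range(doc_topics.shape[1])])
    df["year"] = years
    totals = df.groupby("year").sum()
    return totals.div(totals.sum(axis=1), axis=0)  # each row sums to 1.0
```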

Read the full post here.

Resource: Full Draft of Theory & Craft of Digital Preservation

Thu, 07/06/2017 - 12:30

From the resource:

This weekend I’m submitting the full draft of the manuscript for my book The Theory and Craft of Digital Preservation to the publisher, Johns Hopkins University Press.

I’ve had a lot of fun working on this on nights and weekends over the last year. I have also learned a ton from everyone who has read drafts of the work in progress.

I’ve had a few folks reach out to me after reading parts of drafts and say things like “I’d love to read more of this. When will it be out?” I’m not sure exactly how long it will take for the next round of review and all the improvements that will come from working with a great press. With that said, drafts of the entire book are now online. Instead of having folks pick through my previous blog posts with the links, I figured I would put them all together in order in this post.

Read more here.

Editors’ Choice: Standard Practice – Libraries As Structuring Machines

Thu, 07/06/2017 - 12:00

As Lawrence Busch has put it, standards and related forms are “the ways in which we order ourselves, other people, things, processes, numbers, and even language itself.” Standards enable access to things, allowing social worlds to interact. Standards tell cars when to stop and go. Standards direct the flow of water and power. In libraries, standards do similar work, enabling some ways of knowing and being and not others due to the pathways they create through space. Users must adopt the vocabulary of the knowledge organization scheme to efficiently retrieve books on a topic, opting for Gays or Lesbians rather than Queer or Same-gender-loving or any of the myriad terms people use to describe their sexual selves or communities. David Wojnarowicz’s accounts of life with AIDS under Reagan will be classified with other books about AIDS (Disease)—Patients, reducing his fiercely political text to the story of an individual sick body. Librarians at the reference desk will smile, whether that smile is culturally or contextually appropriate. Standards have material effects in libraries.

Standards are not, of course, wholly determinative. Even when paths are clearly marked, walkers will move as they want to. As Adler argues, readers in libraries make their own meaning from the rigid designations on the shelves, reading “perversely” to tell stories other than those told by the classification schemes. And in digital spaces, search and retrieval is not bound by the same kinds of conventions as traditional library systems. If the card catalog enabled patrons to search only by author, title, and subject, the database expanded search to include keywords. Once the internet arrives, search and retrieval sidesteps controlled vocabularies and categories structured in advance altogether. On the internet, we can sustain a fantasy of freedom.

And yet, digital spaces are constructed just as much by standards, though the stories they tell may be more difficult to parse.

 

Read the full post here.

Editors’ Choice: A Naive Empirical Post about DTM Weighting

Thu, 07/06/2017 - 11:00

In light of word embeddings’ recent popularity, I’ve been playing around with a version called Latent Semantic Analysis (LSA). Admittedly, LSA has fallen out of favor with the rise of neural embeddings like Word2Vec, but there are several virtues to LSA, including decades of study by linguists and computer scientists. (For an introduction to LSA for humanists, I highly recommend Ted Underwood’s post “LSA is a marvelous tool, but…”.) In reality, though, this blog post is less about LSA and more about tinkering with it and using it for parts.

Like other word embeddings, LSA seeks to learn about words’ semantics by way of context. I’ll sidestep discussion of LSA’s specific mechanics by saying that it uses techniques that are closely related to ones commonly used in distant reading. Broadly, LSA constructs something like a document-term matrix (DTM) and then performs something like Principal Component Analysis (PCA) on it. (I’ll be using those later in the post.) The art of LSA, however, lies in between these steps.
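For readers who want to see that pipeline in code, here is a minimal scikit-learn sketch (the post does not prescribe a toolkit; the corpus and the number of components are placeholders): a document-term matrix of raw counts, reduced with a truncated SVD, which is the PCA-like step at the heart of LSA.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD

# Placeholder corpus; in practice these would be the documents of your corpus.
docs = [
    "whale ship sea captain",
    "ship sea voyage harbor",
    "novel letters marriage estate",
    "marriage estate inheritance novel",
]

dtm = CountVectorizer().fit_transform(docs)   # documents x terms, raw counts
lsa = TruncatedSVD(n_components=2)            # number of dimensions is a free choice
doc_vectors = lsa.fit_transform(dtm)          # dense, low-dimensional document vectors
term_vectors = lsa.components_.T              # corresponding term vectors
```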

Typically, after constructing a corpus matrix, LSA involves some kind of weighting of the raw word counts. The most familiar weighting scheme is l1 normalization: sum the number of words in a document and divide each individual word count by that total, so that each cell in the matrix represents a word’s relative frequency. This is something distant readers do all the time. However, there is an extensive literature on LSA devoted to alternate weights that improve performance on certain tasks, such as analogies or document retrieval, and on different types of documents.
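As a small illustration of that weighting step (again a sketch, not the post's own code): l1-normalizing each row of the count matrix turns raw counts into relative frequencies, and alternate weightings such as tf-idf slot in at exactly the same point, before the SVD.

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.preprocessing import normalize

docs = ["placeholder document one", "placeholder document two two"]  # toy corpus
dtm = CountVectorizer().fit_transform(docs)   # raw counts, documents x terms

# l1 weighting: divide each document's counts by its total word count,
# so every cell becomes a relative frequency (each row sums to 1).
dtm_relative = normalize(dtm, norm="l1", axis=1)

# An alternate weighting from the LSA literature, tf-idf, applied at the same step:
dtm_tfidf = TfidfTransformer().fit_transform(dtm)
```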

This is the point that piques my curiosity. Can we use different weightings strategically to capture valuable features of a textual corpus? How might we use a semantic model like LSA in existing distant reading practices? The similarity of LSA to a common technique (i.e. PCA) for pattern finding and featurization in distant reading suggests that we can profitably apply its weight schemes to work that we are already doing.

 

Read the full post here.

Back on Thursday!

Tue, 07/04/2017 - 11:00

Digital Humanities Now is taking the day off. We’ll be back with new featured posts on July 6th!

Job: Digital Archivist, University of Texas – Rio Grande Valley

Thu, 06/29/2017 - 13:30

From the job ad:

Scope of Job – To manage daily operations related to the digitization, organization, and access of special collections materials. Leads the efforts of the UTRGV Library in digitally preserving the culture and history of the university and the Rio Grande Valley.

Read the full ad here.

Job: Digital Humanities Developer, University of Virginia Library Scholars’ Lab

Thu, 06/29/2017 - 13:00

From the ad:

You might have seen our opening for a Senior Developer—we’re now seeking an additional colleague for our R&D team: DH Developer! Apply here (posting number #0621212), or read on for more information.

We welcome applications from women, people of color, LGBTQ, and others who are traditionally underrepresented among software developers. In particular, we invite you to contact us even if you do not currently consider yourself to be a software developer. We seek someone with the ability to collaborate and to expand their technical skill set in creative ways.

Read more here.

Announcement: Come Play in the Omeka S Sandbox

Thu, 06/29/2017 - 12:30

From the announcement:

Have you been intrigued by the posts and tweets about Omeka S but haven’t quite got around to installing it? Or have you just found out about Omeka S and are wondering what, exactly, it does? We have good news for you!

We are happy to announce the Omeka S Sandbox, a space to explore, play with, and test out the functionality of Omeka S!

Read more here.

CFP: The Wearable and Tangible Possible Worlds of DH @ HASTAC 2017

Thu, 06/29/2017 - 12:00

From the CFP:

Building on the 2016 HASTAC Wearables and Tangible Computing Research Charrette, we are hosting an exhibition at HASTAC 2017 (Nov 2-4, Florida). We invite proposals for participation from scholars, artists, and activists at both student and professional levels. In particular, we are eager to see emerging and exploratory work in the broad range of wearables and tangible computing. In keeping with the Possible Worlds thematic of the event, proposals can be past, present, or future oriented and speculative work is welcome.

Read more here.

Resource: Ways to Compute Topics over Time, Part 2

Thu, 06/29/2017 - 11:30

From the resource:

This is the second in a series of posts which constitute a “lit review” of sorts, documenting the range of methods scholars are using to compute the distribution of topics over time.

Graphs of topic prevalence over time are some of the most ubiquitous in digital humanities discussions of topic modeling. They are used as a mechanism for identifying spikes in discourse and for depicting the relationship between the various discourses in a corpus.

Read more here.

Editors’ Choice: Computers & Writing Session F1 – Critical Making As Emergent Techne

Thu, 06/29/2017 - 11:00

The panel “Critical Writing as Emergent Techne” worked with both criticality and technology, showing how both critical discourse and hands-on, constructive practices could reinforce each other in the college writing classroom.

Anthony Stagliano opened the session with a paper that situated “critical making” as a position that works to avoid both the “Scylla” of self-destructive or navel-gazing skepticism/criticality and the “Charybdis” of an over-excited indulgence in technology in the classroom simply for the sake of technology. The critical maker, Stagliano said, looks at technology with a subversive but playful eye, looks to see how the technologies work, how they can be opened up and modified, how they may be used to challenge the status quo, and how they can lead to perceptual shifts in conversations, in what can be done, what can be seen. “Critical making at its best is affirmative rather than negative,” Stagliano said; it’s about affirming possibilities, seizures, and perversions.

Bree McGregor offered her take in a video that explored her work with critical making in an online class. McGregor wanted to subvert common assumptions about online classes. Students believed that because the class was shorter, lasting only a handful of weeks, and because it was online, there would be less rigor and less interaction required. McGregor sought to work against these expectations by involving the students in critical making exercises. In the class, students were immersed in “maker spaces” that demanded hands-on, tactile rhetorical constructs.

 

Read the full post here.
