
Digital humanities: How data analysis can enrich the liberal arts

IT ALL STARTED with a preposition. In 1941 Father Roberto Busa, a Roman Catholic priest, started noting down as many uses of the word “in” as he could find in the Latin work of Thomas Aquinas, a medieval theologian and saint. Eight years and 10,000 handwritten cards later he completed his linguistic analysis of Aquinas’s “interiority”—his introspective faith—at Rome’s Pontifical Gregorian University. By then he had a suspicion that his work could be done far more efficiently. He started hunting for “some type of machinery” to speed up his new project, recording the context of all 10m words written by Aquinas.

Father Busa’s zeal took him to the office of Thomas Watson, IBM’s chairman. Soon he had switched from handwritten cards to IBM’s punch-card machines, before adopting magnetic tape in the 1950s. In the 1960s dozens of full-time typists were involved. By 1980, when his team finally printed the “Index Thomisticus” in 56 volumes, they had spooled through 1,500km (930 miles) of tape. A CD-ROM containing 1.4GB of data came out in 1992, with a website following in 2005. The 97-year-old priest died in 2011. But not before he had initiated a new quest, to annotate the syntax of every sentence in the Index Thomisticus database.

Such is the creation story of the digital humanities, a broad academic field including all sorts of crossovers between computing and the arts. The advances since its punch-card genesis have been “enormously greater and better than I could then imagine,” remarked Father Busa in his old age. “Digitus Dei est hic! [The finger of God is here!]” Almost every humanistic composition imaginable has been rendered in bytes. Aquinas’s works are a speck in the corpus of Google Books, which contains at least 25m volumes and perhaps two trillion words. Naxos, a music service, has annotated 2.4m classical pieces with authorial biographies and instrumentation. Spotify, a streaming service, has 60m tunes, each with metadata about tempo, time signatures and timbre.

What started as a niche pursuit is growing rapidly. Google Scholar now contains about 75,000 academic articles and essays that mention “digital humanities”. That total is already bigger than for “Napoleon Bonaparte” (57,000) or “Romeo and Juliet” (66,000). Nearly half of the 75,000 articles were published since 2016.

Time and the machine

Digitisation’s clearest benefits are speed and scale. Because of decades of exponential growth in computing sophistication, projects that once lasted a lifetime—literally, for Father Busa—now require a fraction of it. Take the work of Barbara McGillivray at the Alan Turing Institute, Britain’s national centre for data science. Having done her PhD in computational linguistics on the “Index Thomisticus”, she wanted to create a similar resource for ancient Greek. After starting as the institute’s first humanist in 2017, she and a colleague needed just three months to convert 12 centuries of classics into an annotated corpus of 10m words. The final product compresses Homer, Socrates and Plato into 2.5GB of tidy Extensible Markup Language (XML), complete with the grammatical properties of each word.
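
It is easy to picture what such a corpus looks like on disk. The snippet below sketches one possible shape for a single annotated sentence (the opening words of the Iliad) and reads it with Python's standard library; the element and attribute names are hypothetical, since the corpus's actual schema is not described here.

```python
# A hypothetical entry in an annotated Greek corpus, parsed with the
# standard library. The element and attribute names here are invented
# for illustration; the real corpus's schema may differ.
import xml.etree.ElementTree as ET

sample = """
<sentence id="1">
  <word form="μῆνιν" lemma="μῆνις" pos="noun" case="accusative"/>
  <word form="ἄειδε" lemma="ἀείδω" pos="verb" mood="imperative"/>
  <word form="θεά" lemma="θεά" pos="noun" case="vocative"/>
</sentence>
"""

for word in ET.fromstring(sample).iter("word"):
    print(word.get("form"), word.get("lemma"), word.get("pos"))
```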

Curating such enormous archives is just the starting-point. The trick is to turn the data into interesting findings. Researchers have been trying to do that almost since Father Busa began punching cards. From the late 1950s Frederick Mosteller and David Wallace, two statisticians, spent several years using a room-sized IBM 7090 to calculate the frequency of words in the Federalist papers, written by Alexander Hamilton, James Madison and John Jay. They inferred that 12 anonymous essays were probably written by Madison, based on certain tics: he rarely used “upon”, for example, whereas Hamilton often did.
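
The mechanics behind such tics are simple enough to sketch. Mosteller and Wallace actually fitted Bayesian models to the rates of “function words”; the toy version below, with placeholder strings standing in for the real essays, just compares how often each author uses a marker word per 1,000 words.

```python
# Toy version of Mosteller and Wallace's approach: compare how often
# each candidate author uses function words such as "upon". Their real
# study fitted Bayesian models to word rates; this sketch just reports
# rates per 1,000 words. The texts are placeholders, not the essays.
import re
from collections import Counter

def rate_per_thousand(text: str, marker: str) -> float:
    words = re.findall(r"[a-z']+", text.lower())
    return 1000 * Counter(words)[marker] / max(len(words), 1)

hamilton = "... essays known to be Hamilton's ..."
madison = "... essays known to be Madison's ..."
disputed = "... a disputed essay ..."

for marker in ("upon", "whilst", "while"):
    print(marker, [rate_per_thousand(t, marker)
                   for t in (hamilton, madison, disputed)])
# If the disputed essay's rates sit near Madison's (e.g. "upon" close
# to zero), that counts as evidence for his authorship.
```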

Advances in machine learning have given Ms McGillivray a far shinier array of tools. Along with four co-authors, she tested whether an algorithm could track the meaning of Greek words over time. They manually translated 1,400 instances of the noun kosmos, which initially tended to denote “order”, then later shifted to “world” (a celestial meaning that survives in the English “cosmos”). Encouragingly, the machine agreed. A statistical model reckoned that in 700BC kosmos was frequently surrounded by “man”, “call” and “marketplace”, a cluster suggesting “order”. By 100AD a second cluster emerged, suggesting “world”: “god”, “appear” and “space”.
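
The underlying intuition, that a word's meaning shows in the company it keeps, can be illustrated in a few lines. The study itself trained statistical models over the whole corpus; the sketch below merely counts which words appear within a small window of kosmos in two time slices, using invented stand-in sentences.

```python
# Illustration of tracking meaning through neighbours. The real study
# fitted statistical models to the full corpus; this sketch counts the
# words near "kosmos" in two time slices. The sentences are invented
# stand-ins, not corpus data.
from collections import Counter

early_slice = [
    "man call kosmos the good order of the marketplace",
    "the kosmos of the army is its order",
]
late_slice = [
    "god made the kosmos appear in boundless space",
    "the whole kosmos is the world of god",
]

def neighbours(sentences, target="kosmos", window=3):
    counts = Counter()
    for sentence in sentences:
        words = sentence.split()
        for i, w in enumerate(words):
            if w == target:
                counts.update(words[max(0, i - window):i])
                counts.update(words[i + 1:i + 1 + window])
    return counts

print(neighbours(early_slice).most_common(4))  # order-flavoured cluster
print(neighbours(late_slice).most_common(4))   # world-flavoured cluster
```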

The thrill of getting “a computer to blindly agree with us”, explains Ms McGillivray, is that the method can now be applied easily to the 64,000 other distinct words in the corpus. She has already spotted that paradeisos, a Persian loan-word for “garden”, took on its theological cluster of “woman”, “god” and “eat” around 300BC, when the Old Testament was first translated into Greek. At a few keystrokes, the algorithm tapped into one of history’s great intellectual exchanges, between Jewish theology and Greek literature.

Take a byte

The most compelling number-crunching of this sort has focused on English writing from 1750-1900, thanks to that era’s rapid expansion of printed texts. Such Victorian data-mining has mostly taken place in America. The Stanford Literary Lab was established in 2010. In contrast to “close reading”, in which humans spot nuances on a couple of pages, the lab’s 60-odd contributors have pioneered “distant reading”: getting computers to detect undercurrents in oceans of text.

An early project dredged through nearly 3,000 British novels from 1785-1900, to examine which types of language had gone in and out of style. The authors, Ryan Heuser and Long Le-Khac, developed a tool called “the Correlator”, which calculates how frequently a given word appeared in each decade, and which other words experienced similar fluctuations. Though the maths was crude, it provided some surprisingly coherent clusters: “elm”, “beech” and “branch” closely tracked “tree”, for example. In order to detect broader trends, the authors then hunted for clusters that demonstrated sustained rises or falls in popularity.
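
The arithmetic behind the Correlator is plain: treat each word as a vector of per-decade frequencies, then rank other words by how strongly their vectors move with a seed word's. Here is a minimal sketch with made-up numbers (the real tool ran over thousands of words and real counts).

```python
# Minimal sketch of the "Correlator" idea: represent each word by its
# frequency in each decade, then rank words by Pearson correlation with
# a seed word. Frequencies below (per million words, 1780s to 1890s)
# are made up for illustration.
from statistics import correlation  # requires Python 3.10+

decade_freqs = {
    "tree":    [110, 115, 130, 150, 160, 175, 190, 200, 215, 230, 240, 255],
    "elm":     [8, 9, 10, 12, 13, 15, 16, 17, 19, 20, 21, 23],
    "modesty": [60, 57, 52, 48, 44, 40, 35, 31, 27, 24, 20, 17],
}

def correlates(seed, table):
    return sorted(((word, correlation(table[seed], freqs))
                   for word, freqs in table.items() if word != seed),
                  key=lambda pair: -pair[1])

print(correlates("tree", decade_freqs))
# "elm" tracks "tree" closely (correlation near 1); "modesty" moves
# the opposite way (correlation near -1).
```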

First they took the words “integrity”, “modesty”, “sensibility” and “reason”, and built a cohort of 326 abstract words correlated with them. These sentimental and moralistic terms fell increasingly out of fashion, from providing roughly 1% of all words in 1785 to half that in 1900. For contrast, they then looked for a cohort of concrete terms, finding 508 correlates of the word “hard”. These fell into distinct sub-clusters: actions (“see”, “come”, “go”), body parts (“eyes”, “hand”, “face”), physical adjectives (“round”, “low”, “clear”), numbers and colours. Across the period, this “hard” cohort rose from 2.5% of words to nearly 6%, a pattern that led from Elizabeth Bennet’s decorous drawing room to Sherlock Holmes’s shady alleys. Strikingly, the trend-lines suggested that the movement from abstract words to concrete ones had been steady, rather than a sudden Dickensian shift.

Such quantitative studies don’t have to overturn grand theories to be interesting. The Correlator’s findings could sit comfortably within many books about the rise of novelistic realism. Sometimes, the benefit (and pleasure) of crunching literary data comes simply from measuring the strength and timing of historical tides. A second study from the Stanford Literary Lab concurred that 19th-century British novelists gradually removed sentimental words. The author, Holst Katsma, found a steady decline in melodramatic speaking verbs. “Exclaimed”, “cried” and “shouted” accounted for 19% of utterances in around 1800, but only 6% by 1900. (Novelists became fonder of “said”.)

Nonetheless, digital humanists enjoy going against the grain. Few have found as many quirky statistical patterns as Ted Underwood, a professor of English and information sciences at the University of Illinois. In 2016 Mr Underwood set out to measure what share of the description in novels is devoted to female characters, and how that share had changed over time.

Mr Underwood took nearly 100,000 novels from 1800-2009 and an algorithm that apportions nouns, adjectives and verbs to specific characters. He found that women received about 50% of descriptions in 1800, but barely 30% by 1950. This mirrored a fall in the share of novels written by women. As writing became more lucrative, it veered away from the world of genteel ladies to that of grubby men. It was only after 1950 that female authorship and characterisation rebounded. Sabrina Lee, one of Mr Underwood’s colleagues, notes that this coincided with the rise of paperback publishing and romance imprints. Even so, women’s share of writing and description was still only about 40% in 2010.
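
Mr Underwood's pipeline relied on a trained model to link descriptive words to named characters. A crude stand-in for the idea, attributing each adjective to the most recent gendered pronoun, shows the shape of the measurement; the word lists and sample text below are invented.

```python
# Crude stand-in for the character-description measurement. The real
# study used a trained model to assign words to named characters; this
# toy attributes each adjective to the most recent gendered pronoun
# and reports women's share of descriptions. Word lists are invented.
import re

ADJECTIVES = {"beautiful", "stern", "clever", "grubby", "genteel"}
FEMALE = {"she", "her", "hers"}
MALE = {"he", "him", "his"}

def female_share(text: str) -> float:
    words = re.findall(r"[a-z]+", text.lower())
    fem = masc = 0
    last = None  # gender of the most recent pronoun seen
    for w in words:
        if w in FEMALE:
            last = "f"
        elif w in MALE:
            last = "m"
        elif w in ADJECTIVES and last is not None:
            if last == "f":
                fem += 1
            else:
                masc += 1
    return fem / max(fem + masc, 1)

print(female_share("He was grubby. She was clever and stern."))  # ~0.67
```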

Some of Mr Underwood’s investigations require little modelling and a lot of counting, such as an article that examined a sweeping literary claim by Thomas Piketty, an economist. Mr Piketty reckoned that widespread inflation after 1914 made people warier of wealth, and so “money—at least in the form of specific amounts—virtually disappeared from literature”.

Instinctively, Mr Piketty’s claim may feel true. Victorian characters often agonised over inheritance or debt, such as reckless Fred Vincy in “Middlemarch”, who constantly counts the pounds and shillings he has gambled away. By contrast “The Great Gatsby”, a modernist meditation on the “young and rich and wild”, mentions dollars just ten times. However, after combing through 7,700 novels from 1750-1950, Mr Underwood and his co-authors found that these were outliers. The rate at which authors referenced specific amounts of cash nearly doubled in that period. One explanation is that characters increasingly dealt in pocket change rather than fortunes: the median amount mentioned fell from nearly 60% of annual income to less than 5%.
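
The counting itself is straightforward, as a toy version suggests. Everything below (the regex, the passages, the number-word lookup and the income figure) is invented for illustration; the real study had to cope with several currencies and written-out sums.

```python
# Toy version of the money-counting exercise: extract explicit pound
# amounts with a regex, then express the median sum as a share of a
# typical annual income. Passages and the income figure are invented.
import re
from statistics import median

TYPICAL_ANNUAL_INCOME = 300  # hypothetical figure, in pounds

passages = [
    "He owed ninety pounds at the start and lost 160 pounds at cards.",
    "The cab fare was 2 pounds, and the tip a further 1 pound.",
]

NUMBER_WORDS = {"one": 1, "ten": 10, "ninety": 90}  # tiny demo lookup

def pound_amounts(text):
    for match in re.finditer(r"(\d+|[a-z]+) pounds?", text.lower()):
        token = match.group(1)
        if token.isdigit():
            yield int(token)
        elif token in NUMBER_WORDS:
            yield NUMBER_WORDS[token]

sums = [amount for p in passages for amount in pound_amounts(p)]
print(sums)                                  # [90, 160, 2, 1]
print(median(sums) / TYPICAL_ANNUAL_INCOME)  # median sum as income share
```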

Because e-books are abundant and computational linguistics dates back to the dawn of the digital age, most humanistic number-crunching so far has been literary in nature. But other subjects are starting to produce peer-reviewed quantitative studies, too. In history, Proceedings of the National Academy of Sciences published a paper in 2018 that found Maximilien Robespierre was the most influential rhetorician of the French revolution. The authors judged this by how often members of the National Constituent Assembly copied his innovations during 40,000 speeches. In anthropology, a team of researchers published an article in Nature in 2019 that examined how religions developed, using a 10,000-year dataset of 414 civilisations. They found that societies tended to adopt moralising gods after they had already created complex hierarchies and infrastructure. This challenges the idea that humans needed divine rules in order to band together.

Similarly, a study on painting from 2018 found that Piet Mondrian, a Dutch modernist, dabbled with a much wider range of colour contrasts during his career than his European contemporaries. And a paper from 2020 calculated that Sergei Rachmaninoff composed the most distinctive piano pieces relative to his peers, using a similar measure of innovation to the one in the Robespierre paper (but judging by groups of notes, rather than words).

Despite data science’s exciting possibilities, plenty of academics object to it. The number-crunchers are not always specialists in the arts, they point out. Their results can be predictable, and the maths reductive and sometimes sketchy. The perspectives, too, are often white, male and Western. Many also fear that funding for computer-based projects could impoverish traditional scholarship. Three academics complained in the Los Angeles Review of Books in 2016 that this “unparalleled level of material support” is part of the “corporatist restructuring of the humanities”, fostered by an obsession with measurable results.

Brave new world

The arts can indeed seem as if they are under threat. Australia’s education ministry is doubling fees for history and philosophy while cutting those for STEM subjects. Since 2017 America’s Republican Party has tried to close down the National Endowment for the Humanities (NEH), a federal agency, only to be thwarted in Congress. In Britain, Dominic Cummings—who until November 2020 was chief adviser to Boris Johnson, the prime minister—advocates greater numeracy while decrying the prominence of bluffing “Oxbridge humanities graduates”. (Both men studied arts subjects at Oxford.)

However, little evidence yet exists that the burgeoning field of digital humanities is bankrupting the world of ink-stained books. Since the NEH set up an office for the discipline in 2008, it has received just $60m of its $1.6bn kitty. Indeed, reuniting the humanities with sciences might protect their future. Dame Marina Warner, president of the Royal Society of Literature in London, points out that part of the problem is that “we’ve driven a great barrier” between the arts and STEM subjects. This separation risks portraying the humanities as a trivial pursuit, rather than a necessary complement to scientific learning.

Until comparatively recently, no such division existed. Omar Khayyam wrote verse and cubic equations, Ada Lovelace believed science was poetical and Bertrand Russell won the Nobel prize for literature. In that tradition, Dame Marina proposes that all undergraduates take at least one course in both humanities and sciences, ideally including a language and computing. Introducing such a system in Britain would be “a cause for optimism”, she thinks. Most American universities already offer that breadth, which may explain why quantitative literary criticism has thrived there. The sciences could benefit, too. Studies of junior doctors in America have found that those who engage with the arts score higher on tests of empathy.

Ms McGillivray says she has witnessed a “generational shift” since she was an undergraduate in the late 1990s. Mixing her love of mathematics and classics was not an option, so she spent seven years getting degrees in both. Now she sees lots of humanities students “who are really keen to learn about programming and statistics”. A recent paper she co-wrote suggested that British arts courses could offer basic coding lessons. One day, she reckons, “It’s going to happen.”

This article appeared in the Christmas Specials section of the print edition under the headline "The book of numbers"