Can computers “read” books and what knowledge can we gain from them?
by Pisana Ferrari – cApStAn Ambassador to the Global Village
Franco Moretti, literary critic and co-founder of the Stanford Literary Lab, argues that to truly understand literary history one needs the help of computers to “crunch” data from thousands of books at a time.
A new publication, “Canon/Archive”, collects 11 of the lab’s research pamphlets, covering a wide variety of areas. One finding comes from an analysis of the language of a vast sample of British novels spanning 1780-1890, which shows a clear and steady shift away from words related to moral judgment (modesty, respect, virtue…) toward words associated with concrete description, and toward a less “loud” register: verbs such as “screamed”, “shouted” and “cried” decline, while the neutral “said” gains a near monopoly. This kind of computational analysis of literary texts could serve to detect important but less visible underlying societal changes. It could also potentially track where concepts first appeared and how they spread. In the words of University of Illinois professor Ted Underwood (@Ted_Underwood): “We tend to see literary history as a story of movements, periods, sudden revolutions” (…) “There are also these really broad, slow, massive changes that we haven’t described before”.
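At its core, the kind of analysis described above amounts to counting how often words from a chosen category appear in texts from different periods and comparing the relative frequencies. The sketch below illustrates the idea on an invented two-snippet “corpus”; the word lists, excerpts and decade labels are placeholders for illustration, not the Literary Lab’s actual data or method.

```python
import re

# Hypothetical miniature corpus: decade -> excerpt. Real studies use
# thousands of full novels; these two lines only illustrate the mechanics.
corpus = {
    1780: "She felt it her duty to show modesty and respect, "
          "and virtue guided her. No! she screamed.",
    1880: "He said the road was wet. She said the train left at noon. "
          "The grey stone wall said nothing.",
}

# Illustrative word categories, loosely echoing the ones in the article.
MORAL = {"modesty", "respect", "virtue", "duty"}
LOUD_SPEECH = {"screamed", "shouted", "cried"}

def relative_freq(text, wordset):
    """Share of tokens in `text` that belong to `wordset`."""
    tokens = re.findall(r"[a-z]+", text.lower())
    hits = sum(1 for t in tokens if t in wordset)
    return hits / len(tokens) if tokens else 0.0

for decade in sorted(corpus):
    text = corpus[decade]
    print(decade,
          "moral:", round(relative_freq(text, MORAL), 3),
          "loud:", round(relative_freq(text, LOUD_SPEECH), 3))
```

Plotted over many decades rather than two data points, such frequency curves are what reveal the “broad, slow, massive changes” Underwood refers to.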