Digital Methods of Analysing and Reconstructing Ancient Greek and Latin Texts

The digital study of the ancient Greek and Roman era is characterized by a specific context. Antiquity offers fewer sources and perhaps less text material than later times, but these sources have, to a large extent, already been digitized. Since the 1980s, a huge number of Greek and Latin text collections have become digitally available in several databases. The combination of these factors has pushed classicists to investigate new data-driven methods of enquiry. Compelled to introduce innovative approaches for the interpretation and analysis of their sources, classical scholars have not only digitized their work routine but also embraced digital technology in order to extract information systematically from Greek and Latin texts or other media, such as coins and their digital data sets. Searching, visualization and the analysis of complex data are in the process of being revolutionised. [1]
Combining digital and analogue methods, often framed as an opposition between “quantitative” and “qualitative” approaches, remains a matter for debate. The new opportunities arising from digital approaches to Greek and Latin texts are not viewed as unproblematic; their impact and implications do not and cannot remain unquestioned. [2] The seductive charm of digital technologies, and the associated attraction of obtaining “impartial,” “objective” scientific results produced by a data-driven, mechanical process, has not made scholars blind to the ramifications that come with the overwhelming speed of constantly renewed and ever-developing techniques. [3] The issue becomes ever more urgent as awareness grows, with every year that passes, of how Digital Classics can and does creatively change established traditional methods.
These methodological controversies must constantly be taken into account and reflected upon when applying a digital approach in the field of Classics. And it seems that the community does take these challenges seriously: the repercussions of the digital turn are constantly discussed at workshops and conferences, and in numerous publications which tackle the issue of digital transformation. [4]
Our volume takes these methodological discussions into account while heading in a different direction: it presents recent developments in the field of “computational” Classics, that is, the digital analysis of Greek and Latin texts. [5] These approaches enable scholars to do traditional philological work, such as debating and contextualising word meanings and text passages or searching for certain patterns within a text corpus, while using search and analytic tools and visualisation techniques that are more powerful, more reliable, perhaps more transparent, and certainly more flexible than those previous generations of digitally minded scholars had at their disposal.
During the last few years, digital text analysis has increased in prominence throughout the humanities. The ever-growing availability of data has opened scholars to the possibilities of quantitative text analysis as a method of learning about the form and content of texts. Greek and Latin text analysis in particular poses new questions through the development of text analysis tools and technologies. These methods are the result of the metamorphosis of the original texts into new practices of creating and interpreting digital representations. [6] Linking isolated data such as languages, texts or facts, and the direct presentation of linked data, represents a promising novelty for Digital Classics. Drawing on the methods of text analysis developed in statistics, data visualisation and corpus linguistics (cognate to the methods of “textométrie” in French-speaking academia), ancient texts are re-read in an innovative way which is essentially different from previous classical studies.
It is vital that ancient resources are made available as fast as possible for use as a basis for search queries, text mining or collocation graphs, to cite a few examples. Up until now, however, these have for the most part been digitized versions of printed books. But what of the promise and challenges of digital editions of Greek and Latin texts? Prominent cases are the Thesaurus Linguae Graecae (TLG), the Perseus Digital Library, the Packard Humanities Institute (PHI) databases for Latin texts and for Greek inscriptions, the IntraText Library, the Bibliotheca Teubneriana Latina (BTL), the Library of Latin Texts (LLT), and the Digital Library of Late Antique Latin Texts (digilibLT). [7] All these platforms, which offer various search engines, statistical tools, and in some cases morphological parsing, enable their users to carry out a wide range of search queries and linguistic analyses. However, these digital collections are not “critical” editions; they are digitized databases of one version of the texts, lacking significant features of a critical edition such as an apparatus criticus. Their structure brings some encoding advantages, as only one selected edition is digitized, but it also comes with considerable disadvantages. One of these is that the user is unable to search and analyze textual variants that deviate from the text chosen by the editor.
In recent years a number of digital classicists have addressed this issue. Thus “hybrid” editions which combine the constitutio textus with a documentation of variant readings have been developed. A basic form of these are PDF editions in born-digital files, e.g. the PHerc project. [8] More articulated platforms, such as Catullus Online, combine a text with critical annotations. [9] However, this approach is challenged by digital classicists who consider it too indebted to traditional, that is analogue, versions of the critical text, as it remains similar to printed editions. [10] In their view, a digital critical edition must exploit the potential that comes with a non-analogue digital structure. This has resulted in the development of hypertexts or “multitexts” like the Homer Multitext Project (HMT), [11] which represents a radically different way of displaying a text. HMT established a dynamic relational system between multiple editions that are set in reference to one another, transcending a fixed arrangement of text and apparatus criticus by using a URN architecture instead. The multilayered framework of a digital hypertext offers possibilities that are not available to printed critical editions. By displaying different variants and conjectures, it allows the user to view several readings of the texts simultaneously, without the strict hierarchy that every printed version implies. In their contribution to the volume, Melody Wauke, Charles Schufreider and Neel Smith use the environment of the HMT to study the Venetus A manuscript of the Iliad, and to inquire into the physical organization of this manuscript. Applying topic modelling, they find strong evidence that the scribe of the Venetus A planned the layout of the manuscript so that he could incorporate material from a variety of no longer extant sources.
This development exemplifies how Digital Classics is progressing: rather than digitizing non-digital sources, the field is moving towards digitally conceived, born-digital media, in other words towards the creation of digital texts. [12] Viewed in this light, the hypertext or multitext architecture not only challenges the Urtext model; [13] it also pioneers the exploration of “holistic” approaches which transcend the choice of a specific editor, building a joint network of all the available extant data and leading to potentially infinite information layers. [14]
One of the fields where a holistic, born-digital approach is particularly coming to fruition is Papyrology. The documentation of papyri and their readings is a complex matter which printed editions often cannot do justice to. The digital space offers possibilities “where the objects of study undergo a process of dematerialization.” [15] In his article in this volume, Nicola Reggiani elaborates on this point and discusses to what extent digital editions of ancient papyri will prevail over traditional, printed editions. He concludes that the printed medium is physically limited and that the structure of analogue editions does not correspond to the many-layered nature of ancient texts. In contrast, Reggiani argues, digital editions offer a hyper-dimensional space which fits the ancient approach to text transmission, an approach that was not characterised by textual reconstruction or by fixation on one given version of a text.
Another important issue with digital editions of papyri is that existing databases are sometimes ill-suited to linguistic annotation, as linguistic tagging is often incompatible with the TEI/EpiDoc XML tagging system. [16] The platform Sematia, [17] developed by Erik Henriksson and Marja Vierros, addressed this problem: it generates linguistic layers from TEI/EpiDoc XML documents by automatically adapting any text copied from a papyrological databank for linguistic annotation. Sematia has since been replaced by the PapyGreek portal, [18] which implements some minor changes and adaptations concerning tokenization and text division. Using these platforms, Marja Vierros and Polina Yordanova explore in their contribution the use of the Genitive Absolute in literary texts and documentary papyri, and conclude that the Genitive Absolute (apart from well-known instances in Greek drama) was mostly used in official petitions but not in private letters. In another contribution, Lou Burnard poses a general question about the utility of standardization in a scholarly context, proposing, however, that documentation of formal encoding practice is an essential part of scholarship. After discussing the range of information such documentation entails, he explores the notion of conformance proposed by the TEI Guidelines, suggesting that it must operate at both a technical, syntactic level and a less easily verifiable semantic level.
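The core move behind such pipelines, deriving a plain, annotatable text layer from TEI/EpiDoc XML, can be illustrated with a much-simplified sketch. The toy markup below uses common EpiDoc element names (`w`, `supplied`, `gap`), but the fragment itself is invented and the real Sematia/PapyGreek pipeline is far more elaborate:

```python
import xml.etree.ElementTree as ET

# A toy EpiDoc-like fragment: <supplied> marks editorial restorations,
# <gap> marks lost text (hypothetical, much-simplified markup).
epidoc = """<ab>
  <w>και</w> <w><supplied reason="lost">τω</supplied>ν</w>
  <gap reason="lost" unit="character" quantity="3"/>
  <w>γραμματων</w>
</ab>"""

def text_layer(xml_string, keep_supplied=True):
    """Flatten an EpiDoc-like fragment into a token list, either
    keeping or dropping editorially supplied characters."""
    root = ET.fromstring(xml_string)
    tokens = []
    for w in root.iter("w"):
        parts = [w.text or ""]
        for child in w:
            if child.tag == "supplied" and keep_supplied:
                parts.append(child.text or "")
            parts.append(child.tail or "")
        token = "".join(parts).strip()
        if token:
            tokens.append(token)
    return tokens
```

Running `text_layer(epidoc)` yields one layer with the restorations included, while `keep_supplied=False` yields the layer of what the papyrus itself attests, the kind of distinction between scribal and editorial layers that Sematia makes explicit.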
The process of annotation, i.e. enriching the smallest language units of a text (identified through tokenization) with linguistic data such as semantic, morphological or syntactic information, has become one of the most essential aspects of corpus linguistics. [19] By encoding a text corpus, annotation makes automated linguistic studies possible. In their article, Alek Keersmaekers and Mark Depauw address the potential of such approaches, showing how machine-learning tools like part-of-speech taggers and syntactic parsers can be trained on literary texts. Looking at the documentary papyri and ostraca from Egypt, they present ways in which the annotation of sources through automated analysis of texts may be achieved.
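The idea of training a tagger on annotated material and then applying it to new text can be reduced, for illustration, to a naive unigram baseline: for each word form, remember the tag it most often carries in the training data. The Latin examples and tag labels below are invented for the sketch; the taggers discussed by Keersmaekers and Depauw use far richer contextual models:

```python
from collections import Counter, defaultdict

def train_unigram_tagger(tagged_sentences):
    """Learn, for each word form, the tag it most often carries
    in the training material."""
    counts = defaultdict(Counter)
    for sentence in tagged_sentences:
        for word, tag in sentence:
            counts[word][tag] += 1
    return {word: tags.most_common(1)[0][0] for word, tags in counts.items()}

def tag(model, tokens, fallback="NOUN"):
    """Tag a token sequence, guessing a fallback tag for unseen forms."""
    return [(tok, model.get(tok, fallback)) for tok in tokens]

# Toy "treebank" of two tagged Latin sentences (invented data).
training = [
    [("puella", "NOUN"), ("rosam", "NOUN"), ("amat", "VERB")],
    [("agricola", "NOUN"), ("puellam", "NOUN"), ("videt", "VERB")],
]
model = train_unigram_tagger(training)
```

Even this crude baseline makes the central trade-off visible: accuracy depends entirely on how well the training corpus covers the word forms of the target texts, which is exactly why training on literary Greek and applying to documentary papyri is a non-trivial step.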
Using the digital approach to analyse ancient texts sometimes requires unconventional means. For example, scholars analysing Greek and Latin are confronted with a particular challenge, as morphological parsing has until recently been developed predominantly for modern languages. In addition, modern languages require a morphological parsing different from that of ancient languages, which makes a direct adaptation ineffective. Digital classicists have therefore turned to developing not a single Greek parser but rather a system for building and operating various corpus-specific parsers. These not only identify valid forms, relating a lexical token to its lexical entry and form, but also convey additional information about the morphological and inflectional features this form encodes. [20] The CTS URN notation, mentioned above in the context of the HMT, was developed to provide a technology-independent, machine-actionable notation for the transmission of these references.
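The anatomy of such a reference can be shown with a small parser for the canonical URN shape. This is a sketch following the published CTS URN pattern (`urn:cts:namespace:work-hierarchy:passage`); it ignores subreferences and other refinements of the full specification:

```python
def parse_cts_urn(urn):
    """Split a CTS URN of the form
    urn:cts:<namespace>:<textgroup.work[.version[.exemplar]]>[:<passage>]
    into its components."""
    parts = urn.split(":")
    if parts[:2] != ["urn", "cts"] or len(parts) < 4:
        raise ValueError("not a CTS URN: " + urn)
    namespace = parts[2]
    work = parts[3].split(".")  # textgroup, work, optional version/exemplar
    passage = parts[4] if len(parts) > 4 else None
    return namespace, work, passage

# The Venetus A text of Iliad 1.1 in the HMT's notation:
ns, work, passage = parse_cts_urn("urn:cts:greekLit:tlg0012.tlg001.msA:1.1")
```

Because the work hierarchy names a specific version (`msA`, the Venetus A) rather than an editor's reconstructed text, two editions of the same passage get two distinct, machine-comparable identifiers, which is what makes the multitext architecture navigable.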
Ancient texts can also be annotated through the Ancient Greek and Latin Dependency Treebank, developed by Giuseppe Celano, Gregory Crane, Bridget Almas and others. [21] The treebank system, as a grammatical parsing of ancient text corpora, is of use not only for research but also for teaching. It allows for the visualization of complex syntactical structures at school and university, and it also works as an effective exercise in the tagging of lemmata, morphological word-forms and syntactical dependencies within a sentence structure by pupils and scholars. [22] The project consists of a vast corpus of Greek and Latin texts which are annotated semantically, morphologically and syntactically. [23] The most recent annotation is carried out on the Arethusa platform, [24] which enables users to annotate three levels: morphology, syntax and advanced syntax. In his contribution to this volume, Francesco Mambrini presents such a treebank-based approach to the language of Sophoclean characters, discussing how recurring patterns in syntactic constructions, spotted by digital tools, can suggest further directions for research.
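At its core, a treebanked sentence is a table of tokens, each carrying a lemma, a morphological analysis, and a pointer to its syntactic head; querying such a structure is then straightforward. A minimal sketch, with an invented one-sentence Latin example and simplified field names loosely modelled on the CoNLL-style exports such treebanks provide:

```python
# One toy treebanked Latin sentence ("puer legit librum"): each token
# records id, form, lemma, coarse morphology, head id (0 = root), relation.
sentence = [
    {"id": 1, "form": "puer",   "lemma": "puer",  "morph": "NOUN-nom", "head": 2, "rel": "SBJ"},
    {"id": 2, "form": "legit",  "lemma": "lego",  "morph": "VERB-3sg", "head": 0, "rel": "PRED"},
    {"id": 3, "form": "librum", "lemma": "liber", "morph": "NOUN-acc", "head": 2, "rel": "OBJ"},
]

def root(sentence):
    """Return the token that heads the whole sentence."""
    return next(tok for tok in sentence if tok["head"] == 0)

def dependents(sentence, head_id):
    """Return the forms of all tokens governed by the given head."""
    return [tok["form"] for tok in sentence if tok["head"] == head_id]
```

Queries like "all accusative objects of a given verb lemma, across a whole author" are simply filters over such tables, which is what makes pattern-hunting of the kind Mambrini describes feasible at corpus scale.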
Another statistical method, the so-called “specific collocations” method, is based on the z-score test, which identifies the words (forms or lemmas) attracted by the words constituting the lexical field under consideration, and also estimates the closeness of their relation. This process is discussed in the contribution by Dominique Longrée, who compares the specific collocations of several words belonging to the same semantic field, thus helping refine the description of the inherent semes of each word. The Latin lexical fields he examines are the abstract field of ‘power’ and the concrete field of the ‘cave’.
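The core computation can be sketched in a few lines: count how often each word appears in a window around the pivot word, compare that with the count expected if the word were spread uniformly through the corpus, and normalize by the expected count's standard deviation. This is a deliberately simplified version of the z-score used in textometric practice, with an invented miniature Latin "corpus":

```python
import math
from collections import Counter

def collocation_zscores(tokens, pivot, window=3):
    """Score each word co-occurring with `pivot` inside a +/- window.
    z = (observed - expected) / sqrt(expected), where the expected
    count assumes the word is spread uniformly through the corpus."""
    n = len(tokens)
    freq = Counter(tokens)
    observed = Counter()
    slots = 0  # total number of token positions inside all windows
    for i, tok in enumerate(tokens):
        if tok != pivot:
            continue
        for j in range(max(0, i - window), min(n, i + window + 1)):
            if j != i:
                observed[tokens[j]] += 1
                slots += 1
    return {
        word: (obs - freq[word] * slots / n) / math.sqrt(freq[word] * slots / n)
        for word, obs in observed.items()
    }
```

A high z-score marks a word that turns up near the pivot far more often than chance would predict, exactly the kind of "attraction" within a lexical field that the specific-collocations method quantifies.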
These are examples of how computational analysis might transform and enhance our understanding of ancient texts and sources. On the basis of Plato’s Gorgias, Bénédicte Pincemin and Stéphane Marchand present another example of the “textometric approach,” as implemented in the open-source software TXM. Behind a detailed typology of queries that can be run on the corpus (queries about word frequency, syntagmatic or paradigmatic uses of word meaning, word evolution, the contrastive characterization of texts, and the corpus’s internal structures), the key idea is that it is possible to elaborate tools combining advanced computational analysis with philological attention and sensibility towards a text’s integrity and richness.
The digital automated approach to ancient texts has also been hugely influenced by text mining, a collection of methods to process texts and corpora in order to extract information and data from ancient sources through an analysis of patterns, sequences or co-occurrences. eAqua, one project spearheading the development of algorithm-based methods for analysing ancient texts, was conducted at the University of Leipzig between 2008 and 2013 under the leadership of Charlotte Schubert. [25] This project generated a platform which allowed users to make quantitative search enquiries in digital text corpora, e.g. co-occurrence analysis; but it also had other features which opened up digital approaches to ancient texts. For example, eAqua suggested the classification of a text by comparing key words which are frequently used in specific genres. Further, it broke new ground in textual restoration: the platform suggested possible readings for a lacuna or an uncertain passage in ancient texts, based on the textual corpus of that author and on an automated list of conceivable alternatives.
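The restoration feature rests on a simple intuition: a gap can plausibly be filled by whatever the surviving corpus attests between the same neighbouring words. A minimal context-matching sketch (the actual eAqua machinery is considerably more sophisticated; the sample "corpus" is the Vulgate's opening of John, used here only as familiar Latin):

```python
from collections import Counter

def suggest_fill(tokens, left, right):
    """Rank candidate fills for a one-token lacuna by how often each
    word is attested between the given left and right neighbours."""
    candidates = Counter()
    for i in range(1, len(tokens) - 1):
        if tokens[i - 1] == left and tokens[i + 1] == right:
            candidates[tokens[i]] += 1
    return candidates.most_common()

corpus = ("in principio erat verbum et verbum erat apud deum "
          "et deus erat verbum").split()
```

For a lacuna flanked by "et … erat", the sketch proposes the corpus-attested fills "verbum" and "deus", ranked by frequency; scaling the same idea to an author's whole oeuvre is what turns it into a usable restoration aid.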
Following up on eAqua, there are projects like eComparatio, which created an environment for viewing different versions of a text, and Digital Plato, which studied the reception of Plato’s works on the basis of an automated search engine that can identify paraphrases of Plato in other texts. Charlotte Schubert, who was the principal investigator of these projects, presents some insights from Digital Plato and illustrates how string-matching algorithms can enhance our understanding of quotations, forms of text re-use, and the direct and indirect text tradition. In her view, algorithm-based analysis is not dependent on individual interpretation and therefore bridges the gap between distant and close reading through a combination of empirical and traditional methods of interpretation.
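String matching of this kind can be approximated with character n-gram overlap: two passages that share many n-grams are candidates for quotation or paraphrase. The sketch below (with transliterated toy strings; Digital Plato's actual algorithms handle word order, morphology and fuzzy variation far more carefully) scores similarity as the Jaccard overlap of n-gram sets:

```python
def char_ngrams(text, n=4):
    """Set of overlapping character n-grams, with whitespace normalized."""
    text = " ".join(text.lower().split())
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def reuse_score(a, b, n=4):
    """Jaccard overlap of n-gram sets: 1.0 for identical strings,
    near 0.0 for unrelated ones."""
    ga, gb = char_ngrams(a, n), char_ngrams(b, n)
    return len(ga & gb) / len(ga | gb) if ga | gb else 0.0

quote      = "ho de anexetastos bios ou biotos anthropoi"
paraphrase = "ho anexetastos bios ou biotos"
unrelated  = "arma virumque cano troiae qui primus ab oris"
```

Because the comparison works below the word level, it tolerates inflectional variation and omitted words, which is why n-gram methods are a common first pass in text-reuse detection before closer philological inspection.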
A potential benefit of statistical text mining is shown by Margherita Fantoli in her article. Through an analysis of morphological and syntactical phenomena via statistical tools, she demonstrates how the use of digital and statistical methods offers new perspectives on language and style. Looking at the second book of Pliny’s Natural History and the seventh book of Seneca’s Natural Questions, she argues that this approach enables scholars to identify phenomena which are not always evident through direct reading. She compiled a corpus containing texts of the main prose literary genres (historiography, biography, novel, philosophy, rhetoric, epistolography, scientific and technical prose) as well as didactic and scientific poems; authors of the republican period stand alongside authors of the “silver age,” in order to give a faithful picture of possible chronological differences. The statistical approach helps to highlight the influence of genre on style, and prevents interesting data from being lost in too large a mass of texts.
All the contributions in this volume show that Digital Classics, in order to take the next step, needs to combine different approaches, providing flexible tools which can be used in several contexts. Yves Ouvrard and Philippe Verkerk, who discuss Collatinus and Eulexis, present the ways in which open-source programmes can be employed in different environments. These programmes were developed to lemmatize Latin or Greek texts and to give a short translation of parsed words; they also contain full digital dictionaries that give all the details of a word’s usage. Both authors demonstrate how, for specific tasks, the core code can be embedded in other contexts.
Rodney Ast, in his article, also calls for a closer and more effective cooperation between Digital Humanities and Classical Philology. Through an examination of papyrology, Ast emphasizes that new platforms should be developed to better facilitate collaborative born-digital research and publishing, and discusses the requirements of working side by side without merging the two fields.
And the volume concludes with the contribution of Gregory Crane, who argues that scholarship is now entering a phase where it will focus upon integrating separate projects seamlessly. Readers should, however, always be able to see where their data is coming from, and which service they are making use of. Crane describes one example of such integration: dynamic links between the Scaife Viewer (the new browsing and searching environment for the Perseus Digital Library) and the New Alexandria Commentary System (an annotation environment developed at Harvard’s Center for Hellenic Studies).
The range of articles in this volume covers methodological challenges as well as examples of digital tools and their application to specific Greek or Latin authors and their texts. All articles in the main body of the volume focus mainly on concrete implementations within the context of Digital Classics, while also addressing general repercussions. Two chapters, however, a prologue (Ast) and an epilogue (Crane), are dedicated to a more universal reflection on the digital analysis of ancient texts and discuss the ramifications that go along with it.
Several contributions to this volume refer to the future potential of their digital approach. Whereas it has become a communis opinio that digital development has significantly changed and will continue to change the way we organize our material, how we analyse our sources and how we communicate our results, [26] scholarship is still divided between those who emphasise the huge potential of digital tools as a means of asking completely new questions, and those who consider the digital approach an unredeemed promise. One argument of the latter group is that digital approaches to ancient Greek and Latin texts have not yielded important original results; rather, digital methods and statistical approaches often reproduce what can be and has been done following conventional approaches. [27]
Such allegations sometimes hit their mark: constant discussion of the digital turn too often resonates with the idea or vision of a revolution or a radical transformation of the field. This discourse may hold true with respect to changes in our daily work routine, as scholars have become “digital”: they mainly use laptops and personal computers for communication and text processing, use search engines to find relevant publications, and read, particularly since the COVID pandemic, digitized books, journal articles and sources. [28] In this sense, a revolution has in fact taken place.
The second aspect of the digital turn, in other words computer-based studies or visualizations of texts, data and objects, often distinguished as the computational approach, [29] cannot live up to the expectations of a similarly rapid transformation, because this “revolution” will take a long time to unfold and involves complex and as yet unresolved methodological problems and issues. Precisely because the opportunities and limitations of new methods require empirical testing, computational approaches were always unlikely to deliver ground-breaking results within a short time. Though this revolution needs time to gestate, progress in many areas, particularly concerning the automated identification of authorship, points in the direction of change. [30]
The assumption that computational approaches merely reproduce what has already been possible through analogue methods should also be contextualized. The fact that algorithmic processes lead to the same or similar results as purely hermeneutic processes constitutes an achievement in itself: it allows for comparison and contrast between the two approaches and serves as a starting point for extending beyond the limits of what has been achieved up until today. Though it remains uncertain whether and to what extent machines will be capable of analysing texts or other media in the same qualitative way as humans, and what this might mean in practice, quantitative methods have yielded and will continue to yield results that alter qualitative readings. [31] For a full appraisal of the development of digital approaches, patience is required, and this is particularly true of Digital Classics. As the study of ancient Greek and Latin was one of the first scholarly disciplines, many aspects of Greek and Latin texts have already been analyzed by over two centuries of scholarship. This makes entirely new findings more difficult to obtain than in disciplines where new sources (newspapers or books) or discourses in social media abound, or where sources have not been so extensively studied. This relative scarcity, however, allows for a thorough questioning of methodologies and of what can and cannot be achieved through computational approaches.
Less hype, and a more complex appraisal of recent developments in computational Greek and Latin studies, might free digital Classicists from the burden of producing revolutionary results as though by magic. Taking the burden of expectation out of the digital, this volume presents ideas that characterise recent trends in the field of digital text analysis. These have provided new and innovative approaches to specific ancient Greek and Latin sources. One should not expect them to fundamentally alter our understanding of Classics within a short period of time, but over the medium term the many small steps presented here do suggest that it is a paradigm change that we are living through.


Albritton, B., Henley, G., and Treharne, E., eds. 2020. Medieval Manuscripts in the Digital Age. London.
Andrews, T. and Mace, C., eds. 2014. Analysis of Ancient and Medieval Texts and Manuscripts: Digital Approaches. Turnhout.
Arendes, C., Döring, K., and König, M., eds. 2020. Geschichtswissenschaft im 21. Jahrhundert. Interventionen zu aktuellen Debatten. Berlin.
Babeu, A. 2011. “Rome Wasn’t Digitized in a Day”: Building a Cyberinfrastructure for Digital Classics. Washington.
Bagnall, R.S. and Gagos, T. 2007. “The Advanced Papyrological Information System: Past, Present, and Future.” In Frosen, J. and Purola, T., eds. Proceedings of the 24th International Congress of Papyrology, 59–74. Helsinki.
Bastianini, G., ed. 2001. Atti del XXII Congresso Internazionale di Papirologia. Florence.
Berry, D.B. 2011. “The computational turn: thinking about the Digital Humanities.” Culture Machine 12: 1–22.
Berti, M., ed. 2019. Digital Classical Philology: Ancient Greek and Latin in the Digital Revolution. Berlin.
Berti, M. 2021. Digital Editions of Historical Fragmentary Texts. Heidelberg.
Bodard, G., Y. Broux, and S. Tarte, eds. 2016. Digital Approaches to the Ancient World. Paris.
Bodard, G. and M. Romanello, eds. 2016. Digital Classics Outside the Echo-Chamber: Teaching, Knowledge Exchange & Public Engagement. London.
Bolter, J.D. 1991. “The Computer, Hypertext, and Classical Studies.” The American Journal of Philology 112.4: 541–545.
Celano, G. 2019. “The Dependency Treebanks for Ancient Greek and Latin.” In Berti 2019, 279–298.
Celano, G., G. Crane, and S. Majdi. 2016. “Part of Speech Tagging for Ancient Greek.” Open Linguistics 2: 393–399.
Chronopoulos, S., F.K. Maier, and A. Novokhatko, eds. 2020. Digitale Altertumswissenschaften: Thesen und Debatten zu Methoden und Anwendungen. Heidelberg.
Da, N.Z. 2019a. “The Computational Case against Computational Literary Studies.” Critical Inquiry 45: 601–639.
———. 2019b. “The Digital Humanities Debacle.” Chronicle Review 27 March.
Dilley, P. 2016. “Digital Philology between Alexandria and Babel.” In Ancient Worlds in Digital Culture, eds. C. Clivaz, P. Dilley, D. Hamidovic, in collaboration with A. Thromas, 17–34. Leiden and Boston.
Dobson, J.E. 2019. Critical Digital Humanities: The Search for a Methodology. Urbana.
Dunn, S. and S. Mahony, eds. 2013. The Digital Classicist 2013. Paris.
Essler, H. and D. Riano Rufilanchas. 2016. “‘Aristarchus X’ and Philodemus: Digital Linguistic Analysis of a Herculanean Text Corpus.” In Urbanik 2016, 491–501.
Fiormonte, D. 2012. “Towards a cultural critique of the digital humanities.” Historical Social Research 37: 59–76.
Frosen, J. and T. Purola, eds. 2007. Proceedings of the 24th International Congress of Papyrology. Helsinki.
Gagos, T. 2001. “The University of Michigan Papyrus Collection: Current Trends and Future Perspectives.” In Bastianini, ed., 2001, 511–537.
Garside, R. 2016. Corpus Annotation: Linguistic Information from Computer Text Corpora. New York.
Gold, M.K. 2012. Debates in the Digital Humanities. Minnesota.
Hartmann, A. 2020. “Datenbanken in der Alten Geschichte – Beobachtungen aus der Alten Welt.” In Chronopoulos, Maier, and Novokhatko, eds., 2020, 169–190.
Iezzi, D.F., D. Mayaffre, M. Misuraca, eds. 2020. Text Analytics, Advances and Challenges. München.
Jannidis, F. 2020. “On the perceived complexity of literature. A response to Nan Z. Da.” Cultural Analytics 1: 1–12.
Jentsch, P. and S. Porada 2021. “From Text to Data: Digitization, Text Analysis and Corpus Linguistics”. In Schwandt, ed., 2021, 89-128.
Jockers, M.L. 2020. Text Analysis with R: For Students of Literature. München.
Juola, P. 2015. “The Rowling Case: A Proposed Standard Analytic Protocol for Authorship Questions.” Digital Scholarship in the Humanities 30: 101–113.
König, M. 2020. “Geschichte digital. Zehn Herausforderungen.” In Arendes, Döring, and König, eds., 2020, 67–77.
———. 2021. “Die digitale Transformation als reflexiver turn. Einführende Literatur zur digitalen Geschichte im Überblick.” Neue Politische Literatur 66: 37–60.
Kossek, B. and M. Peschl, eds. 2012. Digital Turn? Zum Einfluss digitaler Medien auf Wissensgenerierungsprozesse von Studierenden und Hochschullehrenden. Wien.
Lebart, L., C. Pincemin, and C. Poudat, eds. 2019. Analyse des données textuelles. Québec.
Maier, F.K. 2017. “Optimal ist digital? Eine Diskussion über digitale kritische Editionen.” Magazin für digitale Editionswissenschaft 3: 21–29.
Mambrini, F. 2016. “The Ancient Greek Dependency Treebank: Linguistic Annotation in a Teaching Environment.” In Bodard and Romanello, eds., 2016, 83–99.
Née, E. 2019. Méthodes et outils informatiques pour l’analyse des discours. Rennes.
Olson, D. 2020. “Between Two Worlds. Review of the Digital Edition of Summa de officiis ecclesiasticis.” In Chronopoulos, Maier, and Novokhatko, eds., 2020, 95–98.
Putnam, L. 2016. “The Transnational and the Text-Searchable: Digitized Sources and the Shadows They Cast.” The American Historical Review 121: 377–402.
Reggiani, N. 2017. Digital Papyrology I: Methods, Tools and Trends. Berlin.
———, ed. 2018a. Digital Papyrology II: Case Studies on the Digital Edition of Ancient Greek Papyri. Berlin.
———. 2018b. “The Corpus of the Greek Medical Papyri and a New Concept of Digital Critical Edition.” In Reggiani, ed., 2018a, 3–62.
Revellio, M. 2015. “Classics and the Digital Age: Advantages and Limitations of Digital Text Analysis in Classical Philology.” Konstanz Lit-LingLab Pamphlet 2.
Schubert, C., ed. 2011a. Das Portal eAQUA: Neue Methoden in der geisteswissenschaftlichen Forschung II. Leipzig.
———. 2011b. “Detailed Description of eAQUA Search Portal.” In Schubert, ed., 2011a, 33–53.
———. 2021. “Digital Humanities auf dem Weg zu einer Wissenschaftsmethodik: Transparenz und Fehlerkultur.” Digital Classics Online 7: 39–53.
Schwandt, S., ed. 2021. Digital Methods in the Humanities: Challenges, Ideas, Perspectives. Bielefeld.
Smith, N. 2016. “Morphological Analysis of Historical Languages.” Bulletin of the Institute of Classical Studies 59.2: 89–102.
Urbanik, J., ed. 2016. Proceedings of the 27th International Congress of Papyrology (Warsaw, 29 July – 3 August 2013). Warsaw.


[ back ] 1. See e.g., Jentsch and Porada (2021), Chronopoulos, Maier, and Novokhatko (2020), Berti (2019), Reggiani (2018a), Reggiani (2017), Bodard and Romanello (2016), Bodard, Broux, and Tarte (2016), Andrews and Mace (2014), Dunn and Mahony (2013), Babeu (2011). For an epistemological discussion of philology as a discipline from the analogue to the digital age, see Dilley (2016).
[ back ] 2. Hartmann (2020), Olson (2020).
[ back ] 3. Reggiani (2018b).
[ back ] 4. Chronopoulos, Maier, and Novokhatko (2020). However, a more balanced dialogue between digitally minded scholars and those who have not yet taken the digital path or are predominantly sceptical about digital tools is still needed. The vast majority of publications dealing with the digital transformation recruit their contributors mainly from digital scholars, something that hinders such a dialogue. On the tensions and challenges of editing fragmentary texts in print and digital form, see Berti (2021).
[ back ] 5. For digital text analysis and its aim of exploring and visualising diverse collections of texts such as poetry and literary treatises, political speeches, letters, archival documents, inscriptions, as well as various sources such as edited texts, manuscripts and papyri, see a detailed study by Lebart, Pincemin, and Poudat (2019). On interpreting manuscripts and interdisciplinary digital approaches to the study of the relationship between texts and their physical contexts, see Albritton, Henley, and Treharne (2020). See also Revellio (2015).
[ back ] 6. For the hermeneutics of text analysis, see Rockwell (2003). See also Née (2019), Iezzi, Mayaffre, and Misuraca (2020), and Jockers (2020).
[ back ] 10. Maier (2017).
[ back ] 12. Bagnall and Gagos (2007).
[ back ] 13. Bolter (1991).
[ back ] 14. Reggiani (2018b).
[ back ] 15. Reggiani (2018b), Gagos (2001).
[ back ] 16. Essler and Riano Rufilanchas (2016), 495.
[ back ] 19. Garside (2016).
[ back ] 20. Smith (2016).
[ back ] 22. Celano (2019), Reggiani (2017), 178–189, Mambrini (2016), Smith (2016).
[ back ] 23. Celano, Crane, and Majdi (2016), Babeu (2011).
[ back ] 25. Schubert (2011), Babeu (2011), 60–61.
[ back ] 26. Schwandt (2021), König (2020), Kossek and Peschl (2012), Berry (2011).
[ back ] 27. Most rigidly formulated by Da (2019b) and Da (2019a). Opposing views in Jannidis (2020) and Schubert (2021). Earlier discussions in Gold (2012), Fiormonte (2012), Dobson (2019), Chronopoulos, Maier, and Novokhatko (2020), Schwandt (2021).
[ back ] 28. König (2021).
[ back ] 29. Putnam (2016), 379.
[ back ] 30. E.g., Juola (2015).
[ back ] 31. Jannidis (2020).