Rethinking text, techné and tenure: Evaluation and peer review challenges around Virtual Research Environments in the Arts and Humanities


Erzsébet Tóth-Czifra
“Imagine if you were to stop being first and foremost a scholar for a little while in order to take a job in which you could do something that would be useful not just to your personal career, but to the whole scholarly community. What would be the focus, what would seem most useful to you?” (Baillot 2016)
“The method of production, rather than the published form that the resulting editions take, is the practice wherein lies most of the promised revolution within textual scholarship, but it has attracted considerably less attention than the question of digital publication.” (Andrews 2012)
In the past 10 years, virtual research environments (henceforth VREs) have become instrumental in developing new modes of digital editing, digital textual criticism, and manuscript study. By inviting the production of multimodal and collaborative editing tools, these online maker spaces enable scholars to ask completely new kinds of research questions. They open up possibilities to move beyond the limitations of legacy formats rooted in print culture. The quest to represent knowledge in its full complexity and diversity turns text corpora into open-ended, dynamic sets of digital collections that connect resources from different locations. These practices enable scholars to work with different layers of critical apparatus reconstructing polyphony (see Clivaz 2019), or they lead to greater transparency in scholarly argumentation, where researchers can clearly articulate uncertainties in their material (Edmond 2019), express a diversity of opinions, reconstruct or deconstruct canons, or prepare thick descriptions (Fenlon 2019) of primary and secondary sources.

Still, as long as traditional forms of journal and book publishing with prestigious legacy publishers serve as the highest value currency of academia (see e.g. Moore 2019, Eve 2020), neither the creation of virtual workspaces, which, as we can see, have become indispensable enablers of modern science, nor the digital scholarly objects produced in them are rewarded as genuine scholarly outputs. In this paper, I aim to make a small contribution to the pressing effort to re-harmonize research evaluation with novel research practices in the textual sciences and to initiate an open discussion towards a community consensus over what matters in the evaluation of VREs. Candela, Castelli, and Pagano’s now canonical study on the five core criteria of VREs (Candela, Castelli, and Pagano 2013) will serve as a starting point for the discussion, which will be structured around the following questions:

  • Why is critical engagement with VREs key to their formal recognition, also in terms of tenure and promotion?
  • What are the mechanisms for the integration of VREs into formal (and conventional) systems of research assessment so that scholars can receive proper credit for them?
  • What emerging future perspectives will shape the critical assessment of VREs, and how does it differ from the assessment of papers and books?
  • Is peer review the most appropriate evaluation framework for them?

The paper presents findings emerging from a specific task force of the OPERAS-P project [1] “Quality assessment of SSH research: innovations and challenges.” The task force aims to support the development of the relevant OPERAS [2] activities and services by informing them about current trends, gaps and community needs in research evaluation in the Social Sciences and Humanities. This entails (1) teasing out the underlying reasons behind the persistence of certain proxies in the system (such as the ‘impact factors of the mind’ that continue to assign tacit prestige to certain publishers and forms of scholarship) and (2) analysing emerging trends and future innovations in peer review activities within the Humanities domain. The methodology includes desk research, case studies, and in-depth interviews with 43 scholars about challenges, incentives, community practices and innovations in scholarly writing and peer review. In addition to the case studies and interviews, feedback on the topic from scholarly events, such as the VREs and Ancient Manuscripts Conference 2020 (Clivaz and Allen 2021), has also been integrated into our work.

Realigning research evaluation with research realities of digital scholarship

The need to establish and strengthen evaluative cultures and frameworks around complex digital scholarly objects (digital critical editions, networked monographs, databases, software, participatory project websites, data visualizations, and other research tools and services of different sorts) is a long-standing challenge in Digital Humanities (see e.g. Krause 2007, Fitzpatrick 2011, Risam 2014). Although we can see the emergence of guidelines and manifestos [3] aiming to provide practical guidance on the quality assessment of digital scholarship and on its inclusion in academic tenure and promotion guidelines, in reality, digital scholarly objects that cannot be placed on a bookshelf remain largely out of sight of research evaluation and recognition (Eve 2020). [4]
The earliest attempts to change this for the better and to establish the genre of tool criticism within Digital Humanities started out from the recognition that peer review is an absolute prerequisite for the inclusion of digital scholarship in the formal systems of research assessment and administration, systems that remain rooted in the conventions of print scholarship and traditional publication venues. [5] Strengthening the evaluative discourse around the many kinds of digital scholarly outputs thus became key to enabling tool makers, creators of digital critical editions, data curators, and others to list their contributions in their CVs and to have them recognized as genuine forms of scholarship.
An early effort to develop a framework to critically discuss (‘read’) digital objects and to accommodate digital scholarship, alongside research papers, in the well-established institution of peer review can be seen in Stan Ruecker and Alan Galey’s study “How a Prototype Argues” (Galey and Ruecker 2010). With the benefit of 11 years of hindsight, it is clear how closely this early approach follows the tradition of classical book and article reviews. For instance, we see a great emphasis on the continuity of the scholarly discourse, as a central claim of the paper is that a digital tool is eligible for peer review if it makes or exhibits a scholarly argument. And although the focus of this work is clearly to fit digital scholarship into the well-established criteria for assessing the quality of papers, certain points where difficulties or complexities manifest in such a mapping already highlight the special flavours of peer review of born-digital scholarly objects. For instance, Galey and Ruecker recognize that authorship (i.e. who produced a tool) is a much more complex and less straightforward notion in the context of digital tools and environments than in the context of scholarly writings. Similarly, due to the evolving nature of digital scholarship, finding a fixed, optimal, single point in its life cycle for evaluation is a much less straightforward choice than in the case of journal or book publications.

Emerging frameworks for the critical assessment of VREs

Another crucial step towards taking the evaluation of VREs to the next level is to define them and position them within the generic discourse of tool criticism by enabling a clear distinction between what is a VRE and what is not. To this end, Candela, Castelli, and Pagano’s now seminal 2013 study (Candela, Castelli, and Pagano 2013) defines five core criteria of VREs:

  1. It is a web-based working environment;
  2. It is tailored to serve the needs of a community of practice;
  3. It is expected to provide a community of practice with the whole array of commodities needed to accomplish the community’s goal(s);
  4. It is open and flexible with respect to the overall service offering and lifetime;
  5. It promotes fine-grained controlled sharing of both intermediate and final research results by guaranteeing ownership, provenance and attribution.

Their attempt to define VREs inevitably signals expectations towards their optimal design and implementation and can therefore be read as a set of preliminary measures that guide their evaluation. As such, we see a remarkable shift from paper-centric evaluation criteria towards taking into account the specificities of digital scholarship. First, the situatedness and non-neutrality of VREs is clearly signalled. That is, the quality of a VRE is not an absolute value but essentially depends on the context and purpose of the environment and on the research questions that gave rise to it. Closely related to this, we see a strong emphasis on the defining role of community aspects and on how VREs are primarily designed to serve certain communities and their scholarly needs of different kinds (e.g. access to primary sources; data discovery, processing, analysis, enrichment, visualization and publication; exchanges with peers, etc.). The extent to which a VRE remains sensitive and flexible to community needs of different kinds became a widely recognized, recurrent key quality measure in later approaches as well. The following excerpt from a discussion in the context of the DESIR project, a project primarily dedicated to connecting infrastructures and knowledge bases for Digital Humanists across Europe, exemplifies this well:

It is important to remember that interfaces and infrastructures can also affect the relations between the various communities involved in DH efforts. A VRE can socially separate computer scientists from programmers, humanist scholars and designers, if each group works separately in a different coding language or a different mode of the given environment, yet it can also connect them when it is porous enough to enable knowledge exchange between its different facets. (Tasovac et al. 2018:29)

Next, establishing publication venues specifically dedicated to tools and environments in Digital Humanities and Cultural Heritage studies is certainly a big step in making them more visible and an integral part of the scholarly discourse. In this respect, the launch of the first such journal, RIDE (Review Journal of the Institute for Documentology and Scholarly Editing), in Germany in 2014 clearly marks a new era. Since then, RIDE has been providing an increasingly renowned platform for experts to evaluate and discuss current practices, tools, and environments. The novelty of the journal’s approach to tool criticism is that it evaluates not only the traditional aspects of scholarship that we saw in the first peer review attempts but also methodological and technical aspects. To this end, the journal has a detailed system of criteria in place for reviewing scholarly digital editions and resources, one that reflects the complexity of digital tools and environments but is still generic and high-level enough to be applicable to a wide range of artefacts (‘Criteria for Reviewing Digital Editions and Resources – RIDE’ n.d.).

These criteria range from specific characteristics close to the research questions in which tools and environments are situated, such as their scope, aims, and methods (covering, for instance, technical decisions about text modelling), to more generic, infrastructure-oriented criteria such as usability, sustainability, and user interaction.
Apart from strengthening the discourse around the quality assessment of Digital Humanities tools and environments, the journal also serves as an important instrument for gaining peer recognition for them and embedding them in the scholarly citation system, which is an absolute necessity for receiving proper academic credit for them. [6] In doing so, however, the journal also points to the difficulties of gaining academic credit for tools and VREs without aligning them with the well-established format of the scholarly journal and all its information management entailments (discovery, indexing, and citation tracking systems that are optimized for papers only and are not inclusive of research tools). This clearly showcases the compromises involved in gaining recognition for digital tools and environments on their own terms: one needs to “gift wrap” them as tool papers in order to integrate them into formal research assessment and administration systems. [7]

The shortage of evaluative labour: Is peer review the most appropriate evaluation mechanism for the quality assessment of VREs?

The deep embeddedness of peer review in the traditional paradigm of the research article, which persistently serves as the highest value currency of academia, is not the only major difficulty that tool makers face in their struggle to gain formal recognition for born-digital scholarly objects as genuine scholarly outputs. The shortage of evaluative capacities is another, at least equally crucial, factor that prevents the large-scale extension of peer review to tools, environments, code, and scholarly data of different kinds. As the volume of scholarly publications continues to increase year by year (see e.g. Bornmann and Mutz 2015), not independently of the publish-or-perish pressure on scholars (Plume and van Weijen 2014) that plays a significant role in the conservation of traditional proxies of research excellence, scholarly publication venues are increasingly challenged in their capacity to find and attract reviewers, especially for highly specialized, niche topics. In light of the difficulties that traditional peer review mechanisms are already facing, their large-scale extension to a wide range of novel digital scholarly objects seems unrealistic, especially considering that the proper evaluation of these complex scholarly outputs requires very specific knowledge, usually located at the intersection of different knowledge areas: a specific Humanities discipline, information technology, data science, or infrastructure engineering. These difficulties around integrating digital scholarly artefacts into conventional research assessment mechanisms are clearly reflected in the following interview statements from Digital Humanities scholars:

“The problem is connected with the bottleneck of peer review. Peer reviewers are researchers and are just humans. Some of them are familiar with the new ways of preparing and distributing knowledge or research results, but others are not. The question is if someone is capable of reviewing someone’s paper that could be very innovative and creative with the way technology is used, is that person capable of actually following the idea, so we have the discrepancy.”
(UNIZD04) [8]
“And really the labor involved in evaluating these things just goes through the roof. And I just don’t think people are going to have time to do that kind of evaluation for every piece of digital scholarship that emerges in the next few years. So I think there’s a looming crisis for the labour of peer review.”
(DAE003) [9]

On the other hand, critical engagement with research tools and environments is not limited to peer review. In the last couple of years, especially on the European research policy horizon, we have seen the emergence of certification and evaluation frameworks that do not directly originate from the scholarly discourse. These frameworks focus more on the infrastructural maturity of services, such as clearly defined access protocols, long-term availability, and the interoperability of standards, in order to give science funders an idea of the sustainability and reliability of the many services with minimal administrative effort. An important consideration behind them is to set a quality threshold for the agile production of digital scholarly artefacts that can be connected to each other, instead of creating knowledge silos that can only be used in the context of one single project for a short period of time.

Of these, probably the most well-known, global framework is organized around the FAIR principles (Wilkinson et al. 2016). [10] Since their inception in 2014, the FAIR principles have become a synonym for good practice in the high-quality and sustainable production, collection, and curation of digital artefacts across funders, infrastructure providers, policy bodies, and scholarly communities worldwide. Originally, the principles were designed to enable the large-scale reusability and quality assessment of research data; they were then gradually extended to other artefacts and content types. Lately, evaluation frameworks for FAIR training materials (Garcia et al. 2020), FAIR software (Lamprecht et al. 2020), and FAIR services (Genova et al. 2020) have also become available.
In alignment with the latter, it is worth mentioning another certification framework specifically developed for the quality assessment of research repositories: the CoreTrustSeal Data Repository Certification system. [11] It comprises a set of guidelines ensuring that data repositories can be trusted as stewards of research data for the long term. These guidelines are based on the DSA-WDS Core Trustworthy Data Repositories Requirements [12] and cover 15 areas, such as data access, licenses, workflow, and data integrity. Data repositories can use these guidelines for self-assessment but can also apply for CoreTrustSeal certification to demonstrate that they have reached a gold standard in the application of good repository management practices. Considering that the use of trustworthy repositories is a cornerstone of FAIR data practice (indeed, research funders increasingly require their beneficiaries to share their data in trustworthy repositories [13]), we can expect such certification to become more and more important in the future landscape of research data services.
The FAIR frameworks do not yet have such widely agreed, standardized assessment protocols in place. In a first iteration, the four core principles forming the FAIR acronym were elaborated into 15 sub-principles. [14] This set of sub-principles forms the basis of the ongoing work on FAIR maturity matrices (Wilkinson et al. 2019), pointing towards elaborated, standardized, multiple-tier certification systems that research funders could use as assessment tools in the future.
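To give a rough sense of what such an assessment might look like in practice, the sketch below runs a single metadata record, as it might be exported from a VRE, through a handful of FAIR-inspired indicators in Python. The field names, indicator labels, thresholds, and the example record are hypothetical simplifications introduced purely for illustration; they are not the official FAIR sub-principles or the maturity indicators discussed in Wilkinson et al. 2019.

# A minimal, illustrative FAIR self-check for a single metadata record.
# Field names and indicator labels are hypothetical simplifications,
# not the official FAIR sub-principles or maturity indicators.

def fair_self_check(record: dict) -> dict:
    """Return a pass/fail report for a few FAIR-inspired indicators."""
    report = {
        "findable_persistent_identifier": bool(record.get("pid")),
        "findable_rich_metadata": len(record.get("metadata", {})) >= 3,
        "accessible_open_protocol": str(record.get("access_url", "")).startswith("https://"),
        "interoperable_community_schema": record.get("metadata_schema") in {"TEI", "Dublin Core", "EAD"},
        "reusable_explicit_license": bool(record.get("license")),
        "reusable_provenance": bool(record.get("provenance")),
    }
    report["passed"] = sum(report.values())  # simple count of satisfied indicators
    return report

# A hypothetical record, invented for the example.
example_record = {
    "pid": "https://doi.org/10.5281/zenodo.0000000",        # placeholder identifier
    "metadata": {"title": "…", "creator": "…", "language": "…"},
    "metadata_schema": "TEI",
    "access_url": "https://example-vre.org/api/records/42",  # invented URL
    "license": "CC BY 4.0",
    "provenance": "digitised from a manuscript held by a partner library",
}

print(fair_self_check(example_record))

Real maturity indicators are, of course, far richer and community-negotiated; the point of the sketch is only that such checks can in principle be made machine-actionable and repeatable.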

FAIR play? The central role of disciplinary cultures in the successful implementation of the FAIR frameworks

Although the FAIR principles started out as a high-level, discipline-agnostic assessment framework for digital scholarly objects, it soon became clear that a crucial success criterion for their implementation would be the extent to which different scholarly communities are involved in their domain- and discipline-specific implementations (see e.g. Deniz et al. 2020). Depending on the level of involvement of disciplinary communities and the quality of interactions between them and policy and standardization bodies, the FAIR frameworks can serve as flexible opportunities for scholarly communities to innovate and enrich their practices of digital scholarship and to accommodate their own assessment needs in specific instances of FAIR evaluation frameworks. In the absence of such community involvement, however, there is a serious chance that the FAIR principles and all the frameworks built on them will remain practices alien to research realities.
Very recently, we have seen reflections, critical interventions, and assessment exercises from around Digital Humanities as well (Harrower et al. 2020; Tóth-Czifra 2020a; Rojas Castro 2020), highlighting conceptual, financial, and infrastructural challenges, but also opportunities, in realising the promises of FAIR within discipline-specific scholarly practices. Antonio Rojas Castro’s recent publication, FAIR Enough? Building DH Resources in an Unequal World, contains a critical FAIR assessment in the context of a text-oriented Digital Humanities project, Humboldt Digital (aiming for the philological development and computational analysis of sources around Alexander von Humboldt’s Cuban research in the period 1800–1830), [15] and of a virtual learning environment for Humanists: the Programming Historian. [16] Both yield concrete, practical insights that can serve as a blueprint for the FAIR assessment of VREs. Apart from illustrating how the 15 sub-principles of FAIR manifest themselves in these two projects, he also touches upon a range of ethical and sustainability issues. For example, he explores how not to replicate dominant power structures while creating technologically mature FAIR resources, how to overcome inequalities in open access to infrastructure, training, and skills, the extent to which funding is a prerequisite for FAIR compliance, how to overcome creator-centric approaches when describing cultural resources, and who is responsible for making such decisions within a research project.

The complexity of evaluation of this kind is also reflected in the fact that, in the context of VREs, a FAIR assessment can manifest itself on two levels (a rough checklist sketch follows the list below):

  1. Since the design of a VRE defines the quality of the outputs made in it, and in many cases also the access to them and to their primary sources, a first-level assessment could focus on how the VRE enables FAIR data collection, creation, curation, and publication (e.g. whether provenance and copyright information is kept clear, whether the data model and data standards applied are in line with community standards, whether access conditions to primary, interim, and secondary resources are clearly defined, etc.).
  2. A second-level assessment could address the VRE itself as a complex digital scholarly object (e.g. its connectivity to data repositories, its maturity of operation, clear licensing, etc.).
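As a rough, purely illustrative sketch, the two levels could be operationalised as a simple review checklist, here expressed in Python. The questions paraphrase the criteria listed above and do not constitute a standardised evaluation instrument.

# A rough sketch of the two assessment levels above expressed as a reusable
# review checklist. The questions are illustrative paraphrases of the criteria
# in the text, not a standardised instrument.

VRE_FAIR_CHECKLIST = {
    "Level 1: FAIR outputs produced in the VRE": [
        "Is provenance and copyright information kept for every resource?",
        "Do the data model and annotation schemas follow community standards?",
        "Are access conditions to primary, interim and secondary resources clearly defined?",
    ],
    "Level 2: the VRE as a digital scholarly object": [
        "Is the VRE connected to trustworthy data repositories?",
        "Is the licensing of the environment and its components explicit?",
        "Is its maturity of operation (versioning, support, long-term availability) documented?",
    ],
}

def print_review_template(checklist: dict) -> None:
    """Render the checklist as a plain-text review template."""
    for level, questions in checklist.items():
        print(level)
        for question in questions:
            print(f"  [ ] {question}")

print_review_template(VRE_FAIR_CHECKLIST)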

It is yet to be seen whether and to what extent communities around VREs will recognize FAIR or certification systems as alternative evaluation frameworks that might complement traditional peer review in the future. If translated, adopted, and deeply embedded in research realities, these emerging alternative assessment frameworks carry the potential to address some of the crises peer review is facing in the digital realm and to put in place novel mechanisms that allow for the recognition of digital scholarly objects on their own terms, without the paper-centric legacy of traditional peer review. While the primary aim of FAIR and other certification frameworks is to assess the technical and legal maturity of scholarly objects and environments as parts of our digital commonwealth, it is still peer review practices of different kinds that anchor them in scholarship (examining their epistemic markedness, the arguments they convey, etc.); the two types of assessment are therefore complementary.

Conclusion and a proposal for a synthesis

This brief overview of the two major approaches in which the quality assessment of VREs is or can be situated suggests that these approaches, the one rooted in public peer review and the certification-oriented one, can be expected to develop in parallel, and that the circumstances of specific endeavours will determine which one makes more sense to pursue. For instance, in the case of externally funded projects, FAIRness is likely to play an increasingly important role.
The overview also aimed to highlight some of the complexities around the evaluation of VREs. Part of these complexities stems from the dual nature of these environments: VREs can equally be read and interpreted as instances of digital scholarship and as infrastructures that enable the production of digital scholarship. The taxonomy below aims to highlight this specificity by providing a rough but multifaceted synthesis of the elements of the quality assessment of VREs that we have seen above, ranging from specific to generic.

1. The immediate context of the VRE: the research question it has been designed for

As already highlighted above, the quality of a scholarly object cannot be taken as an absolute value; it depends on the goals the user wants to achieve with it. An assessment of the extent to which a VRE is appropriate for investigating a certain research question could include aspects such as: Is the VRE applicable to a wide range of research questions or to one particular research question? Are the data types and annotation schemas/metadata fields suitable to accommodate domain-specific information relevant to the research project (e.g. languages, paper size, direction of fibres, etc.)? These aspects define the scope of the VRE.

2. The broader epistemological context and epistemological markedness of the VRE

The second layer continues to call attention to the situatedness of the creation of the VRE and aims to uncover certain assumptions of its builders, or the scholarly paradigm the VRE is embedded in. Paying attention to the ways in which information is organized and categorized within the VRE (e.g. in its data model) and to the perspectives this arrangement imposes on the objects of study belongs here, but so do questions like: How do the graphical interface(s) of the VRE foreground or background certain functionalities? How does the VRE facilitate multiple interpretations? What are the underlying mechanisms in data visualization? How does the VRE allow for going beyond the linear nature of the book? How does the VRE enable collaboration (e.g. real-time editing, multimodality, validation tools and mechanisms in place), and how inclusive is it of users coming from different backgrounds and with different levels of computational skill?

3. Infrastructural maturity

In addition to the scholarship-oriented criteria, the maturity and sustainability of the VRE as infrastructure can also form a layer of assessment. Investigation of its technical and social sustainability (see Edmond and Morselli 2020), the business model behind it, and its technological readiness level (e.g. whether it operates as a prototype, beta, or production service; whether it has a PID system in place; whether the long-term availability of data in the VRE is guaranteed) belongs here, as does the question of whether rich documentation of the VRE (including, among other things, documentation of access conditions, intellectual property rights management, and citability) is available, and whether training materials and support or a helpdesk for the VRE are provided.

4. Connectivity and community uptake

Finally, it would be a mistake to lose sight of community uptake, a defining and probably the most powerful success criterion of any infrastructure, platform, or environment. This social aspect yields two sub-dimensions: user engagement (including usage metrics and other indicators of community uptake) and whether the VRE can achieve interoperability with and connect to other systems and environments. The latter includes, first, connectivity within the research field in which the VRE is embedded, that is, whether it achieves interoperability with other tools, environments, and data standards applied in the field (i.e. whether it builds on existing data models, whether the content of the VRE can be connected to other infrastructural components, for instance via APIs, whether there is collaboration with other projects in the field, how the VRE facilitates access to primary and secondary resources across the field, etc.). The second level of connectivity concerns the bigger scholarly ecosystem, such as data or software repositories, discovery platforms, or service catalogues. In this context, the following questions are relevant: Is the VRE open source or proprietary? Is the data in the VRE harvestable and accessible by other services (e.g. through export functionalities)? Does the VRE adopt encoding practices that prioritize compatibility and interoperability wherever possible? Are external tools easy to integrate? How is the re-usability of material in, or of, the VRE enabled? How does the VRE itself allow for the investigation of a range of research questions?
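As a concrete, if narrow, illustration of machine-level connectivity, the following Python sketch checks whether a service answers a standard OAI-PMH Identify request, one common way of making metadata harvestable by other services. The endpoint URL is invented for the example, and OAI-PMH is of course only one of many possible interoperability layers a VRE might offer.

# A minimal probe of machine-level connectivity, assuming the (hypothetical)
# VRE exposes an OAI-PMH endpoint for metadata harvesting. The endpoint URL
# below is invented for illustration.

import requests

def is_harvestable_via_oai_pmh(endpoint: str) -> bool:
    """Return True if the endpoint answers a standard OAI-PMH 'Identify' request."""
    try:
        response = requests.get(endpoint, params={"verb": "Identify"}, timeout=10)
    except requests.RequestException:
        return False
    # A conforming repository responds with XML containing an <Identify> element.
    return response.ok and "<Identify>" in response.text

if __name__ == "__main__":
    print(is_harvestable_via_oai_pmh("https://example-vre.org/oai"))  # hypothetical endpoint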
The dimensions outlined above are not absolute measures; they exhibit different levels of granularity and different degrees of difficulty to address. They highlight some of the key elements that are relevant for the evaluation of a VRE and, at the same time, showcase the complexity of the exercise. Strengthening the consensus around evaluative frameworks specifically dedicated to VREs (either within or outside the institution of peer review) is an essential precondition for their recognition as genuine scholarly outputs in their own right and for their inclusion in academic tenure and promotion criteria. The VREs and Ancient Manuscripts Conference 2020 provided significant contributions towards paving the way for evaluation criteria for VREs. However, it seems that the most important challenge in this respect is capacity building for evaluative labour. Only by strengthening evaluative communities around VREs for textual criticism can we eliminate the biggest barrier to the mass adoption of open and digital data sharing and appropriately reward the scholarly practices that are the cornerstones of the advancement of modern scholarship.

Bibliography

Andrews, T. L. 2012. “The Third Way: Philology and Critical Edition in the Digital Age.” The Journal of the European Society for Textual Scholarship. https://www.academia.edu/2510270/The_Third_Way_Philology_and_Critical_Edition_in_the_Digital_Age.
Baillot, A. 2016. “A Certification Model for Digital Scholarly Editions.” https://halshs.archives-ouvertes.fr/halshs-01392880.
Ball, C. L., C. E. Lamanna, C. Saper, and M. Day. 2016. “Annotated Bibliography on Evaluating Digital Scholarship for Tenure & Promotion.” Kairos: A Journal of Rhetoric, Technology, and Pedagogy. https://kairos.technorhetoric.net/stasis/2016/ball-et-al/Evaluating_Digital_Scholarship_for_Tenure.pdf.
Bornmann, L., and R. Mutz. 2015. “Growth Rates of Modern Science: A Bibliometric Analysis Based on the Number of Publications and Cited References.” Journal of the Association for Information Science and Technology 66(11): 2215–2222. https://doi.org/10.1002/asi.23329.
Candela, L., D. Castelli, and P. Pagano. 2013. “Virtual Research Environments: An Overview and a Research Agenda.” Data Science Journal 12:GRDI75–81. https://doi.org/10.2481/dsj.GRDI-013.
Clivaz, C. 2019. “The Impact of Digital Research: Thinking about the MARK16 Project.” Open Theology 5(1): 1–12. https://www.degruyter.com/document/doi/10.1515/opth-2019-0001/html.
———, and G. Allen. 2021. “Introduction.” In “Ancient Manuscripts and Virtual Research Environments,” ed. C. Clivaz and G. Allen, special issue, Classics@ 18. https://classics-at.chs.harvard.edu/volume/classics18-ancient-manuscripts-and-virtual-research-environments/.
“Criteria for Reviewing Digital Editions and Resources – RIDE.” n.d. Accessed 28 February 2021. https://ride.i-d-e.de/reviewers/catalogue-criteria-for-reviewing-digital-editions-and-resources/.
Dorofeeva, A. 2014. “Towards Digital Humanities Tool Criticism.” Master Thesis, Universitat Leiden. Leiden. https://studenttheses.universiteitleiden.nl/access/item%3A2661339/view.
Edmond, J. 2019. “Strategies and Recommendations for the Management of Uncertainty in Research Tools and Environments for Digital History.” Informatics 6(3):36. https://doi.org/10.3390/informatics6030036.
Edmond, J., and F. Morselli. 2020. “Sustainability of Digital Humanities Projects as a Publication and Documentation Challenge.” Journal of Documentation 76(5): 1019–1031. https://doi.org/10.1108/JD-12-2019-0232.
Eve, M. P. 2020. “Violins in the Subway: Scarcity Correlations, Evaluative Cultures, and Disciplinary Authority in the Digital Humanities.” In Digital Technology and the Practices of Humanities Research, ed. J. Edmond. Cambridge. https://doi.org/10.11647/OBP.0192.06.
Fenlon, K. 2019. “Interactivity, Distributed Workflows, and Thick Provenance: A Review of Challenges Confronting Digital Humanities Research Objects.” Zenodo. https://doi.org/10.5281/zenodo.3459770.
Fitzpatrick, K. 2011. Planned Obsolescence: Publishing, Technology, and the Future of the Academy. New York.
Fyfe, A., K. Coate, S. Curry, S. Lawson, N. Moxham, and C. Mørk Røstvik. 2017. “Untangling Academic Publishing: A History of the Relationship between Commercial Interests, Academic Prestige and the Circulation of Research.” Zenodo. https://doi.org/10.5281/zenodo.546100.
Galey, A., and S. Ruecker. 2010. “How a Prototype Argues.” Literary and Linguistic Computing 25(4):405–424. https://doi.org/10.1093/llc/fqq021.
Garcia, L., B. Batut, M. L. Burke, M. Kuzak, F. Psomopoulos, R. Arcila, T. K. Attwood, et al. 2020. “Ten Simple Rules for Making Training Materials FAIR.” PLOS Computational Biology 16(5):e1007854. https://doi.org/10.1371/journal.pcbi.1007854.
Genova, F., J. M. Aronsen, O. Beyan, N. Harrower, A. Holl, P. Principe, A. Slavec, and S. Jones. n.d. Recommendations on Certifying Services Required to Enable FAIR within EOSC. Publications Office of the European Union. Luxembourg.
Harrower, N., B. Immenhauser, G. Lauer, M. Maryl, T. Orlandi, B. Rentier, and E. Wandl-Vogt. 2020. “Sustainable and FAIR Data Sharing in the Humanities.” ALLEA-All European Academies. Berlin.
Juric, M. 2020. “Transcript Interview UNIZD 04 (H2020 OPERAS-P)” (text). Nakala. https://doi.org/10.34847/nkl.bc25h4v7.
Koers, H., M. Gruenpeter, P. Herterich, R. Hooft, S. Jones, J. Parland-von Essen, and C. Staiger. 2020. “Assessment report on ‘FAIRness of services’ (Version 1.0).” Zenodo. https://zenodo.org/record/3688762.
Krause, S. 2007. “Where Do I List This on My CV? Considering the Values of Self-Published Web Sites.” https://kairos.technorhetoric.net/12.1/topoi/krause/version2.html.
Lamprecht, A.-L., L. Garcia, M. Kuzak, C. Martinez, R. Arcila, E. Martin Del Pico, V. Dominguez Del Angel, et al. 2020. “Towards FAIR Principles For Research Software.” Data Science 3(1):37–59. https://doi.org/10.3233/DS-190026.
Moore, S. 2019. “No Amount of Open Access Will Fix the Broken Job Market.” Samuel Moore (blog). https://www.samuelmoore.org/2019/04/12/no-amount-of-open-access-will-fix-the-broken-job-market/.
Moore, S., C. Neylon, M. P. Eve, D. P. O’Donnell, and D. Pattinson. 2017. “‘Excellence R Us’: University Research and the Fetishisation of Excellence.” Palgrave Communications 3(1):1–13. https://doi.org/10.1057/palcomms.2016.105.
Mustajoki, H., J. Pölönen, K. Gregory, D. Ivanović, V. Brasse, J. Kesäniemi, and E. Pylvänäinen. 2021. “Making FAIReR assessments possible. Final report of EOSC Co-Creation projects: ‘European overview of career merit systems’ and ‘Vision for research data in research careers’.” Zenodo. https://zenodo.org/record/4701375.
Oravecz, C., T. Váradi, and B. Sass. 2014. “The Hungarian Gigaword Corpus.” In Proceedings of LREC 2014, ed. N. Calzolari, K. Choukri, T. Declerck, et al. http://www.lrec-conf.org/proceedings/lrec2014/pdf/681_Paper.pdf.
Plume, A., and D. van Weijen. 2014. “Publish or Perish? The Rise of the Fractional Author… – Research Trends.” https://www.researchtrends.com/issue-38-september-2014/publish-or-perish-the-rise-of-the-fractional-author/.
Risam, R. 2014. “Rethinking Peer Review in the Age of Digital Humanities.” https://digitalcommons.salemstate.edu/cgi/viewcontent.cgi?article=1002&context=english_facpub.
Rojas Castro, A. 2020. “FAIR Enough? Building DH Resources in an Unequal World.” https://hcommons.org/deposits/item/hc:32187/.
Schreibman, S., L. Mandell, and S. Olsen. 2011. “Introduction – Evaluating Digital Scholarship.” Profession, 123–135. https://www.jstor.org/stable/41714114.
Stanton, D. C., M. Bérubé, and L. Cassuto. 2007. “Report on Evaluating Scholarship for Tenure and Promotion.” Modern Language Association. https://www.mla.org/Resources/Research/Surveys-Reports-and-Other-Documents/Publishing-and-Scholarship/Report-on-Evaluating-Scholarship-for-Tenure-and-Promotion.
Takats, S. 2013. “A Digital Humanities Tenure Case, Part 2: Letters and Committees.” Quintessence of Ham (blog). http://quintessenceofham.org/2013/02/07/a-digital-humanities-tenure-case-part-2-letters-and-committees/.
Tasovac, T., R. Barthauer, S. Buddenbohm, C. Clivaz, S. Ros, and M. Raciti. 2018. “D7.1 Report about the Skills Base across Existing and New DARIAH Communities.” Technical Report. Belgrade Center of Digital Humanities; DARIAH. https://hal.archives-ouvertes.fr/hal-01857379.
Tennant, J. P., J. M. Dugan, D. Graziotin, D. C. Jacques, F. Waldner, D. Mietchen, Y. Elkhatib, et al. 2017. “A Multi-Disciplinary Perspective on Emergent and Future Innovations in Peer Review.” F1000Research 6:1151. https://doi.org/10.12688/f1000research.12037.3.
Tóth-Czifra, E. 2020a. “10. The Risk of Losing the Thick Description: Data Management Challenges Faced by the Arts and Humanities in the Evolving FAIR Data Ecosystem.” In In Digital Technology and the Practices of Humanities Research, ed. J. Edmond. Cambridge. https://doi.org/10.11647/OBP.0192.06.
———. 2020b. “Transcript Interview DAE03 (H2020 OPERAS-P).” (text). Nakala. https://doi.org/10.34847/nkl.3e9bjm89.
Wilkinson, M. D., M. Dumontier, I. J. Aalbersberg, G. Appleton, M. Axton, A. Baak, N. Blomberg, et al. 2016. “The FAIR Guiding Principles for Scientific Data Management and Stewardship.” Scientific Data 3, article 160018. https://doi.org/10.1038/sdata.2016.18.
Zundert, J. J. van, S. Antonijević, and T. L. Andrews. 2020. “6. ‘Black Boxes’ and True Colour — A Rhetoric of Scholarly Code.” In Digital Technology and the Practices of Humanities Research, ed. J. Edmond. Cambridge. https://doi.org/10.11647/OBP.0192.06.
Zundert, J. van. 2016. “Close Reading and Slow Programming — Computer Code as Digital Scholarly Editions.” Presentation given at the ESTS-DiXiT Conference, Antwerp, 5–7 October 2016. https://www.semanticscholar.org/paper/Close-Reading-and-Slow-Programming-%E2%80%94-Computer-Code-Zundert-Joris/4630d054463630fa18c1dae4d3878ff388f157ed.

Footnotes

3. The Modern Language Association (MLA)’s dedicated task force on evaluating scholarship for tenure and promotion and its outcomes (Stanton, Bérubé, and Cassuto 2007), the Evaluating Digital Scholarship special issue of the journal Profession (Schreibman, Mandell, and Olsen 2011), and Ball et al.’s “Annotated Bibliography on Evaluating Digital Scholarship for Tenure & Promotion” (2016) mark milestones in this process and still function as reference works for anyone aiming to start a conversation about the inclusion of digital scholarship in the formal assessment criteria of their institution.
4. A striking example is the tenure case of the Zotero co-director Sean Takats: it was his monograph in French history that served as the basis of his tenure evaluation in 2013, while the reference management system that had become a widely used, cornerstone infrastructural component of scholarly writing across continents and disciplinary boundaries was only marginally considered (Takats 2013, cited by Dorofeeva 2014).
5. Tennant et al. 2017, Moore et al. 2017, and Fyfe et al. 2017 highlight both the complexities and severe consequences of the close associations between excellence and journal and publication prestige in the academic prestige economy.
6. It is worth mentioning here that RIDE is not a standalone effort but is embedded in an emerging tool criticism culture (see e.g. Zundert 2016; Zundert, Antonijević, and Andrews 2020) on the one hand, and, on the other, in the context of venues such as OpenMethods (https://openmethods.dariah.eu/) or the Journal of Open Source Software (https://joss.theoj.org/), which come with similar aims but are much less integrated into formal scholarly communication.
7. A similar tendency can be observed with scholarly data publications: for instance, the suggested citation for the Hungarian National Corpus points to a paper, Oravecz, Váradi, and Sass 2014.
8. Juric 2020.
9. Tóth-Czifra 2020b. Interview segment from the corpus of 42 interviews collected with 44 scholars, 32 of which have been transcribed and analysed within the framework of the WP 6.5 and WP 6.6 tasks of the OPERAS-P project and 9 of which have been summarized. The pseudonymized transcriptions and their encodings are progressively made available as open data in the NAKALA repository.
10. The acronym stands for: Findable, Accessible, Interoperable, Reusable.
13. See e.g. Tsoukala 2019.