
What’s in a number? A closer look at Open Access readership data, Part One

This is Part One of a two-part series on Open Access Readership Data. For Part Two, click here.


Open Access (OA) literature is freely available online to be read by anyone, anytime, and anywhere. It is a publishing model that offers an alternative to the traditional method of publishing scholarly output behind a paywall, where articles and books are inaccessible to everyone except a select few with a library subscription or the money to pay exorbitant individual fees. Broadly speaking, there are two routes to an Open Access publication: the first is Green OA (the author publishes behind a paywall and self-archives a digital copy in a free online repository); the second is Gold OA (the author publishes an article or book immediately in OA, making it directly available to the public at no cost to the reader).1 Whereas academic journals offering Gold OA options have become widespread in the last decade, the transition to Open Access for academic books is lagging behind, even though monographs are still the leading publishing format in the Humanities and Social Sciences. To boost the publication of OA books, KU Leuven Libraries reserved a substantial part of the KU Leuven Fund for Fair Open Access, established in 2018, to help finance OA books published by Leuven University Press (LUP).

KU Leuven Fund for Fair Open Access

The KU Leuven Fund for Fair Open Access was founded to foster a sustainable implementation of OA according to the principles of Fair OA. To this end, the Fund supports various non-commercial and community-owned OA initiatives and infrastructures. The partnership with LUP fits perfectly within this objective. Leuven University Press charges a cost-covering publication fee for OA books (the so-called book processing charge, or BPC), and is therefore in line with the Fund's central aim of endorsing only non-profit Open Access. What is more, LUP does not impose a traditional author-publisher contract, whereby the authors cede their rights to the publisher; instead, it offers an OA license agreement in which the authors retain copyright.

Authors affiliated with KU Leuven typically apply for a subsidy amounting to two-thirds of the OA costs charged by LUP; authors who are not affiliated with KU Leuven can apply for a subsidy of up to one-third of the costs. Applications for OA book publications are reviewed quarterly by a committee consisting of representatives of KU Leuven Libraries, Leuven University Press, the Research Coordination Office, and academics from different disciplines; the committee is presided over by KU Leuven's vice rector for research policy. Open Access books undergo the same rigorous peer-review process as any other book published by LUP, so the decision on whether an application is eligible for funding from the KU Leuven Fund for Fair OA is completely separate from the editorial evaluation of the manuscript. All OA books financed by the Fund are made available in multiple formats: as a free eBook (typically ePDF and ePUB) and in a reasonably priced paper edition (print-on-demand).2

Since the Fund's launch in 2018, nine monographs have been published in Open Access thanks to its financial support, with another 29 subsidy applications approved. The Fund is gaining in visibility and attractiveness, as we clearly see a rising trend in the number of applications: whereas in 2018 and 2019 there were approximately six to seven proposals per application round, in 2020 this almost doubled to twelve requests for the first round.3 The Fund contributes to LUP's international branding too: 14 of the approved applications to date have been submitted by authors not affiliated with KU Leuven.

We are happy to see that the Fair OA Book Fund is a great success, not only in the number of books we are able to help publish, but also in the reach of these books, as demonstrated by the readership data that we now share online for each OA book published with the support of the Fund. I write about readership data here, even though you can never be 100% certain that someone who has viewed or downloaded a book has in fact read it. Similarly, buying a hard copy and putting it on your shelf does not prove actual readership. Platforms cannot measure whether or not a book has been read; they can only record the number of times the book has been accessed. Still, it is reasonable to assume that when someone downloads a publication, they intend to read it, at least partially, just as we can assume that when people borrow a book from the library, they plan to at least skim through it.

Data collection

We thought carefully about how we wished to display our data and, as evidenced below, this is not just one simple number of total online readers. One of the many advantages of Open Access books is that they are made available on various platforms, which increases their reach and impact. However, there is a lack of standardization in how usage metrics are gathered and presented, which makes readership data platform-specific, with each platform applying a different method for counting the number of times a book has been viewed and/or downloaded. This results in a complicated exercise of having to add up numbers that do not necessarily mean the same thing. These issues have been explained very accurately by Lucy Barnes at Open Book Publishers.4, 5 I will build on Lucy's observations to clarify how our data is collected and presented.

The Open Access books subsidized by the Fair OA Fund are distributed on three academic publication platforms: OAPEN, JSTOR, and Project Muse. Working with these platforms guarantees global impact, dissemination, and digital preservation.6 Each platform collects its data slightly differently:

| Platform | Measure | Update frequency | Geographic information | COUNTER-compliant7 |
|---|---|---|---|---|
| OAPEN | Book downloads | Monthly | Yes | Yes |
| JSTOR | Chapter views; chapter downloads | Monthly | Yes | No |
| Project Muse | Chapter views; chapter downloads | Monthly | Yes | Yes |

Table 1. Overview of Open Access publication platforms and their types of readership metrics.

A first important difference is that while OAPEN provides readership data for the entire book, JSTOR and Project Muse only aggregate numbers for separate chapters. This makes it extremely complicated to combine the numbers into one total: whereas a user who wishes to read the entire book needs just one download from OAPEN, that same user will need as many downloads as there are chapters on JSTOR or Project Muse. Moreover, even the way these chapters are counted varies: the reports offered by JSTOR exclude usage for the table of contents, front matter, and back matter; conversely, Project Muse does count usage for the cover, title page, about-the-authors section, and so forth. Consequently, when a user only accesses the table of contents, this is not counted by JSTOR, but it is included in the numbers provided by Project Muse.
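If one wanted to bring the two chapter-level reports closer together, one option would be to strip the paratext items that JSTOR already excludes before summing. The sketch below is a minimal illustration of that idea, not part of the Fund's actual workflow; the record format and the labels used to spot front and back matter are assumptions.

```python
# Minimal illustration: drop paratext items from Project Muse-style chapter
# records so the remaining counts are closer to what JSTOR reports.
# The field names and labels below are assumptions, not a real platform schema.

PARATEXT_LABELS = {
    "cover", "title page", "table of contents",
    "about the authors", "front matter", "back matter",
}

def chapters_only(records):
    """Keep records whose label does not look like front or back matter."""
    return [r for r in records
            if r["item_label"].strip().lower() not in PARATEXT_LABELS]

records = [
    {"item_label": "Table of Contents", "requests": 40},
    {"item_label": "Chapter 1. Setting the Scene", "requests": 120},
]
print(sum(r["requests"] for r in chapters_only(records)))  # 120: the TOC hits are excluded
```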

Secondly, while JSTOR and Project Muse count both views and downloads, OAPEN only reports the number of downloads (although it also offers readers the option to view the book via a PDF viewer). Again, this warrants a certain caution: we cannot simply add up the three datasets, as they do not disclose the same information. Furthermore, a single view or download is not always defined in the same way. What counts as a unique view depends on how the respective platform has defined the length of one continuous session (that is, a group of visits to the same book by the same user within a continuous time frame).8 A download number might appear more straightforward, but even here we should be critical of our data: some platforms count repeat downloads to the same IP address in quick succession as one download, while others count them as two separate downloads.
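As a concrete illustration of such a counting rule, the sketch below collapses repeat downloads of the same book from the same IP address within a short window into a single count. The 30-minute window and the event format are assumptions chosen for the example; each platform defines its own session length.

```python
# Minimal sketch of a deduplication rule: repeat downloads of the same book
# from the same IP address within a short window count only once.
# The 30-minute window is an assumption; platforms set their own session length.

from datetime import datetime, timedelta

SESSION_WINDOW = timedelta(minutes=30)

def count_unique_downloads(events):
    """events: iterable of (ip_address, book_id, timestamp) tuples."""
    last_counted = {}  # (ip, book) -> timestamp of the last counted download
    unique = 0
    for ip, book, ts in sorted(events, key=lambda e: e[2]):
        key = (ip, book)
        if key not in last_counted or ts - last_counted[key] > SESSION_WINDOW:
            unique += 1
            last_counted[key] = ts
    return unique

events = [
    ("203.0.113.5", "example-book", datetime(2020, 3, 1, 10, 0)),
    ("203.0.113.5", "example-book", datetime(2020, 3, 1, 10, 10)),  # repeat within the window, not counted
    ("203.0.113.5", "example-book", datetime(2020, 3, 2, 9, 0)),    # new session, counted
]
print(count_unique_downloads(events))  # 2
```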

Lastly, the usage statistics provided by OAPEN and Project Muse are COUNTER-compliant, but the reports offered by JSTOR are technically not. Even though JSTOR builds on the same reporting system, it has tweaked some features; for example, as stated above, JSTOR's reports exclude usage for the table of contents, front matter, and back matter.9 Whether or not a platform is COUNTER-compliant also affects how individual sessions are defined, resulting in more or fewer unique consultations being counted.

As each platform applies its own measures (book downloads vs. chapter downloads vs. book/page views), I have decided (on behalf of the KU Leuven Fund for Fair Open Access) to adopt a methodology inspired by Open Book Publishers: our usage statistics are broken down per platform instead of being presented as one total number of readers.10 Thus, when a user clicks on a book, they will see the following:

Figure 1. Video of readership data for Images of Immigrants and Refugees in Western Europe: Media Representations, Public Opinion and Refugees' Experiences (2019), eds. Leen d'Haenens, Willem Joris, and François Heinderyckx. The data includes an interactive map of downloads and user engagement for the edited volume.
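Behind a display like the one in Figure 1 sits a per-platform summary rather than a single total. The sketch below shows one way such a summary could be assembled; the input format, measure names, and numbers are placeholders for illustration, not our actual reports or code.

```python
# Minimal sketch: keep each platform's counts under its own measure instead of
# collapsing everything into one "total readers" figure.
# The report format and numbers are placeholders for illustration.

def summarise_per_platform(reports):
    summary = {}
    for r in reports:
        summary.setdefault(r["platform"], {})[r["measure"]] = r["count"]
    return summary

reports = [
    {"platform": "OAPEN", "measure": "book downloads", "count": 310},
    {"platform": "JSTOR", "measure": "total item requests (chapters)", "count": 950},
    {"platform": "Project Muse", "measure": "chapter views", "count": 420},
    {"platform": "Project Muse", "measure": "chapter downloads", "count": 180},
]
for platform, measures in summarise_per_platform(reports).items():
    print(platform, measures)
# OAPEN {'book downloads': 310}
# JSTOR {'total item requests (chapters)': 950}
# Project Muse {'chapter views': 420, 'chapter downloads': 180}
```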

As explained above, it is important to differentiate between how many times the book has been consulted as a whole, versus the individual chapters. That’s why OAPEN’s metrics are set apart from those of JSTOR and Project Muse. As of 2020, it is not possible to make a further distinction between online readers and downloads in our presentation of the chapter consultations because JSTOR now combines the counts of both views and downloads and refers to these collectively as total item requests. The readership map is intended to reflect the geographical reach of the publication and therefore combines all the data.11 Of course, we are only able to display the access location when the user has allowed the platform to track this information and we can never be completely certain that the acquired information is reliable, because users may be using Virtual Private Networks (VPNs) to provide an extra layer of security for their IP address. Both the detailed figures and the map are updated quarterly.
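For the map itself, country-level counts from the three platforms can simply be summed, since the goal is to show geographical reach rather than a comparable usage measure. A minimal sketch, assuming each platform delivers (country, count) pairs and that some locations are unknown:

```python
# Minimal sketch: merge country-level counts from several platform reports
# into one dataset for a readership map. The input structure is an assumption;
# real reports differ per platform and only cover users whose location is known.

from collections import Counter

def build_map_data(platform_reports):
    """platform_reports: iterable of lists of (country, count) pairs."""
    totals = Counter()
    for report in platform_reports:
        for country, count in report:
            totals[country] += count
    return dict(totals)

oapen = [("Belgium", 42), ("United States", 30)]
jstor = [("Belgium", 15), ("Germany", 12)]
muse = [("United States", 20), ("Unknown", 8)]  # location not always available

print(build_map_data([oapen, jstor, muse]))
# {'Belgium': 57, 'United States': 50, 'Germany': 12, 'Unknown': 8}
```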

It is possible to argue that metrics should not be displayed at all as long as platforms apply different collecting and reporting mechanisms.12 There are also other arguments against communicating usage statistics. First, by providing access to these metrics, we risk people mistakenly using these numbers for quantitative comparisons. It is by no means our intention to judge the performance of individual books based on this data. Although you can view the figures as an, albeit incomplete, indication of usage or distribution, you cannot employ these metrics for assessment purposes such as evaluating the scientific quality of the work. Secondly, because OA books are openly licensed, and thus by definition free to share on authors' personal websites, social media accounts, and institutional and subject repositories, or to pass on to colleagues and friends, you will never be able to reconstruct the total number of times the book has been accessed. Hence, these figures only paint part of the picture. The same can be said for tracking readers through sales figures of print copies, since the book can be lent out and distributed further by the buyer. Thirdly, usage metrics depend heavily on changing variables: language and subject interests and barriers, marketing activities, brand presence, etc. How we should engage with readership data thus differs for every individual press, and even for every individual title.

While members of the KU Leuven Fund for Fair Open Access recognize all of these concerns and difficulties, we nevertheless do wish to communicate our data in order to provide both our authors and interested users with a transparent and nuanced record of readership information. Moreover, we believe that these metrics help us to underline the added value of publishing in Open Access venues, as the numbers clearly show the high impact and wide reach of the publications. From our viewpoint as the funding agency, the data also affirms that the Fair OA Fund is achieving its goal. As long as we are clear in describing the methodology of how the data is measured, and all the complexities that come with it, we think the figures are worth disseminating.

In the next instalment of this blog series, I will describe the readership data in more detail for specific titles supported by the KU Leuven Fund for Fair Open Access.

Laura Mesotten is the Process Manager for Research and Open Scholarship at KU Leuven Libraries Artes. Her main expertise is in scholarly communication and Open Science, and she is a strong believer in Fair Open Access. If you want to know more about how to publish an Open Access book with Leuven University Press you can always contact her or have a look at the guidelines and application procedure.

  1. This is Open Access in a nutshell. For more detailed information see: https://www.kuleuven.be/open-science/what-is-open-science/scholarly-publishing-and-open-access/open-access-why-and-how []
  2. In a future project we will analyze the impact of OA on print revenues; our first findings reveal that OA does not cause a decline in print sales. []
  3. Until now, applications were reviewed twice a year; as of September 2020 they will be reviewed quarterly. []
  4. Barnes, Lucy (2019) “What We Talk About When We Talk About…Book Usage Data” [Blog] Available at https://blogs.openbookpublishers.com/what-we-talk-about-when-we-talk-about-book-usage-data/ []
  5. Other presses have signaled the same challenges of combining data from a variety of platforms. For example, see Sherer, John (2020) “Making OA Monographs More Discoverable, Usable, and Sustainable”[Blog] Available at https://longleafservices.org/blog/the-sustainable-history-monograph-pilot/ []
  6. As soon as it becomes available, the digital version of a given title is uploaded to these platforms. However, the official publication date of the book is that of the printed edition, which often appears a couple of weeks later than the eBook. This explains why we display usage statistics that predate the official publication date. []
  7. COUNTER stands for Counting Online Usage of NeTworked Electronic Resources. The project maintains a set of standards that publishers can follow to ensure that their usage data is reported consistently with the usage data provided by other publishers. This makes it easier for library staff and other Open Access stakeholders to compare like-for-like data. For more information, see https://www.projectcounter.org/ []
  8. Definition of a session taken from Open Book Publishers (n.d.) “How We Collect Our Readership Statistics.” https://www.openbookpublishers.com/section/84/1 []
  9. See https://support.jstor.org/hc/en-us/articles/360040981054-Books-at-JSTOR-Reports []
  10. See Open Book Publishers (n.d.) “How We Collect Our Readership Statistics.” https://www.openbookpublishers.com/section/84/1 []
  11. Our map is also inspired by Open Book Publishers. See for example Open Book Publishers (2018) “Open Access Around the World: Tracking Our Books Using Online Statistics.” [Blog] Available at https://blogs.openbookpublishers.com/open-access-around-the-world-tracking-our-books-using-online-statistics/ []
  12. There are currently some initiatives around OA book usage that aim to achieve a comprehensive and transparent mechanism to collect and aggregate usage metrics, most importantly the HIRMEOS project by OpenEdition (see the OPERAS Metrics Portal at https://metrics.operas-eu.org/) and the work of the Book Industry Study Group (see Hawkins, Kevin and Brian O’Leary (2019) “Exploring Open Access Ebook Usage.” [White Paper] Available at https://hcommons.org/deposits/item/hc:24147/). It is also interesting to note that the DH Commons blog is hosted on the Hypotheses platform, which is itself an outcome of OpenEdition. Hence, this post about readership data could not have been published on a more suitable platform. []


