COUNTER Release 5 Draft Code of Practice FAQs

 

Why is there a need for Release 5 (R5)?

The COUNTER Code of Practice (COP) has developed organically over 15 years, incorporating more reports as new format types and usage reporting requirements have come along. This growth has increased complexity, reduced consistency among reports and made compliance more difficult for publishers and other content providers. For libraries, it has introduced ambiguities in analysing data.

 

Based on feedback from libraries, publishers and other vendors, COUNTER has rethought the approach to generating, presenting and analysing COUNTER usage data, increasing the focus on consistency and clarity of metric types, reports and report formats. It has also introduced flexibility that reduces the number of reports while making the COP more adaptable for future changes to requirements.

How is Release 5 responding to feedback?

In surveys held in 2015/16, libraries prioritized the need for greater compliance among content providers, clarification of current ambiguities, and SUSHI compliance. Publishers and other content providers also prioritized reduction in ambiguities to ease the path of development.

 

As a result, Release 5 proposes a number of improvements, which are described in the questions that follow.

What are the “standard” COUNTER R5 reports?

COUNTER R5 reports are grouped by level of reporting, with each level having both standard and expanded reports. The expanded reports enable users to make requests for statistics that can be filtered or summarized by attributes such as Data_Type, Access_Type, Is_Archive, YOP, etc.

Platform Reports:

Platform Report 1 (PR1): Usage by Month and Platform. Provides summary usage of the most common metric types for searches, requests and access denied. Host Types: All.

Expanded Platform Report (PRx): Activity by Month and Platform. Flexible and detailed reporting on metrics captured and summarized at the platform level. Host Types: All.

 

Database Reports:

Database Report 1 (DR1): Usage by Month and Database/Collection. Reports on key Search and Request metrics needed to evaluate the effectiveness of a database or collection. Host Types: A&I Database; Aggregated Full Content; Multimedia (collections).

Database Report 2 (DR2): Access Denied by Month and Database/Collection. Reports on Access Denied activity for databases where users were denied access because simultaneous-use licences were exceeded or their institution did not have a licence for the database. Host Types: A&I Database; Aggregated Full Content.

Expanded Database Report (DRx): Activity by Month and Database/Collection. Flexible and detailed reporting on metrics captured and summarized at the database/collection level. Host Types: A&I Database; Aggregated Full Content; Multimedia (collections).

 

Title Reports:

Book Title Report 1 (BTR1): Usage by Month and Book. Reports on key Request metrics needed to evaluate the effectiveness of a given book. Host Types: E-Book; Aggregated Full Content.

Book Title Report 2 (BTR2): Access Denied by Month and Book. Reports on Access Denied activity for books where users were denied access because simultaneous-use licences were exceeded or their institution did not have a licence for the book. Host Types: E-Book; Aggregated Full Content.

Journal Title Report 1 (JTR1): Usage by Month and Journal. Reports on key Request metrics needed to evaluate the effectiveness of a given journal. Host Types: E-Journal; Aggregated Full Content.

Journal Title Report 2 (JTR2): Access Denied by Month and Journal. Reports on Access Denied activity for journal articles where users were denied access because simultaneous-use licences were exceeded or their institution did not have a licence for the journal. Host Types: E-Journal; Aggregated Full Content.

Expanded Title Report (TRx): Activity by Month and Title. Flexible and detailed reporting on metrics captured and summarized at the title level. Host Types: E-Book; E-Journal; Aggregated Full Content.

 

 

Item reports:

Expanded Item Report (IRx): Activity by Month and Item. Flexible and detailed reporting on metrics captured and summarized at the article or item level. Host Types: E-Book; E-Journal; Aggregated Full Content; Repository.

 

 

Provider Discovery Reports:

The following reports are designed to supply publishers and other data providers with usage of the content that they allow other sites, such as discovery services, to host and index.

Provider Discovery Article Report 1 (PDAR1): Usage by Month and Article. Reports on request and investigation activity by article and customer. Host Types: Abstract & Index; Discovery; Aggregated Full Content.

Provider Discovery Database Report 1 (PDDR1): Usage by Month and Database. Reports on request and investigation activity by database and customer. Host Types: Abstract & Index; Discovery; Aggregated Full Content.

Provider Discovery Title Report 1 (PDTR1): Usage by Month and Title. Reports on request and investigation activity by title and customer. Host Types: Abstract & Index; Discovery; Aggregated Full Content.

 

See the “Changes from COUNTER R4” document for how COUNTER R4 reports map to the new COUNTER R5 reports.

What happened to the Consortium Reports?

Due to their size, creating and consuming R4 consortium reports was not always practical or possible. COUNTER recognizes the challenges and complexities consortia face in gathering and reporting on usage for their members. Release 5 will include a solution that will enable consortia to use standard COUNTER reports in a way that best fits their workflows. The approach offered in R5 will allow for the development of tools that can fetch all member usage through a single action on the part of the consortium administrator and to present the results in a single spreadsheet, if desired.  Supporting documentation and tools will be provided in conjunction with the R5 release. COUNTER is committed to facilitating the creation of such Open Source tools and making them available to consortia worldwide.

What happened to the Mobile Reports?

Capturing usage by mobile devices is less relevant with the responsive design of most sites. The variety of “mobile” devices also makes them difficult to categorize, given that today’s smartphones have screen resolutions that exceed those of some desktops.

What are the COUNTER R5 metric types?

One of the main goals of COUNTER R5 is to simplify the Code of Practice and remove ambiguities and inconsistencies. Over time the list of COUNTER metric types has grown, yet many of the newer metric types have had problems that resulted in confusion or inconsistencies among content providers. The new list of R5 metric types is more generic and greatly reduces the effect that varying user-interface approaches can have on usage statistics. The new metric types, by category, are:

 

Searches

searches_regular: Number of searches conducted on the host site or against a user-selected database, where results are returned to the user in the host UI. The user is responsible for selecting the databases or set of data to be searched. This metric only applies to usage tracked at the database level and is not represented at the Platform level. Host Types: Aggregated Full Content; A&I Database. Reports: DR1, DRx.

searches_automated: Searches conducted on the host site or discovery service where results are returned in the host-site UI and multiple databases are searched without the user selecting those databases. This metric only applies to usage tracked at the database level and is not represented at the Platform level. Host Types: All, except Repository. Reports: DR1, DRx.

searches_federated: Searches conducted by a federated search engine, typically where the search activity is conducted remotely via client-server technology. This metric only applies to usage tracked at the database level and is not represented at the Platform level. Host Types: All, except Repository. Reports: DR1, DRx.

searches_platform: Searches conducted by users and captured at the Platform level. Each user-initiated search is counted only once, regardless of the number of databases/collections involved in the search. This metric only applies to Platform reports. Host Types: All, except Repository. Reports: PR1, PRx.

 

 

Requests

 

total_investigations: Total number of times a content item, or information related to a content item, was accessed. Double-click filters are applied to these transactions. Examples of items are articles, book chapters and multimedia files. Host Types: All, except Repository. Reports: BTR1, JTR1, TRx, DR1, DRx, PR1, PRx, IRx.

unique_item_investigations: Number of unique content items investigated in a user session. Examples of items are articles, book chapters and multimedia files. Host Types: All, except Repository. Reports: as above.

unique_title_investigations: Number of unique titles investigated in a user session. Examples are titles of journals and books. Host Types: All, except Repository. Reports: as above.

total_requests: Total number of times a content item was requested (i.e. the full text or content was downloaded or viewed). Double-click filters are applied. Host Types: All. Reports: BTR1, JTR1, TRx, DR1, DRx, PR1, PRx, IRx.

unique_item_requests: Number of unique content items requested in a user session. Examples of items are articles, book chapters and multimedia files. Host Types: All. Reports: as above.

unique_title_requests: Number of unique titles requested in a user session. Examples are titles of journals and books. Host Types: All. Reports: as above.
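As an illustration of how the Request metrics above differ, here is a minimal Python sketch; the session log and event format are invented for the example and are not part of the Code of Practice (double-click filtering is assumed to have been applied already):

```python
# One user session: each event is (title, item) for a full-content request.
# Illustrative data: three chapters of one book, one chapter requested twice.
session_events = [
    ("Book A", "Chapter 1"),
    ("Book A", "Chapter 2"),
    ("Book A", "Chapter 2"),  # same chapter requested again in the session
    ("Book A", "Chapter 3"),
]

total_requests = len(session_events)                    # every request counts
unique_item_requests = len(set(session_events))         # dedupe per content item
unique_title_requests = len({title for title, _ in session_events})  # dedupe per title

print(total_requests, unique_item_requests, unique_title_requests)  # 4 3 1
```

The same session thus yields three different numbers depending on the level of deduplication, which is the distinction the three metric types are designed to capture.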

 

Access Denied

no_licence: Number of unique content items within a user session where access was denied because the user’s institution did not have a licence to the content. Host Types: All. Reports: BTR2, JTR2.

user_limit_exceeded: Number of unique content items within a user session where access was denied because the licensed simultaneous-user limit for the user’s institution was exceeded. Host Types: All. Reports: BTR2, DR2, JTR2.

 

See the “Changes from COUNTER R4” document for how COUNTER R4 metric types map to the new COUNTER R5 metric types and reports.

What will the new COUNTER Reports look like?

The COUNTER website will provide an extensive set of documents describing COUNTER R5 and how it differs from COUNTER R4.

 

Will the Book Reports include zero usage?

No. COUNTER looks for ways to ensure that usage reporting is consistent and comparable across providers, and aims for a Code of Practice that can be implemented by the majority of publishers and analysed by the majority of libraries. Including zero usage for e-books creates challenges that make it impossible to offer comparable and consistent reporting.

 

 

Although R5 reports will not include zero usage, COUNTER does recognize that librarians need to match usage and non-usage to their entitlements. COUNTER is therefore endorsing and being represented on a new NISO initiative, currently referred to as KBART-Automation. This initiative will set expectations for delivering entitlement data with automatic harvesting using the SUSHI protocol. The goal is that publishers will provide next-generation SUSHI harvesting of BOTH usage and entitlements, and that both sets of reports will include the necessary identifiers to allow the matching analysis that librarians are expecting. Additionally, COUNTER envisages libraries and publishers joining together to create a community that develops open-source tools to help retrieve and analyse usage. One such tool could automatically retrieve the KBART entitlements and COUNTER reports from the same content provider and output a report with only the metric types of interest but including titles with zero usage.

Could there be a way to limit or filter by package entitlements?

COUNTER reports do not provide usage by package; however, package-level analysis is possible: take one of the journal reports, use Excel, Google Sheets or similar tools to merge in package details (from a separate package title list obtained from the publisher), then use filters and/or pivot tables to get package-level totals.
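The merge-then-total workflow can also be done in a few lines of Python; the journal names, usage numbers and package assignments below are invented for illustration:

```python
from collections import defaultdict

# Hypothetical inputs: per-journal usage taken from a COUNTER journal report,
# and a separate package title list supplied by the publisher.
journal_usage = {"J. Alpha": 120, "J. Beta": 45, "J. Gamma": 300}
package_of = {"J. Alpha": "Core", "J. Beta": "Core", "J. Gamma": "Premium"}

# Merge the package details into the usage data, then total by package --
# the same join-then-pivot step you would do in Excel or Google Sheets.
package_totals = defaultdict(int)
for journal, requests in journal_usage.items():
    package_totals[package_of.get(journal, "Unassigned")] += requests

print(dict(package_totals))  # {'Core': 165, 'Premium': 300}
```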

What is the timeline for release?

[Figure: Release 5 timeline]

COUNTER is publishing the draft Release 5 of the Code of Practice (COP) for consultation in January 2017. Feedback from this consultation phase will be analysed, the COP will be refined and the final COP will be published in July 2017. We value your feedback through this consultation phase, and ways in which you can provide feedback will be posted on the COUNTER website.

 

What is the timeline for compliance?

All COUNTER R4-compliant content providers will need to be compliant with COUNTER R5 within 18 months of publication of the COP. The effective date for COUNTER R5 (when compliance is required) is the delivery of the January 2019 reports.

If I am not already R4 compliant, should I wait until R5 is released?

 

I am due an audit. How will R5 affect the audit timescale?

How long will COUNTER 4 reports be available once COUNTER 5 takes effect?

Content providers that are compliant with COUNTER R4 at the time COUNTER R5 goes into effect are expected to continue to offer COUNTER R4 metrics and reports for a minimum of 12 months after the COUNTER R5 effective date. However, there is no audit requirement on the R4 reports once COUNTER R5 takes effect.

How will the Code of Practice be published?

The Code of Practice (COP) will be published on the COUNTER website as a single document and as a web-navigable set of links between sections of the COP and linked support pages and appendices. Users who want the full technical schema will therefore be able to access it easily, while those interested in an overview or a particular area will be able to navigate to the appropriate section.

How will the Code of Practice be updated in the future?

Release 5 of the Code of Practice will include procedures that allow for continuous maintenance. This means that changes can be made to the Code of Practice without an entire re-release being prepared. The change process will be transparent and will include a consultation period to ensure input has been received from the community. Once substantive changes (changes that require content providers to do something different) are approved by the COUNTER Executive Committee, content providers will have approximately 12 months to implement the changes before they become officially required for compliance (see section 12 in the COUNTER R5 Code of Practice draft).

 

Freely available abstracts


Question:
If a database publisher makes some abstracts freely available via Google to promote usage (because authorised users don’t necessarily find them when they search through their library systems), there is a concern about the attribute Access_Type, which describes the nature of access control that was in place when the content item was accessed. Would this be a complication for freely available abstracts?

Answer: Even though the abstracts are openly available via Google, the content would be counted as controlled, because further access is only available to users covered by the library’s licence to the content. Also, the Access_Type attribute reflects the state of access control on the content itself, not the metadata.

 

Free to read for some people


Question:
Some content is not made freely available to everyone; for example, free access is given to some authors. What attribute would apply?

Answer: This would count as controlled.

 

Back filling metadata for access types


Question:
There is concern about Access_Type where the associated metadata for back articles does not record it. Do publishers need to back-fill?

Answer: Back-filling of metadata about access types is not required; the requirement applies only going forward.

 

Concerns about devaluing HTML


Question:
There is concern that the new metrics may devalue use of HTML. What is the reason behind the new metrics in the draft Release 5?

Answer: The challenge in Release 4 is that PDF and HTML identified only two of many formats, so a format-agnostic metric is proposed for Release 5. There is also evidence that some librarians disregard HTML usage entirely because of concerns about double counting: when HTML is displayed automatically as users navigate to the PDF, some librarians using COUNTER R4 count only the PDF usage, which completely devalues the HTML usage (in such a scenario, if the user’s information needs were satisfied by the HTML, the transaction would not be counted at all). The metric type unique_item_requests eliminates this double counting when HTML and PDF are accessed in the same session, and COUNTER R5 strives to address the problem through it.

 However, publishers and vendors who wish to report HTML separately can “extend” the Release 5 Code of Practice to include these metrics in the Expanded Title Report.

Question: It can be useful to know HTML/PDF because of site design: different platforms count HTML views differently – some force an HTML view with every abstract view, some don’t – so I need to be able to determine for myself how much to “weight” HTML views rather than have them simply folded into the PDF count.

Answer: Comparing total_item_requests with unique_item_requests provides similar information. A high number of total_item_requests relative to unique_item_requests offers insight into user-interface design and may indicate that the UI automatically presents the full text when the user arrives at the article landing page (i.e. HTML displays automatically), from which they can select another full-text format (e.g. PDF). When a user views the HTML and then the PDF, total_item_requests increments by 2, but unique_item_requests increments by 1.
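The HTML-then-PDF scenario described above can be sketched in a few lines of Python (the event format is invented for illustration and is not defined by the Code of Practice):

```python
# One session: the user views the HTML of an article, then downloads its PDF.
session_events = [("article-123", "HTML"), ("article-123", "PDF")]

total_item_requests = len(session_events)                         # both formats count
unique_item_requests = len({item for item, _ in session_events})  # one distinct item

print(total_item_requests, unique_item_requests)  # 2 1
```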

Question: The format differentiation allows us to examine usage data in the context of site design. Release 5 usually gives more granularity, which is good, but HTML/PDF is granularity that we’re losing.

Answer: Format-specific granularity is indeed lost. The HTML and PDF counts were introduced to help detect the “user interface effect” of HTML being shown before the PDF, and the perceived double counting that results. Release 5 addresses that directly with the unique_item_requests metric.

Question: Does one of these metrics only count a full text use? Our subscription analysis is based on downloads or download equivalent rather than clicks.

Answer: Yes, unique_item_requests for journal content and unique_title_requests for eBook content.

 

Transition


Question:
If the Release 5 Code of Practice is introduced in 2019, say, does that mean that Release 4 is stopped abruptly? Or do the reporting formats exist alongside each other for a while? If so, for how long?

Answer: This question is addressed in the FAQs that accompany the COP. The requirement is that, to help facilitate libraries’ smooth transition to R5, R4 reports must continue to be provided for a minimum of 12 months after the effective date for R5.

 

Book Reports


Question:
How will we report download figures for books?

Answer: There are three proposed reports for books:
Book Title Report 1 (BTR1): Usage by Month and Book
Book Title Report 2 (BTR2): Access Denied by Month and Book
Expanded Title Report (TRx): Activity by Month and Title

Section 4.3.1 of the draft Code of Practice provides the detail of which metric types are included, and Appendix B discusses the changes from R4 and highlights how the new metric types translate from the R4 reports. For example, unique_title_requests is the equivalent of what was counted in R4’s Book Report 7 and provides the consistent reporting that was not possible when some content providers offered BR1 and others offered BR2. total_item_requests is the equivalent of what is counted in R4’s BR2.

 

JR5

Question: Journal Title Report 5 should be considered as required and essential as 1 and 2 – JR5 has become critical for library decision-making.

Answer: Please send us (lorraine.estelle@counterusage.org) more examples of use cases for YOP data.

Question: Would the Expanded Reports give options that would imitate JR5 data?

Answer: Expanded reports, as proposed, would provide Year of Publication detail that could be converted into a JR5 “look” using pivot tables.
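A rough sketch of that pivot, assuming hypothetical field names and data: expanded-report rows (one row per title and year of publication) are turned into a JR5-style layout with one row per title and one column per YOP.

```python
# Illustrative expanded-report rows; field names are invented for the example.
rows = [
    {"Title": "J. Alpha", "YOP": 2015, "Unique_Item_Requests": 10},
    {"Title": "J. Alpha", "YOP": 2016, "Unique_Item_Requests": 25},
    {"Title": "J. Beta",  "YOP": 2016, "Unique_Item_Requests": 7},
]

# Pivot: one dict per title, keyed by year of publication.
years = sorted({r["YOP"] for r in rows})
pivot = {}
for r in rows:
    pivot.setdefault(r["Title"], {y: 0 for y in years})[r["YOP"]] += r["Unique_Item_Requests"]

print(pivot)
# {'J. Alpha': {2015: 10, 2016: 25}, 'J. Beta': {2015: 0, 2016: 7}}
```

A spreadsheet pivot table performs the same reshaping; the code simply makes the transformation explicit.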

Question: JR5 was always about much more than just archives use.

Answer: Please provide more examples of use cases for Year of Publication data.

Question: We used JR5 to count more than archival use. We’re tracking years where we have perpetual access. I thought item investigations corresponded to record views and item requests corresponded to result clicks?

Answer: Investigations correspond to a combination of result clicks and record views; Requests correspond to full-text requests. Investigations count all activity that can be attributed to a content item (e.g. abstract viewed, full text viewed, OpenURL link clicked), and Requests are a subset of Investigations that reflect accessing or downloading the full content. Note that the term “full text” is not used in Release 5 because these metrics are about requesting full content, which could be text, video, audio files, images, etc.; hence the term “requests” was used.

 

Sessions

Question: Does anyone but me miss “sessions” as a metric from Release 3? For a number of our business databases, that is the only metric we can record, so we create Release 3 DB1 reports manually.

Answer: Consortia dropped the sessions count several years ago and, because of the nature of modern interfaces, which often operate in a stateless way, the notion of a session is hard to capture. One federated search engine may hold a single session open for an entire day servicing all users, while another may generate a session for every search.

 

Federated and automated

Question: Is an API call considered federated or automated?

Answer: It depends on the context in which the API is being used. If the API is used by a search interface operated by another vendor and the typical behaviour is multiple-database searching, then the activity is considered “federated” (the interface operates on one platform and the searching happens, and is counted, on another; e.g. MetaLib using an API to search EBSCOhost data would be counted as “federated” in EBSCOhost stats). If the API is used by an application such as a mobile app, then the activity would be considered “regular” searches. Vendors that offer applications normally assign API keys to app users, or have specific controls or interfaces in place for their use, and thus should be able to determine the context of the activity.

 

Access types

Question: Are there a lot of hybrid journals that mix APC and non-APC content? Is that an attribute that will tend to hold for the entire journal?

Answer: We suspect this would be rare if it happens at all.  A non-APC Open Access journal is one where a society or other organization is sponsoring the journal’s publication cost so that the journal is completely open access.  This is different from an organization providing funding to pay APCs.

Access Methods

Question: If the publishers can detect SciHub as TDM, we would LOVE to have that broken out separately!

Answer: COUNTER has a group that is looking into how to track robotic and rogue usage. Their findings may inform future releases of the Code of Practice, but for now Release 5 will continue with the approach taken by Release 4.

Question: What are the distinguishing criteria for whether usage is TDM or not?

Answer: The main distinction is that TDM usage is done through agreement and special arrangement; the expectation is that the provider will use a special profile or API key, or require that a separate IP address be used, so the activity can be tracked.

Question: I think the issue with TDM usage is being missed: users accessing content for TDM come through the same channels as regular traffic and are indistinguishable from it, i.e. local portal applications making calls driven by users against a web interface or an API look the same as users making calls for TDM purposes. The difference is often purpose, not method.

Answer: Using regular interfaces without prior arrangement for TDM would be considered inappropriate use. Such usage patterns could trigger IPs being blocked and, when detected, that usage should not be counted as regular usage.

Year of Publication

Question: Why would the print Year of Publication be used?

Answer: For consistency, as this represents when the work was initially published and is the same year of publication used in bibliographic references. Also, publishers base their subscriptions and entitlements on print publication years.

Question: What is the maximum number of years that can be displayed in a Year of Publication report?

Answer: All years with usage. There are no restrictions.

Report formats

Question: Can the header be on a separate tab? The header is getting very long for someone trying to read the report in Excel, etc. – it pushes the actual data too far down. Consider putting the header in a separate tab, so the data starts with just one row of column headers and is easier to ingest into data-manipulation software.

Answer: Putting headers on separate tabs may work for Excel versions, but larger reports downloaded as TSV have no tabs. For reading spreadsheets, both Excel and Google Sheets allow you to “freeze” the header rows: scroll the display so that the header row and title both show, click in cell B14 and select the “Freeze” option. You can then scroll vertically with the header row staying in place, and horizontally with the title staying in place. As for ingesting into reporting software, SUSHI should be used where possible, as it provides a direct feed and is much easier.
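For programmatic ingestion of the tabular (TSV) form, the report header can simply be skipped. This sketch assumes the column-header row sits on row 14 (12 header rows plus a blank line), as the cell references above suggest, and uses toy stand-in data rather than a real R5 report:

```python
import csv
import io

# Toy stand-in for a COUNTER-style TSV: 12 report-header lines, one blank
# line, then the column headers and data.
header_lines = [f"Header_Field_{i}\tvalue" for i in range(1, 13)] + [""]
body = "Title\tJan-2019\tFeb-2019\nJ. Alpha\t10\t12\n"
report = "\n".join(header_lines) + "\n" + body

# Skip the 13 header/blank lines; the column headers are then the first row.
reader = csv.reader(io.StringIO(report), delimiter="\t")
rows = list(reader)[13:]
columns, data = rows[0], rows[1:]
print(columns, data)  # ['Title', 'Jan-2019', 'Feb-2019'] [['J. Alpha', '10', '12']]
```

As the answer notes, SUSHI is the better route for automated ingestion; this is only a fallback for the spreadsheet/TSV form.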

Question: Having the header on the same tab makes it harder to apply filters to the data automatically.

Answer: In the draft Release 5 Code of Practice, a blank line was added above the column headers to simplify filtering and sorting. With both Google Sheets and Excel, all that is needed is to click on cell A12 (i.e. Title) and click the “Filter” option; the body is then automatically filtered.

 

Expanded Reports

Question: Can we indicate a range of Year of Publication?

Answer: Implementation of the reporting interface would be up to the content provider, but we imagine an option to specify a range of years, or possibly multiple ranges of years (e.g. as in Acrobat’s print interface, where you can specify a comma-separated list of numbers or ranges of numbers).
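A hypothetical sketch of parsing such a comma-separated list of years and ranges; neither the syntax nor the function is defined by the draft Code of Practice, and a provider's interface might accept something quite different:

```python
def parse_year_ranges(spec):
    """Parse an Acrobat-style list such as "2010-2012, 2015" into a
    list of individual years (illustrative only)."""
    years = []
    for part in spec.split(","):
        part = part.strip()
        if "-" in part:
            start, end = (int(p) for p in part.split("-", 1))
            years.extend(range(start, end + 1))  # expand the range inclusively
        else:
            years.append(int(part))
    return years

print(parse_year_ranges("2010-2012, 2015"))  # [2010, 2011, 2012, 2015]
```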

Question: If we put in a range of years for Year of Publication in the extended report, will we get that entire range added up into a single total, or each year in a separate line or column? I want every year separately, and if I have to generate a separate report for every Year of Publication, that will be unmanageable.

Answer: The reports, as proposed, would have one row per Year of Publication with usage.

Question: The JR5 report allowed publishers to collapse “pre-[some year]” into a single column. It was not consistent, but it seems we will lose that totally.

Answer: The proposed report would have individual years of publication (with usage) presented and thus avoids the grouping of years on the report.  The result will be a report that is more verbose, but compatible across all vendors.

Question: I’m worried that the spreadsheet will be too big for Excel to even open if Release 5 makes a separate line for each year x each journal.

Answer: The reports may be large.  COUNTER has engaged with several content providers to review technical implementation of the Code of Practice to make sure it can be implemented and use that feedback to make adjustments, if necessary.

Question: Why not have simple canned reports that service the needs of all libraries, not just big research libraries, with the option to run customized reports.  This is too complicated.

Answer: The “standard” reports are actually “canned” reports designed to meet the most common needs.  The “expanded” reports are the customized reports.

Question: Can you show us an example of what an expanded report would look like with 10 separate YOPs broken out for a long list of journals and in a single file?

Answer: Various examples will be prepared.

Question: Consider what software like Tableau needs regarding the header: it needs the table to have just one column-header row at the top, so it isn’t just human readers that will have trouble with that big header.

Answer: For SUSHI-harvested usage, the plan is to have a report attribute that says the report header is to be omitted. Ideally reporting systems will use SUSHI to harvest reports rather than Excel/TSV which are primarily for human consumption.

Robots and crawlers

Question: Will rogue usage be removed?

Answer: COUNTER has a group that is considering how to track robotic and rogue usage. Their findings may inform future releases of the Code of Practice but for now Release 5 will continue with the approach taken by Release 4.

 

 