Welcome to our Frequently Asked Questions area.
At the start of this page you’ll find answers to the questions we are asked most frequently about the implementation of Release 5. The numbers refer to the relevant sections in the Code of Practice.
The second part of this page includes answers to frequently asked questions from librarians and information managers, including those submitted via webinar.
Q: Our platform search tool can be customized to filter on a single journal title. Should this type of search be included in the Platform Report?
Q: If a requested report includes a timespan that has not been processed (e.g. because it lies in the future), should the fields be populated by zeros, left blank, or should the columns be excluded from the report?
A: The report should include only the months for which usage has been processed. If the request was for months that are not yet processed, include an exception to that effect in the header. See Section 5.0 for further information about exception reporting to indicate that partial data is being returned.
Q: Should book series, online versions of loose-leaf products, and other irregularly updating products be included in the Journals Reports?
A: Yes, if they have an ISSN and providing they are not being included in a Book Report, as may be the case with books that are part of a monographic series.
Q: Our reference works have their own search tool and index. The standard views for Book Title Reports do not include metrics for search activity, but we feel that the search counts are important. Should these titles be included in the Database Reports?
A: Yes. The searches in these titles should then only be counted in the Database Reports, not in the Platform Report.
Q: Is a search counted any time the system executes a search to retrieve a new set of results?
A: Yes. For example:
Q: Some publishers present abstracts, full text, and references on different pages, others within the same HTML page, usually with tabs or anchors. Will the first solution not generate a lot more “Investigations” than the second?
A: Yes, but the “unique” metrics were created to alleviate these differences in publishers’ interfaces. An “Investigation” is intended to measure users’ expression of interest in an item or title; “Requests” are about accessing the actual content item. Requesting an item is also considered an expression of interest; therefore, accessing a content item will be counted as both a Request and an Investigation.
Q: If a user views an abstract that pops up while scanning a table of contents, does this count as an “Investigation”?
A: Yes. If the popup was opened, then an Investigation should be counted.
Q: Should the following events be counted as a “Request”?
A: Yes, as long as an individual user triggers the downloads. If, on the other hand, a harvesting program triggers the download or a download is pushed towards the user by the service without the user’s prior request, this does not count as a request. Note that COUNTER “Unique_Item_Requests” are counted at article level, so if the user downloads five articles using the “Download the full text of all search results” button, five “Unique_Item_Requests” will be recorded.
A: No. This does not count as a “Request”, nor as an “Investigation”.
A: This counts as an “Investigation”, not a “Request”.
Q: What happens if a user clicks a link to the full text but does not have a license, and the access control system redirects the user to the abstract page? No actual “Access Denied” page is displayed, so does this count as “Access Denied”?
A: Yes, this counts as Access denied: “No_License”. The view of the abstract counts as an “Investigation”.
Q: Should the following be counted towards “Searches_Regular”?
Q: As a publisher, should I count journal home-page views as “Investigations”?
A: No. “Requests” and “Investigations” relate to viewing or interacting with content published within a journal; viewing the home page does not count at all.
Q: As a publisher, should I count table-of-content views as “Investigations”?
A: No. “Requests” and “Investigations” relate to viewing or interacting with content published within a journal. Note that if a user links from the ToC to an article in the journal, the subsequent interaction with the article counts as an “Investigation”, and possibly a “Request” if full text is requested.
Q: If a user views an article HTML and then downloads the PDF in the same session, how should this be counted?
A: This is counted as:
Unique_Item_Requests = 1
Total_Item_Requests = 2
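The distinction above can be illustrated with a minimal sketch (not the official COUNTER algorithm — the field names and data shapes are simplified for illustration): every full-text view adds to the total, while the unique count collapses repeat views of the same item within one session.

```python
# Illustrative sketch: Total_Item_Requests vs Unique_Item_Requests.

def count_requests(events):
    """events: list of (session_id, item_id) tuples, one per full-text view."""
    total = len(events)                        # every view counts toward the total
    unique = len({(s, i) for s, i in events})  # at most one per item per session
    return total, unique

# A user views the HTML and then the PDF of the same article in one session:
events = [("sess-1", "article-42"), ("sess-1", "article-42")]
total, unique = count_requests(events)
# total == 2 (Total_Item_Requests), unique == 1 (Unique_Item_Requests)
```

Had the second view been of a different article, both counts would be 2, matching the two-articles example later in this FAQ.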
Q: Does the purchase of a document through our interface count as a full-text retrieval?
A: If the article is delivered from the host to the user as a result of the purchase (e.g. pay-per-view), then yes, that is a full-text retrieval. If the purchase is related to document delivery or Interlibrary Loan (ILL) activity, that counts as an “Investigation”. Note that in Release 5 the TR_J1 report includes by definition all full-text article usage: no filters are applied to indicate licensing status for the content. The “Access_Type” filter (“Controlled” vs “OA_Gold”) is based on the access control restrictions for the content, not the licensing model. To filter on licensed content only, a library should use its holdings report and limit to licensed content only.
Q1: We have noticed that in our case, some reports will contain exactly the same numbers, because we host only one type of publication, and offer only one Access_Type. Do we need to provide two reports if the numbers are the same?
Q2: We are a database publisher providing a range of databases but nothing else. We find that in each of the COUNTER reports (Platform Report and Database Report) Searches_Platform is the same as the sum of Searches_Regular.
Do we need to provide both reports if the numbers are the same? If not, which one should we provide?
A: In both cases above, all metrics must be reported, even if the numbers are identical. The reason is that some library evaluations may require Unique_Item_Requests or Searches_Regular, while others may require Unique_Title_Requests or Searches_Platform. Identical numbers also provide additional insight into how the host user interfaces function, which can be helpful when comparing usage across content providers.
Q3: We provide complete books rather than chapters. TR_B1 has both Item_Requests and Title_Requests. If we show both metrics, they will always have the same values (you can’t get more item requests than title requests as we deliver the entire book). Should we show the item requests row with the same values as the title or suppress the rows? Whatever the answer, do we do the same in the SUSHI output?
A: Both Item_Requests and Title_Requests must be included in the tabular and JSON reports.
Q: What should appear in the Created_By field when a third party generates usage reports on behalf of a content provider?
A: COUNTER reports are not only created by the content providers or the third parties that generate the reports on their behalf, but also by other systems, such as ERM and usage consolidation systems, which collect the reports, process them, and create new reports from the processed data. It is therefore relevant whether a report containing a content provider’s usage was generated by a third party, and this information should be included in the report.
COUNTER therefore suggests:
Q: Can Section_Type be applied to Unique_Title metrics in the Title Master Report (TR)?
A: The Section_Type should be shown for Unique_Item metrics, but not for Unique_Title metrics. If Section_Type is requested as an attribute to show in a Title Master Report, it must be left empty for Unique_Title metrics in tabular reports and omitted for Unique_Title metrics in JSON reports.
The rules for calculating the unique title counts are as follows:
If multiple transactions qualifying for the metric type in question represent the same title and occur in the same user-session, only one “unique” activity MUST be counted for that title.
Unique_Title metrics are independent from the Section_Type, because the associated Unique_Items may be one, or more than one type of section.
Example: a user downloaded two chapters and also the whole of a book in one session.
The counts would be:
Unique_Item_Requests = 2, Section_Type = Chapter
Unique_Item_Requests = 1, Section_Type = Book
Unique_Title_Requests = 1, Section_Type = left empty in tabular reports/omitted in JSON reports.
In a JSON report two Report_Items are required; one with the Unique_Title metrics without Section_Type and one with the Unique_Item and Total_Item metrics with Section_Type.
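The chapter/book example above can be sketched in a few lines (an illustration only — the tuple fields are simplified stand-ins, not the COUNTER_SUSHI schema): Unique_Item counts are grouped by Section_Type, while the Unique_Title count ignores Section_Type entirely.

```python
# Sketch: Unique_Item counts per Section_Type vs a Section_Type-free
# Unique_Title count, for one user-session.
from collections import Counter

def unique_counts(events):
    """events: list of (session_id, title_id, item_id, section_type) tuples."""
    items = {(s, t, i, sec) for s, t, i, sec in events}
    per_section = Counter(sec for _, _, _, sec in items)  # Unique_Item per Section_Type
    titles = {(s, t) for s, t, _, _ in events}            # Unique_Title ignores Section_Type
    return per_section, len(titles)

# A user downloads two chapters and the whole book in one session:
events = [
    ("sess-1", "book-7", "ch-1", "Chapter"),
    ("sess-1", "book-7", "ch-2", "Chapter"),
    ("sess-1", "book-7", "book-7", "Book"),
]
per_section, unique_titles = unique_counts(events)
# per_section == {"Chapter": 2, "Book": 1}; unique_titles == 1
```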
Q: How can I make sure the tabular reports show the monthly usage data dates in Mmm-yyyy format?
A: In all COUNTER reports there is a series of columns with usage for each month covered by the report, for example “Aug-2019”, “Sep-2019”, etc. In the tabular reports you must use the “Mmm-yyyy” format for these dates. (Note that in the SUSHI version of the report, each month is represented by “Begin” and “End” date elements.)
If you are working in Excel, “Mmm-yyyy” can default to a different format depending on the country you work in. For example, if you work in the US and key “Jun-2019” into a cell, the date may automatically default to “jun/01/2019”. If you work in the UK, “01/06/2019” is likely to be the automatic default.
This happens because when you enter something like “Jun-2019”, this value is immediately converted to an internal representation (a number, in this example 43617 for 01-Jun-2019) and then displayed according to the format of the cell set automatically by Excel.
You can fix this issue by using the ‘Text’ format in Excel. Text format cells are treated as text even when a number is in the cell. The cell is displayed exactly as entered.
Another issue is that dates may be correct in CSV and TSV files but automatically default to another date format if you open them in Excel. It is important to import the CSV and TSV files into Excel rather than open them with Excel. This useful article explains how to open a CSV file in Excel to fix date and other formatting issues: https://support.insight.ly/hc/en-us/articles/212277188-How-to-open-a-CSV-file-in-Excel-to-fix-date-and-other-formatting-issues
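If you are generating the monthly column labels programmatically, one way to avoid locale surprises is to spell out the month abbreviations rather than rely on locale-dependent formatting (e.g. Python’s `%b`). A minimal sketch:

```python
# Sketch: "Mmm-yyyy" column labels for a reporting period, using explicit
# English month names so the output does not depend on the system locale.
from datetime import date

MONTHS = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]

def month_labels(begin, end):
    """Yield labels like 'Aug-2019' for each month from begin to end inclusive."""
    y, m = begin.year, begin.month
    while (y, m) <= (end.year, end.month):
        yield f"{MONTHS[m - 1]}-{y}"
        y, m = (y + 1, 1) if m == 12 else (y, m + 1)

print(list(month_labels(date(2019, 8, 1), date(2019, 10, 1))))
# ['Aug-2019', 'Sep-2019', 'Oct-2019']
```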
Q: The Friendly Guide to RELEASE 5 for Providers indicates that “watch whole video” should be counted as a “Request”. How should “whole video” be quantified? Does it mean that on a 60-minute video a user must watch all 60 minutes before it is recorded as a “Request”? What if they only watch for 59 minutes?
A: Watching a video should be treated in the same way as reading an article: the request comes with starting to watch, not necessarily watching the whole thing. As with a journal article, if the whole video is available to be watched, this counts as a “Request”.
Q: I am an open access publisher. How can I use COUNTER reports if I cannot identify usage by individual institutions?
A: The definition of Institution_Name has been changed in 5.0.1 to include: For open access publishers and repositories, where it is not possible to identify usage by individual institutions, the usage should be attributed to “The World”. While the reports – especially the Standard Views – were developed with libraries as the audience in mind, that doesn’t mean the (Master) reports cannot be used for other audiences.
Q: Section 5 of the COUNTER Release 5 Code of Practice has a bullet which reads as follows:
“Usage must be processed for the entire month before any usage for that month can be included in reports. If usage for a given month is not available yet, no usage for that month must be returned and an exception included in the report/response to indicate partial data is being returned.”
Does this mean that Release 5 forbids any usage being delivered for a given month until after that month has ended and its data have been processed?
A: Yes. Reports must cover full months only (all or nothing); the partial-usage exception is used when a range of months is requested and not all of those months are available. For the current (incomplete) month, leave the cells empty; don’t put in a zero.
Q: The Code of Practice states: “Usage MUST be processed for the entire month before any usage for that month can be included in reports.” However, we’ve had a couple of clients ask for reporting twice a month (every two weeks) instead of the monthly reports. Could we provide both monthly and semi-monthly reports (or daily, or whatever frequency our client requests) and still be considered COUNTER compliant?
A: Vendors MUST provide the monthly reports to be COUNTER compliant; however, vendors can provide whatever “proprietary” reports they want, with whatever frequency they want. It would be good if these reports were identified as “customized” or “proprietary” so as not to accidentally cause issues by someone taking a one-day report and treating it as an entire month’s usage.
For example, the Report_Name and Report_ID could be changed to non-standard values to prevent the report being mistaken for a true COUNTER report.
Q: How often is the list of internet robots, crawlers and spiders updated and how quickly do content providers need to implement the new list?
A: The list is reviewed on a regular basis and a notice of updates is published in the COUNTER newsletter and on Twitter. Transactions with a user agent matching a name on the list must not be included in COUNTER reports.
Please contact firstname.lastname@example.org to let COUNTER know of any user agents that should be included in this list or to suggest other amendments.
The current list will be held on GitHub and should be pulled whenever usage is being processed. If you are re-processing old data, please pull the archived copy of the list that is relevant to that time period.
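The exclusion step can be sketched as follows. This is an illustration only: the robot list on GitHub is distributed as regular-expression patterns, but the three patterns below are simplified stand-ins, not entries from the real list.

```python
# Sketch: excluding transactions whose User-Agent matches a robot pattern
# before any usage is counted. Patterns here are placeholders.
import re

ROBOT_PATTERNS = [re.compile(p, re.IGNORECASE)
                  for p in [r"bot", r"crawler", r"spider"]]

def is_robot(user_agent):
    """True if any robot pattern matches anywhere in the User-Agent string."""
    return any(p.search(user_agent) for p in ROBOT_PATTERNS)

transactions = [
    {"ua": "Mozilla/5.0 (Windows NT 10.0)", "item": "article-1"},
    {"ua": "Googlebot/2.1", "item": "article-1"},
]
countable = [t for t in transactions if not is_robot(t["ua"])]
# Only the first transaction survives; the Googlebot hit is excluded.
```

When reprocessing historical data, the same filter would be run with the archived copy of the list that applied at the time.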
Q: The list of user agent strings outlined in Appendix H seems curiously uniform – all those listed have an uppercase codename, and a suffix of SCOCIT, SDICIT or SDIABS (which seems curious given the wide variety of possible search vendors). Are we interpreting this list correctly – in other words, if we see a search carried out by software with a user agent containing “COSMADRALI-SCOCIT”, do we count it as a federated search?
A: Yes. However, the appendix is for reference only and may not be complete. Furthermore, the list is not available in a machine-readable format. The Robots and Crawlers Working Group is working to improve the list and make it available via GitHub in a similar fashion to the robot exclusion list.
Q: A user has downloaded more content than they could possibly read, without registering for text mining. Should this usage be excluded from the reports?
A: The COUNTER Robots and Crawlers Working Group is working on guidelines to help content providers identify systematic mass downloads for exclusion. Publishers who wish to show customers a report of usage that was classified as “crawler/robot/abuse” can do so by introducing a custom element into their dataset (see section 11.3).
Q: The SUSHI examples on the SwaggerHub website (https://app.swaggerhub.com/apis/COUNTER/counter-sushi_5_0_api/1.0.0) give the impression that the SUSHI interface should return an error only if the user requested a date range including a month with only partial usage. Does COUNTER Release 5 require a report of zero usage for the month with the error code, in addition to reporting the other months in the requested range as usual?
A: There are two issues here. COUNTER does not allow for a month to have partial usage: it either has usage or it doesn’t. The “Partial Usage” error response occurs because one of the months requested did not have usage. When that happens, the month without usage should be excluded from the report. Do not output that month showing zero usage.
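A sketch of that trimming logic, under simplified assumptions (the field names are illustrative, not the exact COUNTER_SUSHI schema; 3040 is the SUSHI “Partial Data Returned” exception code):

```python
# Sketch: trim a requested range to the months with processed usage and
# flag the shortfall with an exception. Months without usage are omitted,
# never emitted as zeros.

def build_report(requested_months, processed):
    """requested_months: e.g. ['2019-08', '2019-09']; processed: {month: usage}."""
    available = [m for m in requested_months if m in processed]
    exceptions = []
    if len(available) < len(requested_months):
        exceptions.append({"Code": 3040,  # assumed: SUSHI "Partial Data Returned"
                           "Message": "Partial Data Returned"})
    return {"Exceptions": exceptions,
            "Usage": {m: processed[m] for m in available}}

report = build_report(["2019-08", "2019-09"], {"2019-08": 120})
# report["Usage"] == {"2019-08": 120}; one 3040 exception in the header
```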
Q: In the Report Response, Exception is included in the Report_Header element. If a user wants to exclude report_header using exclude_report_header=true, how should the Exceptions be handled?
A: Based on the current Swagger definition, a simple exception can be provided as “Default” (i.e. not a return code of 200). If “exclude_report_header” is true and an exception occurs, simply return the exception and don’t attempt to return the partial report – without the header it would be nearly impossible to tell how the report differs from the request.
Q: For the /reports request, report_filters and report_attributes are not specified in the Swagger API, although they are mentioned in the COUNTER Code of Practice. Will the Swagger API be updated?
A: COUNTER simplified the response for the /reports path response to eliminate the return of supported filters and attributes since these would also be included in the site’s Swagger/API documentation. The Swagger is correct, and we will update the COUNTER Code of Practice to match.
Q: For /reports, should the search parameter be based on a report_id such as TR, DR, or on a report_name such as “Title Master Report”?
A: Both should be supported (name and report ID).
Q: Is there a JSON schema for responses or a standard convention for naming? For example, metric_type or MetricType?
A: All element names in the response should be in the form Camel_Case.
Q: What are permitted element/value combinations in JSON reports?
A: The COUNTER_SUSHI Swagger file allows all combinations of elements that can occur and lists all values that are valid for an element, but not all elements are permitted in all reports, and not all combinations of values are permitted. For example, Data_Type and Section_Type are not permitted in the Book Standard Views, as set out in Section 4.3.2 of the Code of Practice, because the rules from the CoP also apply, even though the COUNTER_title_report object would allow them.
Q: Now that consortia reports are not required, what is the simplest way to retrieve reports for a complete consortium?
A: SUSHI implementation for Release 5 includes the requirement to provide a consortium with a list of their member sites and corresponding SUSHI credentials so that the consortium can pull the desired usage reports for each member.
COUNTER has made available a free tool which will help small to medium sized consortia gather COUNTER reports for their affiliated libraries. The R5 Harvester uses COUNTER_SUSHI to streamline harvesting COUNTER reports for all member institutions. https://www.projectcounter.org/r5_harvester/.
Q: What are the differences between the JSON format compared to the tabular reports?
A: For COUNTER reports in JSON format the header is in the Report_Header field (see the COUNTER_SUSHI API Specification for details). The differences compared to the header in the tabular reports are:
For COUNTER reports in JSON format the body is in the Report_Items field (see the COUNTER_SUSHI API Specification for details). The differences compared to the body in the tabular reports are:
Q: If an institution develops the capability of offline viewing, where data can be sent to users once they are back online, should these requests and/or deliveries be counted in COUNTER reports?
A: Yes. They should be counted for the month when the data are received, to avoid having to reprocess previously generated monthly statistics.
Q: Our institution is looking into future technological developments, including the possibility of automatically loading the next article when a user reaches the end of the one they are reading. How would this work from a COUNTER perspective? If, say, a user clicked on the HTML content of an article and, when they got to the end, the next article automatically loaded as HTML, should the next download be counted in the COUNTER reports?
A: Unless the user actively chooses to move to the next article, this should not be counted as either an Investigation or a Request.
Q: Do access denials count as a subset of investigations?
A: If a user successfully accesses ‘information related to a content item or a content item itself’, that counts as an investigation.
If a user successfully accesses the ‘content item itself’, that counts as a request. If a user attempts to access ‘information related to a content item or a content item itself’ but gets a denial message, that counts as a denial.
So if, for example, the user successfully reaches an abstract and then clicks on the full-content link and gets a denial message, that is one investigation and one denial. When the user gets a denial message without first accessing an abstract (they’ve clicked on a link to the full content item, e.g. from an index of articles), that is just one denial.
Q: What is the difference between Item_Requests and Item_Investigations?
A: There are several different types of usage metric in Release 5, which break down into investigations and requests.
An investigation is tracked when a user performs any action in relation to a content item or title, while a request relates specifically to viewing or downloading the full content item (full-text views or downloads). Please note that Requests are also counted as Investigations.
Q: What is the duration of a “session” in order to be assigned as unique?
A: A user session is defined in any of the following ways: by a logged session ID + transaction date; by a logged user ID (if users log in with personal accounts) + transaction date + hour of day (a day is divided into 24 one-hour slices); by a logged user cookie + transaction date + hour of day; or by a combination of IP address + user agent + transaction date + hour of day.
To allow for simplicity in calculating session IDs, when a session ID is not explicitly tracked, the day will be divided into 24 one-hour slices and a surrogate session ID will be generated by combining the transaction date + hour time slice + one of: user ID; cookie ID; or IP address + user agent. For example, consider the following transaction:
The above replacement for a session ID is not an exact analogue of a session. However, statistical studies show that using such a surrogate for a session ID yields unique counts within 1–2% of the unique counts generated with actual sessions.
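The surrogate-session rule above can be sketched in a few lines (an illustration of the combining rule, not a production implementation; here IP address + user agent is used as the user identifier):

```python
# Sketch: when no session ID is logged, combine transaction date, one-hour
# slice, and a user identifier into a surrogate session key.
from datetime import datetime

def surrogate_session(ts, ip, user_agent):
    """ts: transaction datetime. Returns a surrogate session identifier."""
    return f"{ts:%Y-%m-%d}|{ts.hour:02d}|{ip}|{user_agent}"

a = surrogate_session(datetime(2019, 8, 7, 14, 10), "10.0.0.5", "Mozilla/5.0")
b = surrogate_session(datetime(2019, 8, 7, 14, 55), "10.0.0.5", "Mozilla/5.0")
c = surrogate_session(datetime(2019, 8, 7, 15, 5), "10.0.0.5", "Mozilla/5.0")
# a == b: same hour slice, so the same surrogate session.
# c differs: the transaction falls in the next one-hour slice.
```

Note the hour-slice boundary effect this creates: two requests a few minutes apart can land in different surrogate sessions, which is the small over-counting discussed in the next question.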
Q: So, if a user accessed the full text of an article and then accessed the same article an hour and a half later, for example, it would count as two unique item requests?
A: That is correct in some cases, and this may result in a small amount of over counting.
Q: What are the best R4 & R5 reports to use for an overall usage total comparable over time?
A: Release 4 Journal Report 1 Period Total, minus Journal Report 1 GOA = Total_Item_Requests in TR_J1: Journal Requests
Q: When a user opens two different articles from the same issue of a journal, how are these counted?
A: This user behaviour counts as 2 Unique_Item_Requests and will also be added to the Total_Item_Requests.
Q: Is research funder public access in the reports also counted, e.g. papers that are freely available, but may not be Open access? (Otherwise known as research public access.)
A: OA Gold is defined as a content item which was immediately and permanently available as open access because an APC was paid by an author, their institution or their funder. This type of content is not included in TR_J1. Other types of open/freely available content are not excluded from TR_J1 and are counted as ‘Controlled’.
Q: If I filter TR_J3 Controlled Unique_Item_Requests, are the numbers the same as TR_J1 Unique Item_Requests?
A: Yes, because TR_J1 only includes Controlled usage and excludes Gold Open Access (GOA).
Q: For the TR_J4 report, could the spreadsheet be reconfigured to display in the same way as the JR5 report with the dates along the top rather than multiple lines for each title?
A: No, this is not possible because all the Release 5 reports follow a consistent format. However, in Excel you can pivot the tabular reports.
Q: We would like to use No_License metrics to make the decision whether to buy a journal archive (or not). Is there No_License data per YOP in the title master report?
A: Yes, YOP is a column heading in the Title Master Report, and No_License is one of the metrics shown.
Q: We are concerned about the lack of zero-usage information. We want titles that we subscribe to to appear in our reports even if usage is 0. Excluding them is likely to distort our usage graphs, because “no value” should really be zero.
A: COUNTER looks for ways to ensure that usage reporting is consistent and comparable across various providers and attempts to have a Code of Practice that can be implemented by the majority of publishers and analysed by the majority of libraries. Including zero usage for e-books and journals creates two challenges that make it impossible to offer comparable and consistent reporting.
KBART provides a way to match holdings against the reports, and while on the webinar, two organisations offered to share their templates and methodologies. We will contact them to follow up these kind offers.
Last updated: 3 February 2020