The Code of Practice for Research Data Usage Metrics standardizes the generation and distribution of usage metrics for research data, enabling for the first time the consistent and credible reporting of research data usage.
COUNTER welcomes input and feedback from the community on this first iteration, so that it can be further developed and refined.
This glossary is aligned as much as possible with the COUNTER Code of Practice Release 5 glossary.
|Access_Method||A COUNTER attribute indicating whether the usage related to investigations and requests was generated by a human user browsing and searching a website (Regular) or by a computer (Machine).|
|Collection||A curated collection of metadata about content items.|
|Component||A uniquely identifiable constituent part of a content item composed of more than one file (digital object).|
|Content item||A generic term describing a unit of content accessed by a user of a content host. Typical content items include articles, books, chapters, datasets, multimedia, etc.|
|Content provider||An organization whose function is to commission, create, collect, validate, host, distribute, and trade information in electronic form.|
|Creator(s)||The person/people who wrote/created the datasets whose usage is being reported.|
|Data repository||A content provider that provides access to research data.|
|Data type||The field identifying the type of content. The Code of Practice for Research Data Usage Metrics recognizes only the Data type Dataset.|
|Dataset||An aggregation of data, published or curated by a single agent, and available for access or download in one or more formats, with accompanying metadata. Other term: data package.|
|Description||A short description of a dataset. Accessing the description falls into the usage category of Investigations.|
|DOI (digital object identifier)||The digital object identifier is a means of identifying a piece of intellectual property (a creation) on a digital network, irrespective of its current location (IDF).|
|Double-click||A repeated click or repeated access to the same resource by the same user within a period of 30 seconds. COUNTER requires that double-clicks must be counted as a single click.|
|Host types||A categorization of Content Providers used by COUNTER. The Code of Practice for Research Data Usage Metrics uses only one host type: Data Repository.|
|Internet robot, crawler, spider||An identifiable, automated program or script that visits websites and systematically retrieves information from them, often to provide indexes for search engines rather than for research. Not all programs or scripts are classified as robots.|
|Investigation||A category of COUNTER metric types that represent a user accessing information related to a dataset (i.e. a description or detailed descriptive metadata) or the content of the dataset itself.|
|Log file analysis||A method of collecting usage data in which the web server records all of its transactions.|
|Machine||A category of COUNTER Metric Types that represents a machine accessing content, e.g. a script written by a researcher. This does not include robots, crawlers and spiders.|
|Master reports||Reports that contain additional filters and breakdowns beyond those included in the standard COUNTER reports.|
|Metadata||A series of textual elements that describes a content item but does not include the item itself. For example, metadata for a dataset would typically include publisher, a list of names and affiliations of the creators, the title and description, and keywords or other subject classifications.|
|Metric types, Metric_Type||An attribute of COUNTER usage that identifies the nature of the usage activity.|
|ORCID (Open Researcher and Contributor ID)||An international standard identifier for individuals (i.e. authors) to use with their name as they engage in research, scholarship, and innovation activities.|
|Persistent Identifier (PID)||Globally unique identifier and associated metadata for research data, or other entities (articles, researchers, scholarly institutions) relevant in scholarly communication.|
|Platform||An interface from an aggregator, publisher, or other online service that delivers the content to the user and that counts and provides the COUNTER usage reports.|
|Provider ID||A unique identifier for a Content Provider, used by discovery services and other content sites to track usage for content items provided by that provider.|
|Publication date, Publication_Date||An optional field in COUNTER item reports and Provider Discovery Reports. The date of release by the publisher to customers of a content item.|
|Publisher||An organization whose function is to commission, create, collect, validate, host, distribute and trade information online and/or in printed form.|
|Regular||A COUNTER Access_Method. Indicates that usage was generated by a human user browsing/searching a website, rather than by a computer.|
|Reporting period, Reporting_Period||The total time period covered in a usage report.|
|Request||A category of COUNTER Metric Types that represents a user accessing the dataset content.|
|Session||A successful request of an online service. A session begins when a user connects to the service or database and ends by terminating activity that is either explicit (by leaving the service through exit or logout) or implicit (timeout due to user inactivity) (NISO).|
|SUSHI||An international standard (ANSI/NISO Z39.93) that describes a method for automating the harvesting of usage reports. The Research Data SUSHI API Specification is an implementation of this standard for harvesting Code of Practice for Research Data Usage Metrics reports.|
|Total_Dataset_Investigations||A COUNTER Metric_Type that represents the number of times users accessed the content of a dataset, or information describing that dataset (i.e. metadata).|
|Total_Dataset_Requests||A COUNTER Metric_Type that represents the number of times users requested the content of a dataset. Requests may take the form of viewing, downloading, or emailing the dataset provided such actions can be tracked by the content provider’s server.|
|Transactions||A usage event.|
|Unique_Dataset_Investigations||A COUNTER Metric_Type that represents the number of unique datasets investigated in a user session.|
|Unique_Dataset_Requests||A COUNTER Metric_Type that represents the number of unique datasets requested in a user session.|
|User||A person who accesses the online resource.|
|User agent||An identifier that is part of the HTTP/S protocol that identifies the software (i.e. browser) being used to access the site. May be used by robots to identify themselves.|
|Version||Multiple versions of a dataset are defined by significant changes to the content and/or metadata, associated with changes in one or more components.|
|Year of publication||Calendar year in which a dataset is published.|
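Several of the counting rules defined above (double-click filtering, robot exclusion, and the distinction between Total and Unique metrics) can be illustrated in a short sketch. The log-event format, helper names, and the simplistic user-agent check below are illustrative assumptions, not part of the Code of Practice; a production implementation would use the maintained COUNTER robot lists and the repository's own log schema.

```python
from datetime import datetime, timedelta

# A COUNTER double-click: repeated access to the same dataset by the same
# session within 30 seconds is counted as a single request.
DOUBLE_CLICK_WINDOW = timedelta(seconds=30)

# Simplistic stand-in for the COUNTER robot/crawler exclusion lists.
ROBOT_MARKERS = ("bot", "crawler", "spider")

def is_robot(user_agent):
    ua = user_agent.lower()
    return any(marker in ua for marker in ROBOT_MARKERS)

def count_requests(events):
    """Return (Total_Dataset_Requests, Unique_Dataset_Requests).

    `events` is an iterable of (timestamp, session_id, dataset_id,
    user_agent) tuples -- an assumed log format. Double-clicks are
    collapsed to one request (keeping the later click's timestamp, per
    the COUNTER rule) and robot traffic is excluded entirely.
    """
    last_seen = {}  # (session_id, dataset_id) -> timestamp of last click
    total = 0
    unique = set()  # distinct (session, dataset) pairs -> Unique metric
    for ts, session_id, dataset_id, user_agent in sorted(events):
        if is_robot(user_agent):
            continue
        key = (session_id, dataset_id)
        prev = last_seen.get(key)
        last_seen[key] = ts
        if prev is None or ts - prev >= DOUBLE_CLICK_WINDOW:
            total += 1
            unique.add(key)
    return total, len(unique)
```

For example, three clicks on the same dataset in one session at 0 s, 10 s, and 50 s yield two Total_Dataset_Requests (the 10 s click is a double-click) but only one Unique_Dataset_Requests for that session.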
This is the first version of a Code of Practice for Research Data Usage Metrics. The purpose of this report is to enable data repositories and data providers to produce consistent, comparable, and credible usage metrics for research data. This first release of the Code of Practice for Research Data Usage Metrics has been kept intentionally narrow in scope to focus on the dataset level and avoid creating unnecessary hurdles to adoption.
The purpose of the Code of Practice for Research Data Usage Metrics is to facilitate the recording, exchange, and interpretation of online usage data by establishing open standards and protocols for the provision of content-provider-generated usage statistics that are consistent, comparable, and credible.
This Code of Practice for Research Data Usage Metrics is aligned with the COUNTER Code of Practice Release 5 and provides a framework for recording and exchanging online usage statistics for research data at an international level. It covers the following areas: data elements to be measured; definitions of these data elements; content and format of usage reports; requirements for data processing; and guidelines to avoid duplicate counting.
Developed by members of the research data management (RDM) community in close coordination with COUNTER, this Code of Practice for Research Data Usage Metrics follows the COUNTER Code of Practice Release 5 (COUNTER Code of Practice Release 5, 2017) recommendations as much as possible (where relevant) and deviates from them only when necessary.
There are different use cases and practices between research data and the majority of scholarly resources. For example, research data does not need to be reported at the institutional level, but geographic aggregation may be important. Another significant difference is the need for aggregation of usage across components and across all versions of a dataset. It is common practice for research data to be versioned, and we recommend reporting the usage data for each specific version and the combined usage for all versions.
The first release of the Code of Practice for Research Data Usage Metrics only describes reporting of usage at the dataset level. For future releases, reporting usage statistics for components will be considered based on community feedback. Following the COUNTER Code of Practice Release 5, standard usage statistics are not reported by format distribution, e.g., no separate numbers for downloads in CSV and XLSX formats.
Download volume (i.e., file size) can be reported. There are widely varying practices in the research data community regarding the granularity and structure of datasets, components, and collections. Reporting download volume makes it easier to compare usage for data packaged into datasets with different granularity.
Geolocation information and country are reported, but not IP addresses. For large countries (e.g. United States) reporting at the state or province level may be enabled. Reporting of geolocation information helps to better understand usage for the same datasets hosted in multiple locations, and for datasets where usage is dependent upon the location of the user, e.g., datasets describing research in a particular geolocation.
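One way to satisfy this rule is to resolve the client IP to a country code at processing time and store only the country, never the raw IP. The sketch below uses a tiny in-memory lookup table as a stand-in for a real geolocation database (such as MaxMind's GeoLite2); the network ranges, field names, and helper names are all illustrative assumptions.

```python
import ipaddress

# Hypothetical IP-range-to-country table; a real implementation would
# query a maintained geolocation database instead.
GEO_TABLE = [
    (ipaddress.ip_network("128.0.0.0/8"), "US"),
    (ipaddress.ip_network("134.0.0.0/8"), "DE"),
]

def country_for(ip_string):
    """Return an ISO country code for an IP address, or None if unknown."""
    ip = ipaddress.ip_address(ip_string)
    for network, country in GEO_TABLE:
        if ip in network:
            return country
    return None

def anonymized_event(raw_event):
    """Replace the 'ip' field of a usage event with a country code,
    so the stored record never contains the IP address itself."""
    event = dict(raw_event)
    event["country"] = country_for(event.pop("ip"))
    return event
```

After this step the stored event carries only the country (and, if enabled, a state or province), which is enough for the geographic aggregation described above.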
Usage metrics are reported for each specific version of a dataset, as well as the combined usage for all versions. Usage metrics are only reported for individual datasets. In this version of the Code of Practice for Research Data Usage Metrics there is no report format for reporting usage for collections of datasets, for example all datasets in a data repository.
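Per-version reporting plus a combined figure can be sketched as a simple aggregation; the input structure and the "all-versions" key below are illustrative assumptions, not report fields mandated by the Code of Practice.

```python
from collections import Counter

def usage_with_combined(per_version_counts):
    """per_version_counts maps version -> {metric_type: count}.

    Returns the same mapping plus an "all-versions" entry that sums each
    metric across versions. Note: simply summing Unique_* metrics across
    versions can overcount sessions that touched several versions; a real
    implementation would deduplicate at the session level before counting.
    """
    combined = Counter()
    for counts in per_version_counts.values():
        combined.update(counts)
    report = dict(per_version_counts)
    report["all-versions"] = dict(combined)
    return report
```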
The Code of Practice for Research Data Usage Metrics will evolve in response to the demands of the international library, data management, and research communities. It is continually under review; feedback on its scope and application is actively sought from all interested parties.
The Code of Practice for Research Data Usage Metrics is developed by the Make Data Count project (Make Data Count, 2017) in close collaboration with Counter Online Metrics (COUNTER) (Project COUNTER, 2002), a non-profit organization that maintains the COUNTER Code of Practice.
The Code of Practice for Research Data Usage Metrics will be extended and upgraded as necessary, based on input from the communities it serves. Future versions might be integrated into the COUNTER Code of Practice. A continuous maintenance process will allow the Code of Practice for Research Data Usage Metrics to evolve over time, minimizing the need for major version changes.
No content provider following the Code of Practice for Research Data Usage Metrics has been audited at the time of this first release.
While we expect the auditing process for research data usage reporting to be similar to audits in the context of the COUNTER Code of Practice Release 5, it is not yet known which organizations are willing to perform audits according to the Code of Practice for Research Data Usage Metrics, or how these audits would differ from COUNTER Code of Practice Release 5 audits. For these reasons, audits of usage reporting according to the Code of Practice for Research Data Usage Metrics are not required at this point in time.
Statistical reports or data that reveal information about individual users will not be released or sold by content providers without the permission of that individual user, the consortium, and its member institutions (ICOLC Guidelines for Statistical Measures of Usage of Web-Based Information Resources, 1998; revised 2001 and 2006).
The Code of Practice for Research Data Usage Metrics builds on several existing industry initiatives and standards that address content-provider-based online performance measures. In addition to the COUNTER Code of Practice, these include the Scholix Schema for the Exchange of Scholarly Communication Links (Burton et al., 2017) and the NISO Alternative Assessment Metrics Project (NISO RP-25-2016: Outputs of the NISO Alternative Assessment Metrics Project, 2016).
Where appropriate, definitions of data elements and other terms from these sources have been used in this Code of Practice for Research Data Usage Metrics, and these are identified in Appendix A.
This is the first release of the Code of Practice for Research Data Usage Metrics.
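As an illustration of the harvesting side, a client might construct a request against a repository's Research Data SUSHI endpoint as sketched below. The base URL is a placeholder, and while the `/reports/dsr` path and the `begin_date`/`end_date` parameters follow COUNTER SUSHI conventions, the exact paths and parameters should be taken from the Research Data SUSHI API Specification and the target repository's documentation.

```python
from urllib.parse import urlencode

def dataset_report_url(base_url, begin_date, end_date, platform=None):
    """Build a URL for fetching a dataset usage report over a SUSHI-style
    REST API. Dates use YYYY-MM-DD; `platform` is an optional filter.
    The /reports/dsr path is an assumption based on COUNTER SUSHI naming."""
    params = {"begin_date": begin_date, "end_date": end_date}
    if platform:
        params["platform"] = platform
    return f"{base_url.rstrip('/')}/reports/dsr?{urlencode(params)}"
```

A harvester would issue an HTTP GET against the resulting URL and parse the JSON report it returns.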