At Crossref and ROR, we develop and run processes that match metadata at scale, creating relationships between millions of entities in the scholarly record. Over the last few years, we’ve spent a lot of time diving into details about metadata matching strategies, evaluation, and integration. It is quite possibly our favourite thing to talk and write about! But sometimes it is good to step back and look at the problem from a wider perspective.
This year’s public data file is now available, featuring over 156 million metadata records from more than 19,000 members, deposited with Crossref through the end of April 2024. A full breakdown of Crossref metadata statistics is available here.
Like last year, you can download all of these records in one go via Academic Torrents or directly from Amazon S3 via the “requester pays” method.
Download the file: The torrent download can be initiated here.
Earlier this year, we reported on the roundtable discussion event that we organised in Frankfurt on the heels of the Frankfurt Book Fair 2023. This event was the second in a series of roundtables that we are holding with our community to hear from you about how we can all work together to preserve the integrity of the scholarly record. You can read more about insights from these events, and about ISR, in this series of blogs.
Crossref is undertaking a large program, dubbed 'RCFS' (Resourcing Crossref for Future Sustainability), that will initially tackle five specific issues with our fees. We haven’t increased any of our fees in nearly two decades, and while we’re still okay financially and do not have a revenue growth goal, we do have inclusion and simplification goals. This report from Research Consulting helped to narrow down the five priority projects for 2024-2025 around these three core goals:
We test a broad sample of DOIs to ensure resolution. For each journal crawled, we select a sample equal to 5% of the journal’s total DOIs, up to a maximum of 50 DOIs. The selected DOIs span prefixes and issues.
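The sample-size rule above can be sketched as follows. The function name is mine, and rounding the 5% share up to a whole DOI is an assumption; the source does not specify the rounding direction.

```python
import math

def sample_size(total_dois: int, rate: float = 0.05, cap: int = 50) -> int:
    """Number of DOIs to crawl for a journal: 5% of its total DOIs, capped at 50.

    Rounding up is an assumption; the documented rule only states "5% up to
    a maximum of 50 DOIs".
    """
    return min(math.ceil(total_dois * rate), cap)
```

For example, a journal with 200 DOIs yields a sample of 10, while one with 2,000 DOIs is capped at 50.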
The results are recorded in crawler reports, which you can access from the depositor report expanded view. If a title has been crawled, the last crawl date is shown in the appropriate column. Crawled DOIs that generate errors appear as bold links:
Click Last Crawl Date to view a crawler status report for a title:
The crawler status report lists the following:
Total DOIs: total number of DOI names for the title in the system as of the last crawl date
Checked: number of DOIs crawled
Confirmed: the crawler found both the DOI and the article title on the landing page
Semi-confirmed: the crawler found either the DOI or the article title on the landing page
Not Confirmed: the crawler found neither the DOI nor the article title on the landing page
Bad: the page contains known phrases indicating the article is not available (for example, "article not found" or "no longer available")
Login Page: the crawler is prompted to log in; no article title or DOI is found
Exception: indicates an error in the crawler code
httpCode: the resolution attempt returned an HTTP error code (such as 400, 403, 404, or 500)
httpFailure: the HTTP connection to the server failed
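The first three confirmation statuses follow directly from whether the crawler found the DOI and the article title on the landing page. A minimal sketch of that decision (the function is my own illustration; the error statuses Bad, Login Page, Exception, httpCode, and httpFailure come from other signals and are not modelled here):

```python
def classify(found_doi: bool, found_title: bool) -> str:
    """Map what the crawler found on a landing page to a confirmation status.

    Covers only the Confirmed / Semi-confirmed / Not Confirmed statuses;
    error conditions are reported separately in the crawler status report.
    """
    if found_doi and found_title:
        return "Confirmed"
    if found_doi or found_title:
        return "Semi-confirmed"
    return "Not Confirmed"
```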
Select any number to view details. To initiate a new crawl, select re-crawl and enter an email address.
Page owner: Isaac Farley | Last updated 2020-April-08