Altmetrics for the Information Professional: a Primer

Recommended publications
Researcher ID/Publons – 2019 Instructions
ResearcherID has now been integrated into the Publons platform: https://publons.com/account/login/. Publons is similar to ResearcherID: it is a platform that records, verifies, and showcases the peer review contributions of researchers. Publons integrates with Web of Science and with records imported from ORCID, EndNote, or manual uploads, and allows you to track citations to your research outputs that are indexed in Web of Science. Please note that publications entered into Publons manually from journals outside Web of Science will not have their citation metrics tracked.

Your Account

Existing Account: If you already have an account for ResearcherID or Web of Science, you can log in at https://publons.com/account/login/ using the same username and password. The publications you previously added to ResearcherID have automatically been added to your Publons account.

Create a New Account: If you do not already have a ResearcherID, create a Publons account here: https://publons.com/account/register/. You will receive a ResearcherID overnight from Publons, which you can then attach to your publications in Web of Science.

Adding Publications

Once logged into your account, you can load your publications by importing them from Web of Science, your ORCID account, or an EndNote library, or manually using identifiers such as DOIs. (Note that citation metrics will not be tracked for records that do not appear in Web of Science.)

Loading Web of Science publications: Select "Publications" under My Records, then select "Import Publications". Publons will automatically search Web of Science for publications that match the email addresses and publishing names you listed in your profile.
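When importing publications manually by DOI, the same DOI often arrives in several spellings (with or without a resolver prefix, in mixed case). A minimal, hypothetical normalization helper, not part of Publons itself, might look like this; the function name and prefix list are assumptions:

```python
def normalize_doi(raw: str) -> str:
    """Reduce common DOI spellings to the bare identifier, e.g. '10.1234/abc'."""
    doi = raw.strip()
    # Strip resolver prefixes such as https://doi.org/ or doi:
    for prefix in ("https://doi.org/", "http://doi.org/",
                   "https://dx.doi.org/", "http://dx.doi.org/", "doi:"):
        if doi.lower().startswith(prefix):
            doi = doi[len(prefix):]
            break
    # DOIs are case-insensitive, so lowercase for deduplication
    return doi.lower()
```

Deduplicating on the normalized form prevents the same article being imported twice under two spellings.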
Exploratory Analysis of Publons Metrics and Their Relationship with Bibliometric and Altmetric Impact
José Luis Ortega, Institute for Advanced Social Studies (IESA-CSIC), Córdoba, Spain, [email protected]

Abstract

Purpose: This study aims to analyse the metrics provided by Publons about the scoring of publications and their relationship with impact measurements (bibliometric and altmetric indicators).

Design/methodology/approach: In January 2018, 45,819 research articles were extracted from Publons, including all their metrics (scores, number of pre- and post-publication reviews, reviewers, etc.). Using the DOI, further metrics were gathered from altmetric providers to compare the Publons scores of these publications with their bibliometric and altmetric impact in PlumX, Altmetric.com and Crossref Event Data (CED).

Findings: The results show that (1) there are important biases in the coverage of Publons across disciplines and publishers; (2) metrics from Publons present several problems as research evaluation indicators; and (3) correlations between bibliometric and altmetric counts and the Publons metrics are very weak (r < .2) and not significant.

Originality/value: This is the first study of Publons metrics at the article level and their relationship with other quantitative measures such as bibliometric and altmetric indicators.

Keywords: Publons, Altmetrics, Bibliometrics, Peer review

1. Introduction

Traditionally, peer review has been the most widely accepted way to validate scientific advances. Since the beginning of the scientific revolution, scientific theories and discoveries have been discussed and agreed upon by the research community as a way to confirm and accept new knowledge. This validation process has survived to the present day as a suitable tool for selecting the most relevant manuscripts for academic journals, allocating research funds, and selecting and promoting scientific staff.
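The weak correlations the study reports (r < .2) are plain Pearson coefficients between score and count variables. A self-contained sketch of the computation, with made-up toy values rather than the study's data:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy example: Publons scores vs. citation counts (illustrative values only)
scores = [7.0, 8.5, 6.0, 9.0, 7.5]
citations = [3, 40, 12, 5, 22]
r = pearson_r(scores, citations)
```

A value of r near zero, as in the study, means the review scores carry almost no linear information about later citation or altmetric counts.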
ORCID: Connecting the Research Community (April 30, 2020)

Introductions

Shawna Sadler (https://orcid.org/0000-0002-6103-5034), Engagement Manager, Americas, ORCID
Sheila Rabun (https://orcid.org/0000-0002-1196-6279), ORCID US Community Specialist, LYRASIS
Lori Ann M. Schultz (https://orcid.org/0000-0002-1597-8189), Sr. Director of Research, Innovation & Impact, University of Arizona

Agenda

1. What is ORCID?
2. ORCID US Community Consortium
3. Research Impact & Global Connections
4. ORCID for Research Administrators
5. Questions

What is ORCID?

ORCID's vision is a world where all who participate in research, scholarship, and innovation are uniquely identified and connected to their contributions and affiliations across time, disciplines, and borders.

History

- ORCID was first announced in 2009
- A collaborative effort by the research community "to resolve the author name ambiguity problem in scholarly communication"
- Independent non-profit organization
- Began offering services in 2012

ORCID is a non-profit organization that provides:
1. ORCID iDs for people
2. ORCID records for people
3. Infrastructure to share research data between organizations

ORCID for Researchers

A free, unique identifier, e.g. Sofia Maria Hernandez Garcia, ORCID iD https://orcid.org/0000-0001-5727-2427.

What is ORCID? (video): https://vimeo.com/97150912

ORCID for Research Organizations

1) The researcher creates an ORCID iD and 2) populates their record. All records are saved in the ORCID Registry, and the member API transfers data to your CRIS system.
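An ORCID iD is not an arbitrary number: its final character is a checksum computed with ISO 7064 MOD 11-2 over the first fifteen digits, which lets systems catch mistyped iDs before querying the registry. A minimal validator sketch:

```python
def valid_orcid(orcid: str) -> bool:
    """Check an ORCID iD's final character using the ISO 7064 MOD 11-2 checksum."""
    digits = orcid.replace("-", "")
    if len(digits) != 16:
        return False
    total = 0
    for ch in digits[:-1]:
        if not ch.isdigit():
            return False
        total = (total + int(ch)) * 2
    remainder = total % 11
    check = (12 - remainder) % 11
    # A remainder yielding 10 is written as the letter X
    expected = "X" if check == 10 else str(check)
    return digits[-1].upper() == expected

valid_orcid("0000-0001-5727-2427")  # True: the example iD above
```

Any single-digit typo changes the expected check character, so the iD fails validation rather than silently pointing at the wrong researcher.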
Inter-Rater Reliability and Convergent Validity of F1000prime Peer Review
Accepted for publication in the Journal of the Association for Information Science and Technology

Inter-rater reliability and convergent validity of F1000Prime peer review

Lutz Bornmann, Division for Science and Innovation Studies, Administrative Headquarters of the Max Planck Society, Hofgartenstr. 8, 80539 Munich, Germany. Email: [email protected]

Abstract

Peer review is the backbone of modern science. F1000Prime is a post-publication peer review system for the biomedical literature (papers from medical and biological journals). This study is concerned with the inter-rater reliability and convergent validity of the peer recommendations formulated in the F1000Prime peer review system. The study is based on around 100,000 papers with recommendations from Faculty members. Even though intersubjectivity plays a fundamental role in science, the analyses of the reliability of the F1000Prime peer review system show a rather low level of agreement between Faculty members. This result is in agreement with most other studies that have been published on the journal peer review system. Logistic regression models are used to investigate the convergent validity of the F1000Prime peer review system. As the results show, the proportion of highly cited papers among those selected by the Faculty members is significantly higher than expected. In addition, better recommendation scores are also associated with better performance of the papers.

Keywords: F1000; F1000Prime; Peer review; Highly cited papers; Adjusted predictions; Adjusted predictions at representative values

1 Introduction

The first known cases of peer review in science were undertaken in 1665 for the journal Philosophical Transactions of the Royal Society (Bornmann, 2011). Today, peer review is the backbone of science (Benda & Engels, 2011); without a functioning and generally accepted evaluation instrument, the significance of research could hardly be evaluated.
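The convergent-validity question, whether recommended papers are highly cited more often than chance would predict, can be illustrated without the paper's full logistic regression models. This is a deliberately crude sketch of the underlying comparison, not Bornmann's actual analysis:

```python
def excess_highly_cited(recommended_flags, highly_cited_flags):
    """Difference between the share of highly cited papers among recommended
    papers and the share in the whole set. Positive values mean recommended
    papers are overrepresented among the highly cited."""
    paired = list(zip(recommended_flags, highly_cited_flags))
    overall = sum(h for _, h in paired) / len(paired)
    among = [h for r, h in paired if r]
    return sum(among) / len(among) - overall

# Toy flags: 1 = recommended / highly cited, 0 = not (illustrative only)
rec = [1, 1, 1, 0, 0, 0, 0, 0]
hot = [1, 1, 0, 1, 0, 0, 0, 0]
delta = excess_highly_cited(rec, hot)
```

In the study's terms, a significantly positive difference of this kind is what supports convergent validity; the regression models additionally control for covariates such as publication year.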
Web of Science, Scopus, & Altmetrics
Measure Your Research Impact: Author Impact Indices & Other Tools
Open Access Week 2017 – Theme: "Open in Order To…"
October 25, 2017

About this session

We will explore the importance of academic reputation, explaining your research to a wider audience, measuring the impact of activities that spread the word about your publications, and what you can do to enhance your own impact.

Defining Impact

"… the beneficial application of research to achieve social, economic, environmental and/or cultural outcomes… impact in the academic domain, which is seen more as an indicator of the intrinsic quality of the research on scholarly or academic measures" (Australian Research Quality Framework, 2006)

A host of social networking platforms designed specifically for scholars now exists.

Author Identification and Measuring Impact

Establishing a unique author/researcher identity is an important step toward improving your research visibility and impact. There are a variety of options for creating a unique identity, with ORCID being the latest development. ORCID is well supported by many publishers. ORCID is a central registry of unique identifiers for individual researchers and an open and transparent linking mechanism between ORCID and other current author ID schemes, which include:

- ResearcherID – linked to Thomson Reuters' Web of Knowledge platform
- My Citations – Google Scholar
- Author Identifier – Scopus
- arXiv author identifiers

ORCID iDs are unique researcher identifiers designed to provide a transparent method for linking researchers and contributors to their activities and outputs. arXiv allows you to link your ORCID iD with your arXiv account. Organizational names suffer from ambiguities similar to those described for researcher names.
ORCID: Building Academic Trust
Research Ethics – STM Publishing and China Publishing Training Day
Beijing, 26 August 2015
Nobuko Miyairi, Regional Director, Asia Pacific
[email protected] | http://orcid.org/0000-0002-3229-5662

Publishing Ethics

Multiple dimensions of publishing ethics:
- Data fabrication (honest oversight, or intentional)
- Plagiarism
- Duplicate submissions
- Authorship (ghost/guest/gift)
- Review misconduct
- Citation ethics

Honest oversight

"Dear Editor, In [month] [year], your journal published my colleagues' and my article. Since the publication, we have been working on follow-up analysis using the same database. When new results were implausible, we undertook an intensive investigation… we found we had failed to include 8 files of data in the original dataset. This mistake resulted in the under-reporting of xxx… The mistake occurred despite intensive quality checks. We sincerely apologize for the issues and would like to ask for your cooperation in correcting the published article." (This letter is fictional.)

Intentional? Fabrication & plagiarism

- Haruko Obokata, Japanese scientist
- First claimed to have developed a radical and remarkably easy way to make stimulus-triggered acquisition of pluripotency (STAP) cells that could be grown into tissue for use anywhere in the human body
- Published in Nature, January 2014
- The results could not be replicated
- RIKEN eventually launched an investigation in response to allegations of irregularities in images
- After failing her own replication study, Obokata resigned from RIKEN
- The scandal has become one of the world's best-known scientific frauds, alongside the Schön scandal and Hwang Woo-suk's cloning experiments

"Experts pointed out that Obokata possibly copied and pasted passages on stem cells from the U.S. …"
Validity of Altmetrics Data for Measuring Societal Impact
Accepted for publication in the Journal of Informetrics

Validity of altmetrics data for measuring societal impact: A study using data from Altmetric and F1000Prime

Lutz Bornmann, Division for Science and Innovation Studies, Administrative Headquarters of the Max Planck Society, Hofgartenstr. 8, 80539 Munich, Germany. E-mail: [email protected]

Abstract

Can altmetric data be validly used for the measurement of societal impact? The current study seeks to answer this question with a comprehensive dataset (about 100,000 records) from very disparate sources (F1000, Altmetric, and an in-house database based on Web of Science). In the F1000 peer review system, experts attach particular tags to scientific papers which indicate whether a paper could be of interest for science or rather for other segments of society. The results show that papers with the tag "good for teaching" do achieve higher altmetric counts than papers without this tag, if the quality of the papers is controlled. At the same time, a higher citation count is shown especially by papers with a tag that is specifically scientifically oriented ("new finding"). The findings indicate that papers tailored for a readership outside the area of research should lead to societal impact.

If altmetric data is to be used for the measurement of societal impact, the question arises of its normalization. In bibliometrics, citations are normalized for the papers' subject area and publication year. This study has taken a second analytic step involving a possible normalization of altmetric data. As the results show, there are particular scientific topics which are of special interest to a wide audience.
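The normalization step the abstract describes mirrors bibliometric field normalization: divide each paper's raw count by the mean count of papers in the same subject area and publication year. A minimal sketch with made-up records (field names and counts are illustrative, not the study's data):

```python
from collections import defaultdict

def normalized_scores(papers):
    """Divide each paper's altmetric count by the mean count of its
    (field, year) group, the same idea as bibliometric field normalization."""
    groups = defaultdict(list)
    for p in papers:
        groups[(p["field"], p["year"])].append(p["count"])
    means = {k: sum(v) / len(v) for k, v in groups.items()}
    return [p["count"] / means[(p["field"], p["year"])] for p in papers]

# Toy records: a count of 20 in a group averaging 10 normalizes to 2.0
papers = [
    {"field": "bio", "year": 2013, "count": 20},
    {"field": "bio", "year": 2013, "count": 0},
    {"field": "soc", "year": 2013, "count": 5},
]
```

A normalized score above 1 then means "more attention than the average paper of its field and year", which makes counts comparable across topics with very different baseline audiences.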
Author Records and Author Search in Web of Science
New Author Records and Author Search in Web of Science
Pioneering the world of research for more than 50 years
Bob Green, Solution Specialist
October 2019

Agenda

1. Why the Change?
2. What has Changed?
3. Features and Functionality
4. Workflows/Scenarios
5. FAQs
6. Additional Resources

Why the change? The Power of the Group

"The Web of Science Group is on a journey of transformation and innovation to support a more holistic and researcher-centric workflow."

- Help researchers track more of their impact and own their online identity.
- Deliver the highest-quality disambiguated author data in Web of Science… and the world.
- Bring the highest-quality author data into the Web of Science Group's other solutions.
- Make the Web of Science Group's solutions the most trusted resources for confident discovery of an author's published work, as well as assessment of their output and associated impact.

What has changed?

We are enhancing the quality of author disambiguation and the accessibility of author data in Web of Science, while giving researchers ownership of their Author Record via Publons. Features will release in beta:

- A fully re-imagined Author Search.
- A new Author Record view.
- The ability for Web of Science users to submit feedback to correct publication records.
- An enhanced author disambiguation algorithm that suggests author records and learns from user feedback.
- The ability for researchers to claim ownership of their Web of Science Author Record via Publons.

What is author disambiguation?

Name ambiguity is a frequently encountered problem in the scholarly community: the same person may appear under several name variants (for example, also published as "Avram Noam Chomsky" or "N. …").
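A common first step in author disambiguation is a blocking key, such as surname plus first initial, that groups candidate name variants before any deeper matching. This toy sketch is only an illustration of the idea, not Web of Science's actual algorithm:

```python
def blocking_key(name: str) -> str:
    """Group author name variants by surname + first initial.
    Assumes 'Given [Middle ...] Surname' name order."""
    parts = name.replace(".", "").split()
    surname, first = parts[-1], parts[0]
    return f"{surname.lower()}_{first[0].lower()}"

# Variants starting with the same given name collapse to one key:
blocking_key("Avram Noam Chomsky")  # 'chomsky_a'
blocking_key("A. Chomsky")          # 'chomsky_a'
```

Note the limitation this immediately exposes: "N. Chomsky" (publishing under a middle name) lands in a different block, which is why production systems also weigh affiliations, co-authors, and subject areas, and why the slide deck's learning-from-feedback step matters.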
Adapting to Plan S: Experiences from an Open Science Publisher
Adapting to Plan S: experiences from an open science publisher
Hannah Wilson, Associate Publisher, F1000Research
[email protected] | @hannahluwilson

1. What is F1000Research?
2. How are publishers adapting?
3. How can open publishers add value?
4. How can we look beyond the APC model?

What is F1000Research?

- Open access: CC-BY; APC model
- Open peer review: author-led; no blinding; citable reviews; co-reviewing
- Open data: FAIR data principles; enforced before acceptance

How are publishers adapting to Plan S?

Transformative agreements. Limitations: budgets are still eaten up by big publishers, and the APC model is not accessible to all. What does this mean for smaller, open publishers?

What value can we add?

Commitment to all aspects of open science:
- Open data: at F1000Research we enforce our data policy strictly, which is rarely seen in publications outside the data field (e.g. Scientific Data and GigaScience). The Journal of Open Source Software is an example of community-led innovation on this front.
- Transparent article evaluation: at F1000Research, open peer review means no blinding and publication of reports. We also have no editorial bias in the publication process: no rejections on impact or novelty.

How can we look beyond the current APC model?

- Institutional agreements
- Partnering with funders
- Partnering with underrepresented communities

Summary

Transformative agreements are an initial step. Open publishers must show their worth and add value. As an industry we must look beyond the APC model.

© 2000-2019 Faculty of 1000 Ltd.
ORCID: What, Why, How?
Harvard-Smithsonian Center for Astrophysics
October 21, 2015
Alice Meadows, Director of Communications
[email protected] | http://orcid.org/0000-0003-2161-3781 | #alicejmeadows
Contact: +1-301-922-9062 | 10411 Motor City Drive, Suite 750, Bethesda, MD 20817 USA

How can you stand out in the crowd, and ensure your work is unambiguously connected to you? How can you minimize your reporting and admin burden, and more easily comply with mandates?

What is ORCID?

- ORCID provides a persistent digital identifier that uniquely distinguishes each researcher.
- Through integration in key research workflows such as manuscript and grant submission, ORCID supports automated links between researchers and their professional activities, ensuring that work is appropriately attributed and discoverable.

ORCID is… a registry

- A free, non-proprietary registry of persistent, unique, public identifiers for researchers
- A community-led initiative supported by member fees
- Open data, software, APIs, and documentation

ORCID is… a hub

Together with other identifiers (DOI, URI, thesis ID, ISBN, FundRef ID, grant ID, ISNI, ResearcherID, Ringgold ID, Scopus Author ID, arXiv and internal identifiers), ORCID enables machine-readable connections with works, organizations, and person IDs, linking publishers, funders, higher education, repositories, professional associations, and employers.

ORCID is… interdisciplinary: science, medicine …
Open Science and the Role of Publishers in Reproducible Research
Authors: Iain Hrynaszkiewicz (Faculty of 1000), Peter Li (BGI), Scott Edmunds (BGI)

Chapter summary

Reproducible computational research is and will be facilitated by the wide availability of scientific data, literature and code which is freely accessible and, furthermore, licensed such that it can be reused, integrated and built upon to drive new scientific discoveries without legal impediments. Scholarly publishers have an important role in encouraging and mandating the availability of data and code according to community norms and best practices, and developing innovative mechanisms and platforms for sharing and publishing products of research, beyond papers in journals. Open access publishers, in particular the first commercial open access publisher BioMed Central, have played a key role in the development of policies on open access and open data, and in increasing the use by scientists of legal tools (licenses and waivers) which maximize reproducibility. Collaborations between publishers and funders of scientific research are vital for the successful implementation of reproducible research policies.

The genomics and, latterly, other 'omics communities have historically been leaders in the creation and wide adoption of policies on public availability of data. This has been through policies, such as the Fort Lauderdale and Bermuda Principles; infrastructure, such as the INSDC databases; and incentives, such as conditions of journal publication. We review some of these policies and practices, and how these events relate to the open access publishing movement. We describe the implementation and adoption of licenses and waivers prepared by Creative Commons in science publishing, with a focus on licensing of research data published in scholarly journals and data repositories.
Exploring the Relevance of ORCID As a Source of Study of Data Sharing Activities at the Individual-Level: a Methodological Discussion
Paper accepted for publication in Scientometrics

Sixto-Costoya, Andrea (1,2); Robinson-Garcia, Nicolas (3); van Leeuwen, Thed (4); Costas, Rodrigo (4,5)

1,2 Department of History of Science and Information Science, School of Medicine, University of Valencia, Valencia, Spain; UISYS Research Unit (CSIC – University of Valencia), Valencia, Spain; 3 Delft Institute of Applied Mathematics, Delft University of Technology, Delft, Netherlands; 4 Centre for Science and Technology Studies (CWTS), Leiden University, Leiden, The Netherlands; 5 DST-NRF Centre of Excellence in Scientometrics and Science, Technology and Innovation Policy, Stellenbosch University, Stellenbosch, South Africa

ORCID iDs: Andrea Sixto-Costoya, 0000-0001-9162-8992; Nicolas Robinson-Garcia, 0000-0002-0585-7359; Thed van Leeuwen, 0000-0001-7238-6289; Rodrigo Costas, 0000-0002-7465-6462

Email of corresponding author: [email protected]

Abstract

ORCID is a scientific infrastructure created to solve the problem of author name ambiguity. Over the years ORCID has also become a useful source for studying the academic activities that researchers report. Our objective in this research was to use ORCID to analyse one of these activities: the publication of datasets. We illustrate how identifying the datasets shared in researchers' ORCID profiles enables the study of the characteristics of the researchers who have produced them. To explore the relevance of ORCID for studying data sharing practices, we obtained all ORCID profiles reporting at least one dataset in their "works" list, together with information on the individual researchers producing the datasets.
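Selecting profiles that report at least one dataset comes down to filtering work entries by type. A simplified sketch: real records come from the ORCID public API, and the flat dictionaries below are a made-up stand-in for the API's nested work summaries, though the "data-set" type value follows ORCID's work-type vocabulary:

```python
def count_datasets(works):
    """Count works typed as datasets in a (simplified) list of ORCID
    work summaries for one profile."""
    return sum(1 for w in works if w.get("type") == "data-set")

# Made-up, simplified work summaries for one profile:
works = [
    {"title": "Survey microdata 2019", "type": "data-set"},
    {"title": "A methodological discussion", "type": "journal-article"},
]
```

A profile is kept for the study whenever `count_datasets(works) >= 1`; the same pass can collect the researcher attributes (affiliation, discipline) reported alongside the works.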