3.2 Incentives & Rewards

The Current Reward System

As we have seen, flashy science may trump good science.

Many of the underlying reasons for this stem from the reward system in academia, which has four main players: researchers, publishers, employers, and funders. Traditional research outputs (academic papers) are coordinated by publishers, while researcher rewards - tenure, wages, and grants - are handled by employers, funders, and peers.

As you'll recall, journals want to publish exciting research. You could say this is good business sense - I mean, who wants to pay for a journal full of research articles that say, 'We conducted an experiment, but didn't find anything'?

Funders, as well, want their projects to achieve clear, novel, and measurable results; they obviously want to be able to justify why they invested in a given project. Interestingly, this often means 'the next big thing' is awarded significant funding while niche but potentially novel projects are overlooked. Replication studies, which by design are not novel, face the same problem.

Employers want to know that their institution is having an impact - researcher funding dollars and researcher impact serve as proxy measures of institutional success.

Researcher impact is gauged largely by the number of times their research is cited by others, the prestige of the journals they publish in, and the level of funding they can attract. Thus, the rewards a researcher receives depend largely on what journals choose to publish.

To recap: journals want exciting research; funders want clear, novel, measurable results; employers want demonstrable institutional impact; and researcher impact is gauged largely by citation metrics.

You can see we have a bit of a quandary here.

Researchers may want to change their practices: explore an unlikely but potentially exciting avenue of inquiry, attempt a replication of an already published study, invest more time in making their research methods computationally reproducible, or undertake their next project with a level of care and direction that supports the needs of those impacted by their research. But if such changes negatively affect whether - or where - they're published, they risk falling out of favour with their employers and face greater challenges getting research funding.

At the same time, funders and employers struggle to find other ways to measure research impact to coordinate funding and to demonstrate value - citation metrics are the easy option.

Strategies for Change

Strategies for change centre on how value and impact are interpreted and understood in the context of research, and many large organizations representing researchers, publishers, employers, and funders have started to change the way they evaluate these factors.

Changes in Assessment Criteria

In 2012, the Annual Meeting of the American Society for Cell Biology in San Francisco gave rise to the San Francisco Declaration on Research Assessment (DORA). This document calls on all parties involved in the research process to critically re-evaluate how research is assessed for merit. It has since grown into a worldwide initiative with thousands of individual and organizational supporters. You can read more about DORA at sfdora.org.

UBC is currently not a signatory, however.

Funder Requirements

Since 2015, the three major Canadian research agencies - the Natural Sciences and Engineering Research Council (NSERC), the Social Sciences and Humanities Research Council (SSHRC), and the Canadian Institutes of Health Research (CIHR) - have required that recipients of their research grants provide open access to the resulting papers within a year of publication. These same agencies are currently working on a similar policy for research data. You can read more about the Tri-Agency Open Access Policy on Publications here.

UBC invests significantly in ensuring that researchers have the resources needed to be able to adhere to these funder requirements.

Journal Efforts

In 2014, the journal Psychological Science broke new ground by awarding badges that let authors signal their use of open practices. Under the guidance of UBC Vancouver’s Dr. Eric Eich, the journal’s editor-in-chief at the time, authors began to receive badges for three types of OS practices: open data, open materials, and preregistration. These badges were then added as images to the paper.

Before badges were introduced, fewer than 3% of papers in Psychological Science reported that their data was open. By mid-2015, this figure had grown by a factor of 13, to 39%!
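As a quick check of that arithmetic (treating 'fewer than 3%' as a rounded baseline of 3%, which is the figure the factor appears to be computed from), the growth factor and the endpoint are consistent:

\[
3\% \times 13 = 39\%
\]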

See Badges to Acknowledge Open Practices: A Simple, Low-Cost, Effective Method for Increasing Transparency (Kidwell et al., 2016, PLOS Biology).

It would not be sound science to suggest that badging directly resulted in a 13-fold increase in open data - some of this data may already have been intended to be open. However, giving authors the opportunity to signal that they are undertaking this open activity helps to encourage and normalize the practice among peers.

So, while the full increase should not be attributed exclusively to badging, the act of badging made it possible to connect open data directly to published research findings, increasing transparency and reproducibility.