I started writing this post to provoke a conversation and scratch the surface of an important issue. There are many differing points of view, and most are simultaneously more and less valid depending on the lens you are looking through.
Measuring impact from research requires a more sophisticated or complex conversation than some may think. It is not simply a matter of deciding on a few relevant metrics and analysing outcomes after the research has been completed.
When developing a strategy to achieve a mission or fulfil a purpose, measuring success should be front and centre. It will help you ask the right questions and select the right projects or activities. You should also consider why you are measuring and how you will use the data.
Importantly, it is also worth considering other key stakeholders, what they are measuring and why. You should also consider the impact of what you are measuring and the needs of researchers.
Key stakeholders include:
- Researchers
- Research institutions and universities (employers of the researchers)
- Government (policy makers and funders of researchers and their institutions)
- Other funders of research (such as philanthropy and industry)
- The wider community
Researchers
The burden on researchers applying for funding and reporting on funding they receive can be onerous. Each stakeholder asks questions relevant to their individual needs. Researchers are heavily influenced by their need to keep working, pay the rent and feed the kids.
The needs of their institutions and largest funders (NHMRC and ARC) are of particular importance. Continuity of employment and opportunities for promotion are heavily dependent on academic measures of research success: publications and citations.
Historically these academic outputs have been by far the greatest measure of success employed widely across all stakeholders.
More recently, however, competition for traditional funding, a greater focus on translation and outcomes (as opposed to outputs) and the increasing level of irreproducible results has spurred a conversation around alternative measures.
In the meantime, it is now possible for funders of research to measure academic outputs more easily without asking the researchers to undertake the task. Online resources and identifiers such as ORCID, Google Scholar and myResearcherID make it easier to follow publications and citations.
Research institutions and universities
The prime purpose of universities is teaching. Research helps them stay at the forefront of education, but it is also a key metric used internationally in university rankings and by government to assess performance.
Publication and citation measures heavily influence government funding initiatives such as Excellence in Research for Australia (ERA) and future National Health and Medical Research Council (NHMRC) and Australian Research Council (ARC) success rates, all of which drive revenues to the Publicly Funded Research Organisations (PFROs). These measures also heavily influence university rankings that help attract fee-paying students and influence the fees they can charge.
The source of funding can also influence other types of revenue to PFROs such as ‘block grants’.
These academic measures are key to PFROs and enable them to build global reputations and drive revenue.
One conversation to consider is the link between quality teaching, student needs and outcomes, research and the applied metrics.
Government
Governments are the stewards of the community’s tax dollars. For government, it is important to have beginning-to-end measurements demonstrating that an investment in research returns socioeconomic benefits to the community. This can justify policy, priorities and government initiatives.
Are publications the best measure to show how the community benefits or are other metrics more important?
Demonstrating that there have been positive health outcomes, a cost saving to the community or better use of resources all require research in their own right. This research, however, specifically looks at the application of earlier research and its implementation. These metrics are very different from academic metrics which have a different purpose and use.
Is there a disconnect between the funding of research through the NHMRC and ARC and the metrics employed or are they just doing different things that require different measures?
Government is the biggest elephant in the room, and sudden changes in metrics or direction could cause chaos and confusion as its key stakeholders reconsider their strategies and where they fit in the funding ecosystem.
If measures were reviewed and changed, should the change be evolutionary or revolutionary?
Other funders of research
There are many other funders of research including individual philanthropists, charities and foundations, investors and industry.
Each will have its own purpose and goals they wish to achieve.
Generally, these funders are more interested in translational outcomes than academic outputs. Their measures will also be about understanding progress, ‘research failure’ and the impact of their own strategy.
It is important not to confuse the assessment of a funding strategy with individual research outcomes. A funding strategy could be correct, but the wrong funded project or activity could lead to doubts on the effectiveness of the strategy. Conversely, a lucky research result could unjustifiably support a funding strategy.
Two key questions come to mind here.
- Can the specific project outcomes be scaled or progressed to deliver far reaching community benefits?
- Can the funding strategy be replicated and scaled to influence culture, policy or practice change?
Some will apply the outcomes from their measurements to justify continued funding, provide additional support, look for efficiencies, and promote their successes in support of their business objectives (e.g. fundraising). Smaller organisations in particular need to be careful that they are not wasting researchers’ time by asking for reports or information they cannot use or do not need. Reporting for the sake of reporting can have detrimental effects.
Community
Community is often overlooked in measuring success. Its members are, however, donors, taxpayers and voters. Communicating the success and failure of projects and strategies can be important.
Whilst most donors make decisions heavily influenced by emotion and personal experience, outcomes including case studies and measures demonstrating effective and efficient giving can help the conversation and encourage further giving.
Communities are also the intended beneficiaries of research. It is important the metrics used encourage and facilitate this end goal, are relevant to the purpose and do not distract or impede the pathway.
Most key stakeholders are trying to paddle the boat in the same direction, ultimately supporting and delivering community benefits from research. Their strategies, organisational needs, funding and reporting requirements may all differ, but by working together they can provide more effective and efficient support to researchers, who in turn should assist in delivering a greater good.
Whilst trying to simplify reporting is a common goal amongst all stakeholders, making reports universally relevant is a significant challenge. Questions that are important to one key stakeholder may be less so to others. Questions relevant to one type or specific research program may not be applicable to all. For example, research into discovering and advancing a new medicine can be very different to research investigating effective and efficient ways of making medicines available, ensuring compliance and monitoring effectiveness and adverse events in remote communities.
It’s important to consider that what is not asked can also lead to unintended consequences. For example, I am aware of an organisation with quality systems in place whose researchers wanted to remove those systems because the larger funders don’t appear to be interested in that metric or factor (i.e. they don’t ask the question). This could have significant ramifications for translational activities and the reproducibility of results.
Whilst we ask researchers and their institutions to collaborate and work together, there are few opportunities for the various key stakeholders to come together and explore how we can do things better.
Join the conversation at NFMRI’s third annual conference, “Philanthropy: Creating Impact and Dancing with Elephants”. The conference will take place on the 21st and 22nd of November 2017 at the Australian National Maritime Museum in Sydney.
Don’t forget: if you are going to post comments, please also make them at the original post to help grow this important conversation.