Research outcomes have many less tangible impacts that may not be as straightforward to quantify and report. Pic by AZHAR RAMLI

AS we go forth into 2019, many of us are taking stock of achievements in 2018 and making plans for the new year. For some, it is also the time of annual performance reviews. Have we achieved what we had set out to do for 2018? What are we setting out to achieve in 2019? For a number of us, this is when that dreaded term, “Key Performance Indicator (KPI)”, makes an appearance.

In many sectors, a quantifiable measure is used as an indicator of performance or achievement — this is essentially what a KPI is. In sectors that carry out research and development activities, including academia, the common indicators are publications, patents and the commercialisation of research. This pipeline is commonly shortened to RDC&I — research, development, commercialisation and innovation. Since a KPI must be measurable, it is the number of publications, patents, products and the like that are reported and used as indicators of performance or success.

However, to limit the scope of research deliverables and their outcomes merely by confining them to the RDC&I narrative is inaccurate and perhaps even short-sighted. It is therefore important to extend beyond the common narrative that out of research comes commercialisation and economic returns. Not all research will have immediate market potential. Some research generates fundamental knowledge that is itself of no direct commercial value but opens up other avenues for commercialisation.

For example, the world wide web, when it was first developed and on its own, had no commercial value. But today, the world wide web serves as the platform through which billions of dollars in commercial transactions are carried out each day. Something that initially had no clear tangible value, other than perhaps as a platform for sharing information, is today a vast economic ecosystem that reaches billions of people.

In Malaysia and much of the world, the majority of academic research is funded by public money — in other words, the costs of research activities are borne by taxpayers or covered through donations and gifts. In this context, the deliverables and outcomes of research funded in this way must ultimately benefit the public.

Oversimplifying the assessment of research impact by making it a numbers game is dangerous. This is the unfortunate trap that we have fallen into. Our current methods of measuring research outcomes and success have been partly dictated to us by data corporations intent on profiting from enabling customers to achieve the goals that these corporations themselves have set. In a way, the world has swallowed this narrative hook, line and sinker. It then becomes a vicious cycle of throwing money at achieving the desired numbers — but are these numbers the real outcomes that the public requires and aspires to?

Does such an approach duly serve those who provided those funds? In this scenario, we must see the origin of the funds as the taxpayer or, in some cases, individuals who have donated funds to a particular charity entrusted with managing research funds. The outcomes of research activities must therefore be of benefit to them and not merely the intermediary tasked with managing the funds.

In measuring research outcomes, we must perhaps first be clear what the desired outcomes are, and from there determine what deliverables can achieve them. If the desired outcome is merely to rise in some ranking tables, then by all means, we must strategise to deliver the numbers — publications, citations and the various other quantifiables associated with research output.

For those unfamiliar with the workings of academia, the number of times a paper is referred to by other research articles is termed its citations. The citation count is a crude but nevertheless useful measure of the importance of a paper and the research it reports. For example, a seminal paper reporting fundamental knowledge that leads to a method for developing new drugs may be highly cited, while follow-up work on the drug development processes themselves may not be as highly cited — but that does not mean it is less important.

I do not have the answer as to what may be the best or most accurate method of measuring research success. But what I can propose is that, to ensure the best returns on research investment, we must invest in research on research itself. Therein lies some irony: measuring the benefits of research is a major research endeavour in itself.

The argument here is not that the current methods of measuring research outcomes and returns are wrong, but that they do not fully reflect the impact that research activities have had on science, the economy and society. For example, publications are definitely a good thing. For more than 1,000 years, mankind has preserved and disseminated knowledge through the written word, and that remains relevant today. But publishing for the sake of making up the numbers may pollute human knowledge with irrelevant records.

The focus should therefore be on the quality of the output. If publications are to be used as a measure, then it should not be mere numbers but also the perceived importance of each paper, as reflected in the number of times it is cited.

However, it should not stop there. Many important papers have low citations for various reasons, and thus citations are not the be-all and end-all of measuring research impact either.

One can then argue that for research to have significant impact, there must have been innovations that resulted from it. Innovations are usually patented, and as a result, the number of patents has become a measure of research outcomes as well. Yet one cannot simply count patents; one must also consider the financial returns they generate.

Research outcomes have many less tangible impacts that may not be as straightforward to quantify and report. Yet these outcomes cannot be ignored, as they may have far-reaching and long-term benefits, as the example of the world wide web shows. We must therefore rethink the concept of research deliverables and devise new methods to measure them.

Doing research is a privilege — a deeply satisfying intellectual exploration undertaken by a few that must benefit many. On the shoulders of each researcher, especially those funded by public money, rests the heavy responsibility to deliver the desired outcomes. These outcomes must ultimately have wide-ranging and sustainable benefits that can also serve as drivers of nation-building, as well as the engine of economic growth and societal well-being.

The writer is a bioinformatician and molecular biologist heading the Centre for Frontier Sciences, Faculty of Science and Technology and a Senior Research Fellow at the Institute of Systems Biology, Universiti Kebangsaan Malaysia. Email him at firdaus@mfrlab.org