By: Valeria Arza of CENIT
What does a scientist do when he or she obtains an important research result? The obvious answer is: publish it. But where and how does the scientist publish it? Here the answer is no longer obvious.
The incentive and evaluation system in research privileges publications in peer-reviewed journals. But not just any journal. In some countries, scientific production is more highly valued when it is published in high-impact journals. Impact is frequently measured with an index that evaluates journals, known as the impact factor, calculated as the average number of citations received in a given year by the articles a journal published during the previous two years. Once the paper is published, another factor important for the institutional evaluation of the researcher is the number of citations their papers have received in other scientific publications.
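Schematically, the standard two-year impact factor of a journal for a year y can be written as follows (the exact counting rules and the data used depend on the index provider):

\[
\mathrm{IF}_{y} = \frac{\text{citations received in year } y \text{ to items the journal published in years } y-1 \text{ and } y-2}{\text{number of citable items the journal published in years } y-1 \text{ and } y-2}
\]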
A scientist who aspires to make progress in their career should therefore publish their results in journals with the highest impact factor, or in those where they expect a better chance of being cited by the scientific community.
This means writing the manuscript, formatting it in accordance with the requirements of the chosen journal and submitting it. This obviously takes time and effort. But it is only a first step. A recent study shows that 21% of papers are rejected without peer review, either for lack of merit or lack of editorial interest. An additional 39% are rejected during the peer review process, and only 40% are accepted, usually subject to changes suggested by two or three reviewers. If the manuscript is within this last group, the researcher will have to wait an average of 12 months to see the article published. And it is more likely than not that a manuscript will be rejected by the first journal to which it is submitted.
The time between obtaining a research result and its publication is therefore usually very long, and the process is tedious. To get through it, the scientist invests resources motivated more by the need to advance their career than by the desire to share information with the interested public (which is surely broader than the group of scientists researching any specific field).
It is worth noting that access to these publications is generally restricted. Normally, only scientists who work in institutions that have paid the corresponding journal subscription have access to them. In some cases, journals offer authors the option of paying so that their articles are freely available (which raises the cost of research: the scientist needs funding not only to do the research but also for others to learn of the results). Finally, there is a growing number of open-access journals (free for authors and readers), but they are not among the most prestigious, so researchers have less incentive to publish in them.
So, although publishing in peer-reviewed, high-impact journals responds to the current incentives of the scientific system and serves to communicate results to the scientific community, it is not an effective way of diffusing results to the whole potentially interested audience.
The risks of diffusion
Imagine for instance that a research team produces an urgent and socially important finding. For example, it discovers that a certain widely used agrochemical has harmful effects on health. Would it be ethically acceptable for those results not to be made public immediately? Suppose that the team, given the urgency of the case, immediately shared the findings in the media and on social networks, only to then face the arduous task of publishing them in a scientific journal. Without a doubt, this would be quite a courageous move for the scientific team.
Ever since the scientific career was professionalized, a scientific result is not regarded as an established fact until it is backed by a publication. Moreover, as journals demand original results, arriving first is fundamental. Disseminating results before they are published therefore puts at risk the researcher's reputation, the possibility of publishing those findings in high-impact scientific journals, and progress in one's career.
Furthermore, how could users trust in the quality of what is published?
Review and quality
One of the arguments in favor of the current system is that it guarantees certain levels of quality control over the scientific results that are finally diffused. However, the review process is far from objective. Those who have submitted papers for review, or have themselves been reviewers or journal editors, know that the opinions of different referees on the same article do not always coincide and are often radically opposed.
Reviewers’ reports depend to a large extent on variables such as their age, their predisposition to review the document, their personality, their knowledge of the topic, and their political-ideological position in relation to the hypotheses under discussion. The review process can function as a quality filter, but it certainly does not always do so.
On the other hand, reviews are not public, which prevents the reader from reflecting on the validity of reviewers’ opinions about the original manuscript. The study mentioned above estimated that 15 million person-hours per year are lost reviewing articles. This time could be put to more valuable use if those comments were available to everyone.
Impact factor indexes are even more arbitrary. First, because the calculation methodology and the data used are not transparent. Second, because the average number of citations received by a journal's articles over the last two years depends largely on editorial strategy. Third, because it does not sufficiently reflect either the scientific quality of the research or its social importance.
A new consensus
Fortunately, in recent years, new policy tools and practices have been gaining visibility and support, anticipating winds of change.
Among these, some tools are being tried in Argentina, such as mandatory open access to scientific publications funded with public money, as established by the National Law on Creating Open Access Institutional Digital Repositories, passed by Congress in 2013 (its implementing regulations are still pending).
Furthermore, alternative metrics for evaluating the impact of scientific research are beginning to be proposed. These indicators are known as altmetrics and are built from mentions of scientific publications on the Web. Those that exist at the moment draw on sources such as the main social networks (Facebook, Twitter, Google+, LinkedIn, Weibo, etc.), Wikipedia, news feeds, science blogs, traditional mass media, reference managers such as Mendeley and CiteULike, and international policy documents such as those published by the National Health Service of Great Britain.
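As a rough illustration of how such an indicator might be constructed, the sketch below (in Python) combines mention counts from several of these sources into a single weighted score. The source names, weights and example counts are purely illustrative assumptions; they do not reproduce the formula of any real altmetrics provider.

    # Illustrative sketch of an altmetric-style indicator: a weighted sum of web mentions.
    # All weights and example numbers below are hypothetical, chosen only to show the idea.
    from typing import Dict

    SOURCE_WEIGHTS: Dict[str, float] = {
        "news": 8.0,                # traditional mass media
        "policy_documents": 6.0,    # e.g. national health service guidelines
        "science_blogs": 5.0,
        "wikipedia": 3.0,
        "reference_managers": 1.0,  # e.g. Mendeley or CiteULike readers
        "facebook": 0.25,
        "twitter": 0.25,
    }

    def altmetric_style_score(mentions: Dict[str, int]) -> float:
        """Weighted sum of mentions; sources without a defined weight contribute nothing."""
        return sum(SOURCE_WEIGHTS.get(source, 0.0) * count for source, count in mentions.items())

    # Hypothetical paper: two news mentions, 40 reference-manager readers, 100 Facebook shares.
    example = {"news": 2, "reference_managers": 40, "facebook": 100}
    print(altmetric_style_score(example))  # 2*8 + 40*1 + 100*0.25 = 81.0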
These measures complement the traditional citation counts on which the established system of research evaluation and reputation is based. While citation counts are oriented towards capturing impact within the scientific community, altmetrics are better suited to capturing impact on a broader audience: not only scientists but also science amateurs, people interested for various reasons in the advances of certain research areas (for example, people undergoing a certain type of medical treatment who want to follow the latest advances in science), policy makers and the public in general.
There is much room to improve these alternative measures. First, the sources consulted should be regionalized, including locally circulated media, policy documents and reference managers. Second, the current information sources are more accurate for biomedicine and the exact sciences, but coverage could be extended to other disciplines. Third, impact measures could be broadened to capture not only citations of published papers but also references to their authors.
In any case, even with its imperfections, this alternative impact measurement system better captures the interest that research provokes among broader audiences. Let’s take a domestic example. In the last ten years, which research in Argentina has had the largest impact according to alternative metrics?
The answer: studies showing the toxicity of glyphosate, a herbicide widely used in soy production. Argentina is among the countries that consume the most herbicides in the world, so it is clear that this type of research attracts a lot of social interest.
To give the reader some concrete numbers, let’s look at just one altmetric source: Facebook. The most shared article on Facebook (2,225 users posted it on their walls), published in 2014, studied the impact of the herbicide on populations that are not its target (it studied worms) and found that it exposes them to the risk of extinction. The second, published in 2013 and shared 374 times, found adverse effects on the metabolism of certain mollusks. Only this latter article has received a citation (just one) in other scientific journals.
It is true that both works were published recently, and, as we said, publication and citation timescales in the scientific system are very long (certainly longer than on the Web). But even compared with their cohort of publications, the attention these articles have received in scientific journals is below average.
This evidence, even though it is only illustrative, is quite suggestive. The type of problems and outcomes that matter the most to society do not always coincide with those that create the most interest within the scientific system. The use of glyphosate is a clear example.
Opening the game
In order to bring science closer to society, it is necessary to open the game. We could consider different tools, for example: promoting publication in open-access journals, disseminating the comments of specialized reviewers, and improving assessment systems with criteria of judgement, transparency and openness (see, for example, the Leiden Manifesto for research metrics).
In the digital era, wouldn’t it be fairer to disseminate all results immediately and let readers decide on the quality of a publication, based on specialised reviewers’ open comments, the critiques of different actors, and the citation and use of its ideas and data? What is the logic by which only two, three or five people, chosen according to criteria that are hardly transparent, obtain the right to decide privately which results are publishable and which are not? Besides, what is the rationale for an incentive scheme that rewards a path to which wider society has no access? Why do we invest public resources in producing scientific results to which society has only late and restricted access?
A general principle should be transparency and participation. On the one hand, decisions about how to assess career performance and progress should be public and transparent. On the other hand, incentive schemes should promote broader participation of scientists in society (for instance, promoting science communication efforts) and of society in science (for instance, encouraging broader participation of social actors in setting the research agenda).