
Quality is key

Authors: Esther Plomp, Antonio Schettino & Emmy Tsang

The quality of research is important to advance our knowledge in any field. Evaluating this quality, however, is a complicated task: there is no agreement in the research community on what high-quality research is, and no objective set of criteria to quantify the quality of science.

While some argue that quantitative metrics such as the ‘journal impact factor’ and the ‘h-index’ are objective measurements of research quality, there is plenty of scientific literature that provides evidence for the exact opposite. This blog post delves into some of that literature and questions the objectivity of these metrics.

Journal Impact Factor

The Journal Impact Factor (JIF) was originally established to help librarians identify the most influential journals, based on the average number of citations received by a journal’s publications from the two preceding years. If this was the intended purpose, why is the JIF currently embraced as an indicator of the importance of a single publication within that journal? Or, even further removed, used to assess the quality of the work of individual scientists?
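
For concreteness, the two-year calculation works roughly as follows; this is a minimal sketch in Python with made-up numbers, not Clarivate’s actual pipeline (which also involves editorial decisions about what counts as a ‘citable item’):

```python
# Sketch of the two-year Journal Impact Factor calculation.
# All numbers are hypothetical.

citations_2023_to_2021_papers = 1200  # citations received in 2023 by papers published in 2021
citations_2023_to_2022_papers = 1500  # citations received in 2023 by papers published in 2022
citable_items_2021 = 400              # "citable" papers published in 2021
citable_items_2022 = 450              # "citable" papers published in 2022

jif_2023 = (citations_2023_to_2021_papers + citations_2023_to_2022_papers) / (
    citable_items_2021 + citable_items_2022
)
print(f"JIF 2023: {jif_2023:.2f}")  # JIF 2023: 3.18
```

Note that this is an average over all of a journal’s papers: nothing in the calculation says anything about any single paper, let alone any single author.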

Multiple studies have demonstrated concerns regarding the use of the Journal Impact Factor for research assessment, as the numbers, to put it bluntly, do not add up [1]:

  • By focusing on the mean rather than the median, the JIF is arbitrarily inflated by 30-50%: journals with high JIFs appear to depend on a minority of very highly cited papers, which overestimates the typical citation rate.
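
To see why the mean misleads here, consider a hypothetical journal in which a couple of blockbuster papers dominate the citation counts; a toy illustration with invented numbers:

```python
from statistics import mean, median

# Invented citation counts for 20 papers in a hypothetical journal:
# most papers are cited a handful of times, two are cited very heavily.
citations = [0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 5, 5, 6, 7, 9, 80, 150]

print(f"mean (what the JIF reports): {mean(citations):.1f}")   # 14.4
print(f"median (the typical paper):  {median(citations):.1f}")  # 3.0
```

The mean is dragged far above what a typical paper in this journal achieves, which is exactly the skew described above.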

As if that isn’t enough, the Journal Impact Factor is also heavily affected by gatekeeping and human biases. Citation metrics reflect biases and exclusionary networks that systemically disadvantage women [2] and the global majority (see, for example, racial disparities in grant funding from the NIH). Citations are also biased towards positive outcomes. Reviewers and editors have tried to increase their own citation counts by requesting references to their work during peer review. Authors can also needlessly cite their own papers, or set up agreements with others to cite each other’s papers, artificially inflating citations.

Commercial interests

Besides the inaccuracies and biases involved in using the JIF for quality assessment, it should be noted that the JIF is a commercial product managed by a private company: Clarivate Analytics. This raises further concerns, as the missions of commercial companies do not necessarily align with those of universities.

h-index

The h-index is another metric that tracks citations: a researcher has an h-index of x when they have published x papers that have each been cited at least x times (a minimal computation is sketched after the list below). While this metric was proposed as a rough characterisation of an individual’s productivity and research impact, it is now routinely used in formal research assessment. This is problematic, as it rests on the same underlying issues as other citation-based metrics (described above). Furthermore:

  • The number of papers that researchers produce is field-dependent, which makes this metric unsuitable for comparing researchers from different disciplines. For example, some disciplines cite more extensively than others, which artificially inflates the metric.
  • The h-index does not take into account an individual’s placement in the author list, which may not matter in some disciplines but makes a difference in others, where the first and last authors carry more weight.
  • The h-index can never exceed the total number of papers published, favouring quantity over quality.
  • The h-index is a cumulative metric, which typically favours senior male researchers, as they tend to have published more.
  • The h-index has spawned several alternatives (37, as of 2011) that attempt to counteract these shortcomings. Unfortunately, most of these alternatives are highly correlated with each other, which makes them redundant.
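
To make the definition above concrete, here is a minimal sketch of how an h-index can be computed from a list of per-paper citation counts; the numbers are invented for illustration:

```python
def h_index(citations: list[int]) -> int:
    """Return the largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still has enough citations
        else:
            break
    return h

# Hypothetical researcher with 8 papers:
print(h_index([25, 12, 9, 7, 5, 3, 1, 0]))  # 5: five papers cited at least 5 times each
```

Note how the result can never exceed the length of the list, as mentioned above: publishing more papers is a precondition for a higher h-index.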

Warnings against the use of these metrics

Many individuals and institutions have warned against the use of these metrics for research assessment, as it has a profound impact on the way research is conducted. Even Nature has signed DORA (which means that Springer Nature opposes the use of the impact factor in research assessment). Eugene Garfield, the creator of the JIF, stated that the JIF is not appropriate for research assessment. Even Clarivate Analytics, the company that generates the JIF, stated that “What the Journal Impact Factor is not is a measure of a specific paper, or any kind of proxy or substitute metric that automatically confers standing on an individual or institution that may have published in a given journal.” Jorge Hirsch, the creator of the h-index, warned that the use of the h-index as a measure of scientific achievement could have severe negative consequences.

Tweet by Michael Merrifield on the use of citations as a proxy for quality in research. The image in the tweet shows an increase in his citations after he joined a large consortium project.

Consequences

The focus on citations has severe consequences for scientific research, as it creates a research culture that values the quantity of what is achieved rather than its quality. For example, the use of the JIF and the h-index creates a tendency for individuals who experienced success in the past to be more likely to experience success in the future, known as the Matthew effect. A focus on the quantity of outputs discourages high-risk research that is likely to fail, as well as research that only yields interesting results in the long term. This focus on short-term successes therefore reduces the likelihood of unexpected discoveries that could be of immense value to scientific research and society.

When a measure becomes a target, it ceases to be a good measure. – Goodhart’s Law

So what now?

It may be impossible to objectively assess the quality of research, or to reach universal agreement on how research quality should be assessed. Rather than relying on citation metrics, however, we could start judging the content of research by reading the scientific article or the research proposal itself. This means that, in an increasingly interdisciplinary world, researchers will have to communicate their findings or proposals in ways that are, to a certain extent, understandable to peers in other fields. If that sounds too simplistic, there are also some other great initiatives, listed below, that serve as alternatives to citation-based metrics in assessing research quality:

Alternative methods and examples of research assessment

See also:
1) Brito and Rodríguez-Navarro 2019, Seglen 1997, Brembs et al. 2013, Callaham et al. 2002, Glänzel and Moed 2002, Rostami-Hodjegan and Tucker 2001, and Lozano et al. 2012
2) Caplar et al. 2017, Chakravartty et al. 2018, King et al. 2017, and Macaluso et al. 2016

Transition to Team Science and Rewards

Authors: Young Science in Transition

Young Science in Transition (YoungSiT) is a think-tank of early career researchers that promotes change in the academic reward and incentive system, fighting the evaluation of scientists by numbers of publications rather than by their societal relevance (see the website of Science in Transition for more information). YoungSiT aims to raise awareness, promote engagement, and develop ideas for Open Science, Team Science, and Recognition and Rewards. In this context, YoungSiT held a symposium on 10 December 2020. In total, ~70 participants (early career researchers, support staff, and policy officers) from various universities, medical centers, funding agencies, and academic networks (for example, the Promovendi Netwerk Nederland) gathered virtually to discuss the topic of Team Science.

Charisma Hehakaya (YoungSiT) opened the symposium with an introduction to YoungSiT and the concept of Team Science. What is Team Science? Are there theoretical frameworks that center on Team Science? And what are examples of Team Science in daily practice? The term is hard to pin down, as there is currently no common definition. Key elements of Team Science are collaboration between disciplines and/or domains, and diversity within and between teams (for example, in background, expertise, position, gender, and age). The YoungSiT symposium provided a platform to learn more about Team Science and to exchange best practices. Below follows a summary of the presentations given during the symposium.

What does Team Science look like? (Image credit)

Sustainable careers

Jos Akkermans’ research focuses on sustainable careers in academia. In our debate on recognition and rewards, we are ultimately discussing people’s careers in academia. A career is an evolving sequence of work-related experiences over time. We used to view careers as a one-way road: you choose an occupation and then you do it for life. For many people, especially academics, there is no such rigid structure anymore. It is no longer the case that we make one big choice after graduation: nowadays we can choose multiple paths or switch paths, which makes careers much more dynamic and flexible. This also makes careers much more difficult to navigate, and most of us need more support to find our way through this complex landscape. If you have a team that provides you with this support and the resources to give you a kickstart, you are in a privileged position compared to others who do not have access to these resources. This potentially leads to polarisation, where the strong get stronger and the weak get weaker (the Matthew effect). We should therefore move away from the focus on individual superstar careers, and instead focus on the long-term effects, to ensure that careers are sustainable.

Jos Akkermans worked on a conceptual model for this (De Vos et al. 2020), which demonstrates that we shouldn’t only look at success or productivity (generating papers and receiving grants), but also at whether this productivity is sustainable over time. People need to enjoy what they do and be healthy to be able to keep doing it productively in the long run!

Synergy between education & research 

Niels Bovenschen described the case study of Micha, a four-year-old boy with a muscle-weakness disease. After four years of research, the cause of Micha’s disease was still unknown. Micha’s case was presented to biomedical bachelor students, who worked together with medical experts to develop proposals to study his disease. All the proposals were reviewed by the students, and the results were presented at a symposium as part of the course. The students identified the best proposal and were given the opportunity to execute it in Niels’ lab. The Bachelor Research Hub is a place where students can engage in transdisciplinary research on such real-world patient cases. By forming teams of patients, students, and researchers, all of these stakeholders benefit. Students learn many skills (technical, scientific, academic) and get to work on case studies that really matter, which motivates and inspires them. Patients can participate in research and teaching, and receive additional research on their disease. Researchers receive help from students, acquire new data, scout talent, bring down research costs, and advance their own ongoing work. Niels hopes that the Bachelor Research Hub can be a space for other universities to collaborate on this transdisciplinary type of research and education.

From theory to practice

Erik van Sebille is an oceanographer trying to understand how the ocean’s currents move stuff around. Most of his research is about plastic, a topic that transcends the field of oceanography. It can be difficult, however, to know who you can work with outside of your own research field, and sometimes these collaborators come from unexpected areas of expertise. For example, Erik worked together with archaeologists who needed to find out not only how old a New Zealand shipwreck was, but also where the ship actually came from. To address the latter question, the archaeologists needed Erik’s research on the ocean’s currents. In working across disciplines, you need to know what you can and cannot contribute to the research project. As long as you are an expert in your own contribution, you do not need to be an expert in the areas of your collaborators: that is why you collaborate! You should be open about your collaborations to avoid miscommunication and conflicts of interest. There are plenty of opportunities nowadays, see for example the Centre for Unusual Collaborations. Erik argues that we should move from authorship to contributorship in these collaborations: authors should be listed alphabetically, with detailed contributions listed in a taxonomy such as CRediT.

Team Science in Sports 

Karlien Sleper presented her view on teams as a monobob athlete. Although monobob seems like a very individualistic sport, just like academia, Karlien really stands on the shoulders of her team. The team takes care, for example, of transporting the monobob sled, which cannot be lifted or moved by a single person. Outside of the monobob team, Karlien also depends on the support of her family and friends. According to Karlien, you need some help if you want to achieve something in life.

Rewarding Team Science

Marc Galland is part of the Amsterdam Science Park Study Group. This group is building a community of computational biologists and bioinformaticians and recently received the Team Science Award from NWO. With this prize, the group would like to organise more training activities (such as Carpentry workshops), host a “data and code festival”, and hire software experts. By being embedded in research groups, its members can collaborate easily with researchers. The Team Science Award made the group feel recognised for their contributions to research projects.

Sanneke van Vliet from ZonMw highlighted the importance of connecting different types of knowledge in order to improve the quality of health research and solve societal health challenges. Generally, this type of research requires scientists to work together in larger teams. ZonMw stimulates the rewarding of Team Science through its grant programmes by recognising these contributions in its review procedures. ZonMw also changed its CV template to a narrative CV, allowing researchers to elaborate on their contributions to their teams. Sanneke thinks that personal grants are here to stay, but the focus of funding is increasingly moving towards consortium funding. Importantly, while personal grants might be awarded to individuals, this doesn’t mean that the individual alone is responsible for their success!

Martijn Deenen provided NWO’s perspective on Team Science. NWO stimulates collaboration between teams, such as consortia spanning several disciplines. NWO also stimulates Team Science by allowing teams to win the most prestigious prizes in the Netherlands (the Spinoza and Stevin Prizes, open to small teams of at most three PIs). Unfortunately, universities do not yet nominate teams, so the prizes are still awarded to individuals. NWO has several funding schemes for inter- and transdisciplinary consortia: the NWA, KIC, and Infra. In the Open Competition, small Team Science projects (consisting of two PIs) can be funded. The Team Science Award, given to Marc’s team, was introduced in 2020. The only exclusive focus on individual grants is found in the Talent Programme (Veni, Vidi, Vici); nevertheless, NWO is working on shifting away from the ‘star science’ focus of these grants. Martijn thinks that NWO and ZonMw are ahead of the universities in terms of Open Science, Team Science, and changes in recognition and reward systems. According to Martijn, the deans, rectors, and directors of universities should be encouraged to stop using metrics to evaluate researchers at their institutes and to move from quantity (numbers) to quality (narrative).

 “I really like NWO’s new policy of having narrative CVs rather than lists of various outputs. This allows people to give their own unique emphasis on their own career story. Of course, NWO can design this policy but, ultimately, the reviewers and panel members also need to be on board. If they (implicitly?) still look at top journals, it is difficult to change.” 

Jos Akkermans

Image on achieving the balance between recognition of individual and team contributions, taken from Room for everyone’s talent: towards a new balance in the recognition and rewards for academics.

After the discussion led by Rinze Benedictus (YoungSiT) during the symposium, many questions remain. For instance: how do we find the balance between the individual and the collective? How do we reward the individuals that make up a team? What if not all contributions are easily measured? What does being a team player mean? How can we focus on how individuals fit within a team, rather than solely on their individual contributions? What can we learn from industry? We did not find a one-size-fits-all solution, or a definition of Team Science, but the overall conclusion of the symposium was positive. The move from rewarding the individual to rewarding team contributions was visible throughout the symposium, and the word ‘balance’ came up a lot during the discussion and in the chat. It is important to recognise the complexity of scientific research and to provide sustainable career paths and recognition for all types of contributions, whether they come from an individual or from a team.
