The next Software Carpentry workshop will take place on the 8th and 9th of July. The Data Stewards can also be contacted to organise smaller sessions to support beginners with their code/software, or you can join one of our Coding Lunch and Data Crunch walk-in consultation sessions, where we can also support you with more advanced problems. The next one is on the 27th of June (12:00 – 14:00, IDE, Studio 23/24).
The RDA project “Engaging Researchers with Research Data – What works?”, part of the Libraries for Research Data Interest Group, aims to collect information on data engagement activities undertaken by different stakeholders, and to identify which types of engagement are most suitable for different types and sizes of organisations. These will also be mapped to the specific requirements and difficulties encountered in successfully running such activities.
Ultimately, the project will produce a handbook of diverse case studies to support stakeholders’ decisions, contributing to a better understanding of how FAIR and open research data policies are, or can be, successfully implemented globally. Still, there seems to be a gap between the perceived benefit of data sharing and actual practice (the dilemma of data sharing).
Preliminary outcomes of the team’s work were presented at the Plenary 13 meeting in Philadelphia, during the Libraries for Research Data Interest Group session, by Helena Andreassen, Raman Ganguly and Andrea Medina-Smith. The presentation gave an insight into the survey conducted, which was one of the core activities of the project. It underlined the importance of a collection of engagement activities by communicating to the audience the overwhelming interest that the survey received from the international community: 216 answers were received at the first call for participation! Respondents included 60 funders and 80 institutions.
For more information on the history of the project, how it was formulated and who is involved, please check this blogpost.
To facilitate analysis of those data and to effectively respond to both qualitative and quantitative needs when translating data into information, the group is divided into two sub-groups: one dedicated to quantitative analysis and one working on the qualitative case studies. Out of 216 responses, 88 cases from 50 independent institutions offered thorough descriptions of activities aimed at increasing researchers’ engagement with research data. The groups’ current work includes categorisation of these cases per activity type, as well as the assignment of tags to ease classification and discovery, in a matrix-style approach.
The team is now halfway towards reaching the project’s objectives and is focusing on:
Splitting data analysis into qualitative and quantitative branches, cleaning and structuring data accordingly.
Preparing for a booksprint in order to create the handbook. The booksprint will take place 10-12 July in the Netherlands.
Authors: Esther Plomp, Jasmin K. Böhmer, Yasemin Turkyilmaz-van der Velden and Mateusz Kuzak
Right before Ascension the Data Stewards organised two workshops on FAIR data and Data Management Plans (DMPs). The first workshop took place on the 27th of May in Niel, Belgium, at the Helis Academy course on FAIR Data Stewardship and the second at the first NWO Life Congress in Bunnik, the Netherlands in collaboration with Jasmin (Data Steward at the Utrecht Medical Center, Center for Molecular Medicine) and Mateusz Kuzak (Scientific Community Manager at the Dutch Techcentre for Life Sciences – DTL).
During the Helis Academy workshop we also attended an interesting talk on data ownership and patents by Ben Rigou from NLO (Nederlandsch Octrooibureau). Ben explained that according to Dutch law (article 12 of the Dutch patent act – ROW1995 / Artikel 12 Rijksoctrooiwet 1995) the right to patent inventions belongs to the employer if it is your job to invent (article 1), as is the case at universities. The same applies to inventions made during training periods; the rights then belong to the supervisor (article 2). Even if you invent something new at home while employed at the university, the rights will belong to the university (article 3)! You can only deviate from these articles if you have a written agreement that specifies the deviations (article 5).
Helis Academy – Data Management Plans, why and how?
Esther and Santosh gave an introductory presentation on Research Data Management, Data Management Plans, FAIR data and working with personal data (~45 min) before moving into a very hands-on session using DMPonline (~60 min). TU Delft has its own instance and DMP template available on DMPonline, where TU Delft researchers can set up their DMP digitally, with support from their Data Steward only a button press away!
Helis Academy – Data Carpentry
On the morning of the second day of the Helis Academy course, Mateusz Kuzak led an interactive session during which participants organised their tabular data sets in a reusable and interoperable way. They also learned how to implement quality assurance and assessment using a spreadsheet program. During the last session before lunch, learners dived into quality assessment and data cleaning with OpenRefine. The teaching content was adapted from the Data Carpentry lessons on spreadsheet organisation and OpenRefine.
NWO Life2019 – Plan ahead: practical tools to make your data more FAIR
Esther, Yasemin (Data Stewards at TU Delft for the Faculties of Applied Sciences and 3mE), Jasmin (Data Steward at the Utrecht Medical Center, Center for Molecular Medicine) and Mateusz Kuzak (Scientific Community Manager at the Dutch Techcentre for Life Sciences – DTL) gave a workshop for the NWO Life2019 participants (#life2019) on making data more FAIR using three practical tools, as well as two poster presentations.
Esther’s introductory presentation on the FAIR principles was followed by pitches given by Mateusz, Yasemin and Jasmin on three different data management/FAIR tools. The participants could then join two parallel sessions to try out two of these three tools.
NWO Life2019 – Data Stewardship Wizard
Mateusz introduced the participants to the Data Stewardship Wizard, developed by ELIXIR Europe. He explained how the “knowledge model” behind the wizard is used to learn about good data stewardship practices and how one’s choices affect the FAIRness and openness of one’s research. Mateusz also described the different roles and privileges of the wizard’s users (administrators, data stewards and researchers) and how these can be applied within an organisation.
NWO Life2019 – DMPonline
Yasemin introduced the participants to DMPonline, an online platform developed by the Digital Curation Centre (DCC) for writing, reviewing and sharing Data Management Plans (DMPs). DMPonline is currently in use by 203 organisations in 89 countries, including Dutch institutions. The participants learnt how to set up an account and how to generate a DMP using the templates of the DCC, NWO, ZonMw and Horizon 2020. Moreover, participants were introduced to functionalities such as exporting DMPs, inviting collaborators and reviewers with read and/or edit rights, commenting on given answers and finding funder-specific guidance per question. The tool was received with interest and participants were curious to learn more about the content of the DMP templates of NWO and Horizon 2020.
NWO Life2019 – FAIR self-assessment tool
Jasmin introduced the participants to the Australian Research Data Commons (ARDC) FAIR self-assessment tool. After a quick recap of the difference between FAIR at the data-package level and at the data-file level, the audience, equipped with a summarising handout of the original FAIR data principles by Wilkinson et al., was introduced to the tool. To showcase the (inter)nationally available data repositories and archives, the participants could choose an example from the DANS archive EASY, 4TU.Research Data, or the European Genome-phenome Archive (EGA) to use for the self-assessment. While working through the answer options together, based on the features of the selected dataset, questions from the audience were answered and discussed. Most of the audience wanted to know more about the FAIR data principles and how to meet funder requirements regarding FAIR data.
During our workshop at NWO Life2019 we found that, instead of having two sessions of ~25 min on three different tools, it would have been better to either briefly demonstrate each tool (10–15 min) or to hold a more practical hands-on session of 45–60 min with one tool, where participants use their own laptops and their own datasets (as we had done during the Helis Academy workshop). We very much enjoyed participating in the Helis Academy course and in NWO Life2019, and we are looking forward to future Helis Academy courses and NWO Life2020!
Participants were encouraged to register and assemble as duos of researchers and/or students along with a data scientist and/or research data librarian. I was invited, as a data librarian with a research background in the physical sciences, to form a duo with Joseph Weston, a theoretical physicist by background and a scientific software developer at TU Delft, who is also one of the TU Delft Data Champions.
I presented about the Hackathon at the last TU Delft Data Champions meeting. The presentation is available via Zenodo. All the presentations and materials from the FAIR Hackathon are also publicly available. The FAIR data principles are defined and explained here. This blog post aims to offer some of my views and reflections on the workshop, as an addition to the presentation I gave at the Data Champions meeting on 21 May 2019.
The grand vision of FAIR
The workshop’s keynote presentation, given by George Strawn, was one of the highlights of the event for me. His talk set out clearly and authoritatively the vision behind FAIR and the challenges ahead. Strawn’s words still ring in my head: “FAIR data may bring a revolution on the same magnitude as the science revolution of the 17th century, by enabling reuse of all science outputs – not just publications.” Drawing parallels between the development of the internet and FAIR data, Strawn explained: “The internet solved the interoperability of heterogeneous networks problem. FAIR data’s aspiration is to solve the interoperability of heterogeneous data problem.” Just as the internet resulted in one computer (“the network is the computer”), one dataset will be FAIR’s achievement. FAIR data will be a core infrastructure, as much as the internet is today.
“The internet solved the interoperability of heterogeneous networks problem. FAIR data’s aspiration is to solve the interoperability of heterogeneous data problem.” — George Strawn
Strawn warned that it isn’t going to be easy: the challenge of FAIR data is intellectually ten times harder to solve than that of the internet, and with fewer resources available. Strawn has strong credentials and a track record in this matter. He was part of the team that transitioned the experimental ARPAnet (the precursor to today’s internet) into the global internet, and he is part of the global efforts to bring about an Internet of FAIR Data and Services. In his view, “scientific revolution will come because of FAIR data, but likely not in a couple of years but in a couple decades.”
Researchers do not know about FAIR
Strawn referred mainly to technical and political challenges in his presentation. One of the challenges I encounter in my daily job as a research data community manager is not technical in nature but rather cultural and sociological: how to get researchers engaged with FAIR data and how to make them enthusiastic to join the road ahead? Many researchers are not aware of the FAIR principles, and those who are do not always understand how to put the principles into practice, or are not always willing to do so. As reported in a recent news item in Nature Index, the 2018 State of Open Data report, published by Digital Science, found that just 15% of researchers were “familiar with FAIR principles”. Of the respondents to this survey who were familiar with FAIR, only about a third said that their data management practices were very compliant with the principles.
The workshop tried to address this particular challenge by bringing together researchers in the physical sciences, experts in data curation and data analysts, FAIR service providers and FAIR experts. About half of the participants were researchers, mainly in the areas of experimental high energy physics, chemistry, and materials science research, at different stages in their careers. Most were based in the US and funded by NSF.
These researchers were knowledgeable about data management and for the most part familiar with the FAIR principles. However, the answers to a questionnaire sent to all participants in preparation for the Hackathon show that even a very knowledgeable and interested group of participants, such as this one, struggled when answering detailed questions about the FAIR principles. For example, when asked specific questions about provenance metadata and ontologies and/or vocabularies, many respondents answered that they didn’t know. As highlighted in the 2018 State of Open Data report, interoperability, and to a lesser extent re-usability, are the least understood of the FAIR principles. Interoperability, in particular, is the one that causes most confusion.
Here near DC attending the #MPSFAIRHackathon. Day one really highlighted efforts for various subfields to establish metadata to communicate results digitally. Chemistry folks seem to really have their house in order.
There were many opportunities during the workshop to exchange ideas with the other participants and to learn from each other. There was much optimism and enthusiasm among the participants, but also some words of caution, especially from those who are trying to apply the FAIR principles in practice. The PubChem use case “Making Data Interoperable”, presented by Evan Bolton from the U.S. National Center for Biotechnology Information, was a case in point. It could be said, as noted by one of the participants, that the chemists “seem to really have their house in order” when it comes to metadata standards. Not all communities have such standards. However, when it comes to “teaching chemistry to computers” – or put in other words, to make it possible for datasets to be interrogated automatically, as intended by the FAIR principles – Bolton’s closing slide hit a more pessimistic note. “Annotating and FAIR-ifying scientific content can be difficult to navigate”, Bolton noted, and it can feel like chasing windmills. “Everything [is] a work in-progress” and “what you can do today may be different from tomorrow”.
Closing slide in Evan Bolton’s presentation “Making Data Interoperable: PubChem Demo/Use Case”: https://osf.io/6mxrk/
What can individual researchers do?
If service providers, such as PubChem, are struggling, what are individual researchers to do? The best and most practical thing a researcher can do is to obtain a persistent identifier (e.g. a DOI) by uploading data to a trusted repository such as the 4TU.Centre for Research Data archive, hosted at TU Delft, or a more general archive such as Zenodo. This will make datasets at the very least Findable and Accessible. Zenodo conveniently lists on its website how it helps datasets comply with the FAIR principles. The 4TU.Centre for Research Data, and many other repositories, offer similar services when it comes to helping make data FAIR.
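To make the “at the very least Findable and Accessible” point concrete, here is a minimal sketch (purely illustrative, not part of any repository’s actual schema or the workshop materials) of the kind of metadata check a repository performs when you deposit a dataset: a persistent identifier, a title, a resolvable landing page and a licence are the minimum that make a record Findable, Accessible and reusable. The field names and the `missing_fields` helper are hypothetical.

```python
# Hypothetical sketch: check that a dataset metadata record carries the
# minimal fields that support Findability and Accessibility.
# Field names are illustrative, not any repository's actual schema.

REQUIRED_FIELDS = {
    "identifier",   # persistent identifier, e.g. a DOI
    "title",        # human-readable description
    "url",          # resolvable landing page
    "license",      # conditions for reuse
}

def missing_fields(record: dict) -> set:
    """Return the required metadata fields that are absent or empty in `record`."""
    return {field for field in REQUIRED_FIELDS if not record.get(field)}

record = {
    "identifier": "10.5281/zenodo.0000000",  # placeholder DOI
    "title": "Example dataset",
    "url": "https://example.org/dataset",
    "license": "",  # empty: licence not specified
}

print(sorted(missing_fields(record)))  # the empty licence field is flagged
```

In practice a repository such as Zenodo or 4TU.Centre for Research Data enforces checks like these for you at deposit time, which is exactly why uploading to a trusted repository is the easiest first step towards FAIR.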
I am grateful to the University of Notre Dame for covering my travel costs to the MPS FAIR Hackathon. Special thanks to Natalie Meyers from the University of Notre Dame, and Marta Teperek, Yasemin Turkyilmaz-van der Velden and the TU Delft Data Stewards for making it possible for me to attend.
Maria Cruz is Community Manager Research Data Management at the VU Amsterdam.
Authors: Esther Plomp, Marta Teperek, Maria Cruz, Yan Wang, Yasemin Türkyilmaz-van der Velden
The conference on the 23rd of May on evaluation criteria of Dutch researchers, organised by NWO and ZonMW and held in The Hague, aimed to discuss the current rewards and incentives system and to think about the evaluation criteria of the future.
The meeting was attended primarily by (female) university staff, and to a lesser degree by other stakeholders. The support/research staff ratio was estimated to be 50/50. The main language, Dutch, may have prevented international researchers from attending this meeting, despite live translation efforts in the morning. The morning session was primarily filled with videos, panels and talks by funders and scientists discussing why researcher evaluation criteria needed to change, with the afternoon session focussing on smaller breakout discussions looking at how to change the system and how to practically implement these changes (see here for the full programme).
The day was introduced by a video of Ingrid van Engelshoven, the Minister of Education, Culture and Science (OCW). She argued that the impact factor says nothing about quality, is too narrow a criterion to evaluate researchers, and is irrelevant to some scientific fields. Instead, the focus should shift to generating scientific impact through science communication and collaborations between researchers.
Mismatch in the current reward system
The audience gave addressing the current evaluation criteria of researchers a score of 8.6 out of 10 in terms of urgency. This is because there is a mismatch in the current reward system: we do not reward what is necessary to advance scientific research, as highlighted by Rianne Letschert (rector magnificus of Maastricht University and member of the VSNU committee that is leading the development of a new evaluation system). According to Rianne, scientists should no longer be expected to excel in all areas. These expectations have led to stress, frustration and burnouts among scientists, increased competition, academic abuse and harassment, and fraud cases. Rianne argued that the focus should shift to a diverse range of career paths that play to the strengths of individual researchers in different areas (research, teaching, societal impact, or leadership), so that these talents are appreciated and can be embedded better within a team if required. It should also become possible to return to a scientific career after a “break” to focus on other demands (e.g. administration and management). To enable these changes, the VSNU is currently developing a toolkit of new evaluation methods, which are quantitative as well as qualitative in nature and focus on individual as well as team performance.
What’s your story?
As scientific progress is a team effort, we require more flexibility and variety in the evaluation of scientists. It is unrealistic to evaluate research quality based on a very narrow output: publications. Hanneke Hulst, assistant professor at the VUmc and member of the Young Academy, argued that the story behind the scientist should become more important, as opposed to simply employing standard evaluation criteria. This includes treating science as a “team sport”, since researchers often work within a team, a department or an institute, rather than in an isolated context. This makes it difficult to evaluate who has contributed what, as Stan Bentvelsen, director of Nikhef and professor at the UvA, stated. Bas Borsje, assistant professor at the University of Twente, argued that team science is not the right metaphor: researchers work in communities of practice, and there should be more incentives to stimulate collaboration. Collaborative research requires good leadership, and as Beatrice de Graaf, professor at Utrecht University, highlighted, the competences required from leaders are changing. There should be space for more diversity, and leaders should have leadership and interpersonal skills rather than holding a leadership position because of their research capabilities.
Evaluate the process, not just the end result
Sarah de Rijcke, director of the Centre for Science and Technology Studies at Leiden University, argued that the future focus should not shift to only valuing collaborative work, as we will then merely generate a new hoop to jump through. Sarah argued that we have to look at the content of scientific research instead. This will require transparency in evaluation criteria and a focus on the process instead of the end result. The question should be whether it is good quality science, no matter what the end result is. Rob van Gassel, PhD student at Maastricht University and PhD Network (PNN) representative, echoed these comments and argued that early career researchers want to generate meaningful research and not only focus on the end product of research. Evaluation based on content requires more flexibility in evaluation criteria, such as an additional category within the Veni proposal on which researchers are evaluated, as proposed by Bas Borsje. Researchers would decide themselves what they would fill into this category and what they would like to be evaluated on.
Diversify career paths
In his talk, Barend van der Meulen, head of research at the Rathenau Institute, stated that we are currently selling the idea that there is a place for everyone at universities as long as they work hard enough, or when they fulfil the new criteria for scientists. But the number of research positions at universities is limited and, therefore, there needs to be more focus on diverse career paths, including collaboration with industry. Jet Bussemaker, professor at Leiden University, agreed that PhD students should be better prepared during their training for these diverse career paths and that they should also focus on the societal relevance of their research. She highlighted the importance of the Dutch National Research Agenda, which aims to build bridges and stimulate collaboration between scientists and the public.
Importance of societal relevance
Ionica Smeets, Professor of Science Communication at Leiden University, also highlighted the importance of connecting with the public and of increased appreciation for science communication, which is currently seen as a hobby and not taken seriously in the evaluation of scientists. Ionica also wrote a column about this topic, which you can read on the Volkskrant website. It is even unclear where the money allocated to promote science communication goes: it appears to disappear into science festivals rather than being used to support individuals who are already heavily involved in science communication. Even Ionica’s own yearly evaluation focuses on her scientific publications rather than her science communication activities. Ionica stated that doing research sometimes “feels like an endless battle across all fronts”. An evolution, or slow change, of evaluation criteria will take too long, and a revolution will result in more battle fronts, so perhaps we should focus on mutations and experiments to test which changes will work. Ionica hopes that the scientist of the future has fewer battles to fight and can instead focus on the areas in which they excel.
What do the funders think?
Jeroen Geurts, chair of ZonMw, admitted that the funders have contributed to the creation of a very competitive environment that focuses on publications and generates ‘science divas’ (see also here for a more detailed statement). He hopes that a new system will allow for more breathing space. Stan Gielen, chair of NWO, added that funders do their work, but that their reward system is not evidence-based: they do not measure the things that they find important in research proposals.
Jean-Eric Paquet, Director-General for Research and Innovation at the European Commission, discussed the progress in altmetrics and open science. He argued that scientists should open up their data and make it FAIR, as only 30% of research data is currently re-used. The open science movement is also seen in efforts to fight paywalls that hide scientific outputs, such as Plan S, which supports immediate open access. He stated that evaluation needs to focus on content instead of publication metrics.
How to implement changes?
In the afternoon, the participants of the meeting were invited to join smaller parallel discussion sessions:
Research vs. Education
Teamplayers vs. Superstars
Metrics vs. Narratives
The Netherlands vs. the rest of the world
Inclusivity vs. reality on the workfloor
These sessions aimed at gathering input from the participants on ways to improve the current system. The session chairs collected key messages and will report all outcomes at a later stage. The overall perception from the sessions attended by the authors is that, while most resulted in engaging and focused discussions, there were hardly any suggestions or concrete plans on how to implement the necessary changes. The formulation of the questions asked during these sessions could have been more pragmatic and action-oriented. As a result, during the last panel session the problems with the current system were emphasised, rather than the focus being placed on how to move forward.
At the same time, some practical suggestions were shared by participants during the last panel session and on Twitter:
Terms such as excellence and quality need to be well defined, so that scientists know what the criteria are on which they are evaluated.
The use of metrics such as the h-index and impact factor should be avoided.
Review committees should change in composition to achieve a change in how proposals are evaluated.
Sharing of data and code, open science practices and science communication efforts should be rewarded appropriately, rather than seen as second class citizens in research. (This could be implemented through, for example, adding an additional “free choice” category in research proposals, as mentioned before.)
PhD candidates should not be required to produce four publications in order to defend their thesis. As Stan Gielen stated, the PhD is a training programme, and you can be a good scientist with only one paper. Jeroen Geurts added that we should stop asking about publications. These requirements should be dropped by the universities as well as by the funding agencies.
Supervisors should be chosen for their leadership, social and management skills in order to be able to support future scientists, and there should be consequences for supervisors who are not up to these tasks.
The Spinoza Prize should not be a prize for individuals but, like the Stevin Prize, be awarded to teams, as previously suggested by Stan Gielen during the Plan S consultation meeting in January 2019.
All author lists on publications should become alphabetical, so that people are forced to look up the contribution that each author made to the publication (see the CRediT taxonomy).
The session organised by ZonMw and NWO was an important first step towards rewarding and recognising the diversity among the scientists at Dutch universities. It was the first nation-wide consultation meeting to discuss the importance of introducing changes in the academic rewards system. The fact that the meeting was attended by so many interested colleagues (more than 400 participants from diverse stakeholder groups, so that the meeting had to be held in a building previously used as an aircraft hangar) demonstrates its timeliness and relevance.
Very importantly, ZonMw and NWO already signed the DORA declaration on the 18th of April 2019 and proposed changes and concrete actions aimed at reducing the dependence on bibliometric indicators in the evaluation of research and researchers. Next on the agenda is the publication of a position paper on research evaluation criteria by the VSNU, NFU, KNAW, ZonMw and NWO, which will be made available in September. However, driving an important systemic change that involves multiple stakeholder groups is not easy, and it is crucial that the process is done carefully, with thorough stakeholder consultations. Therefore, publication of the position paper will be followed by pilots and smaller consultation meetings in order to discuss the modifications (or “mutations”, as described by Ionica Smeets) that are required to change the evaluation system. The Netherlands seems to be leading this change.
During the day we had a full agenda of valuable presentations and discussions on the topic of research assessment. Colleagues from several European universities presented case studies about current efforts at their institutions. All the presentations are available on the event’s website. Therefore, in this blog post, we don’t discuss individual talks and statements but offer some wider reflections.
Extreme pressure is not conducive to quality research
The first notion, repeated by several presenters and participants, was that the extreme work pressure contemporary academics face is not conducive to high-quality research. To succeed under the current rewards and incentives system, it is no longer enough to focus on explaining natural phenomena through series of questions and tests, following the principles of scientific methodology, as 19th-century scientists did; 21st-century researchers need instead to concentrate on publishing as many papers as possible, in certain journals, and on securing as many grants as possible.
You do not need to be a superhero – the importance of Team Science
Extreme work pressure has multiple causes. One significant factor is that academics are currently required to excel at everything they do. They need to do excellent research, publish in high impact factor journals, write and secure grants, initiate industry collaborations, teach, supervise students, lead the field, and much more. Yet it is rare for one person to have all the necessary skills (and time) to perform all these tasks.
Several talks proposed that research assessment shouldn’t focus on individual researchers, but on research teams (‘Team Science’). In this approach, team members get recognition for their diverse contributions to the success of the whole group.
The Team Science concept is also linked to another important aspect of research evaluation: leadership skills. In a traditional research career progression, the academics who get to the top of the career ladder are those who are the most successful in doing research (traditionally measured by the number of publications in high impact factor venues). This does not always mean that those researchers have the leadership skills (or had the opportunity to develop them) necessary to build and sustain collaborative teams.
Rik Van de Walle, Rector of Ghent University in Belgium, emphasised this by explaining that the university’s new way of assessing academics will place a strong focus on the development of leadership skills, thereby helping to sustain and embed good research.
“Darling, we need to talk”
There was a strong consensus about the necessity of continuous dialogue while revising the research assessment process. Researchers are the main stakeholders affected by any changes in the process, and therefore they need to be part of the discussions around changing the rewards system, rather than change being unilaterally decided by funders, management and HR services. To be part of the process, researchers need to understand why the changes are necessary and share the vision for change. As Eva Mendez summarised, if there is no vision, there is confusion. Researchers need to share this vision, as otherwise, they can indeed become confused and frustrated about attempts to change the system.
In addition, research assessment involves multiple stakeholders, and because of that, all these different stakeholders need to be involved and take action in order for successful systemic changes to be implemented. Consultations and discussions with all these stakeholders are necessary to build consensus and a shared understanding of the problems. Otherwise, efforts to change the system will lead to distrust and frustration, as summarised by Noemie Aubert Bonn with her ‘integrity football’ analogy, where no one wishes to take responsibility for the problem.
At the same time, Eva Mendez reminded us that just talking and waiting for someone else to act will also lead to disappointment. She thought that more stakeholders should act and start implementing changes in their own spheres of influence. She suggested that everyone should ask themselves the question “What CAN I do to change the reward system?”. She provided some examples: Plan S as an important initiative by funding bodies, the consequent pledges by individual researchers on their adoption of Plan S, and the FOS initiative (Full Open Science Research Group), which is designed for entire research groups wishing to commit to practising Open Science.
Conclusions – so what are we going to do?
All three of us who attended the event are working in research data support teams at university libraries. We are not directly involved in research evaluation and we are grateful to our libraries who allowed us to participate in this event to broaden our horizons and deepen our interests. That said, we reflected on Eva’s call for action and thought that besides writing a blog post, we could all contribute at least a little bit to a change in the system.
Here are our top resolutions:
Marta will work on better promotion and recognition of our Data Champions at TU Delft – researchers who volunteer their time to advocate good data management practices among their communities;
Alastair will lead the process of implementing an updated repository platform for 4TU.Centre for Research Data, which will give researchers better credit and recognition for the research data they publish;
In addition, whenever we have a chance, we will keep reminding ourselves and our colleagues of the importance of rewarding quality, and not quantity, in research. An example of that was the VU Library Live “Rethinking the Academic Reward System” talk show and podcast held at the VU Amsterdam on 14 March 2019, which revolved around the question of how to change the academic reward system to facilitate research that is open and transparent and contributes to solving key societal issues.