During the day we had a full agenda of valuable presentations and discussions on the topic of research assessment. Colleagues from several European universities presented case studies about current efforts at their institutions. All the presentations are available on the event’s website. Therefore, in this blog post, we don’t discuss individual talks and statements but offer some wider reflections.
Extreme pressure is not conducive to quality research
The first notion, repeated by several presenters and participants, was that the extreme work pressure contemporary academics face is not conducive to high-quality research. To succeed under the current rewards and incentives system, it is no longer enough to focus on explaining natural phenomena through careful questioning and testing, following the principles of scientific methodology, as 19th-century scientists did; 21st-century researchers need instead to concentrate on publishing as many papers as possible, in certain journals, and on securing as many grants as possible.
You do not need to be a superhero – the importance of Team Science
Extreme work pressure has multiple causes. One significant factor is that academics are currently required to excel at everything they do. They need to do excellent research, publish in high impact factor journals, write and secure grants, initiate industry collaborations, teach, supervise students, lead the field, and much more. Yet it is rare for one person to have all the necessary skills (and time) to perform all these tasks.
Several talks proposed that research assessment shouldn’t focus on individual researchers, but on research teams (‘Team Science’). In this approach, team members get recognition for their diverse contributions to the success of the whole group.
The Team Science concept is also linked to another important aspect of research evaluation: leadership skills. In a traditional research career progression, the academics who reach the top of the career ladder are those who are most successful at doing research (traditionally measured by the number of publications in high impact factor venues). This does not always mean that those researchers have the leadership skills (or had the opportunity to develop them) necessary to build and sustain collaborative teams.
Rik Van de Walle, Rector of Ghent University in Belgium, emphasised this by explaining that their new way of assessing academics will place a strong focus on the development of leadership skills, thereby helping to sustain and embed good research.
“Darling, we need to talk”
There was a strong consensus about the necessity of continuous dialogue while revising the research assessment process. Researchers are the main stakeholders affected by any changes in the process, and therefore they need to be part of the discussions around changing the rewards system, rather than change being unilaterally decided by funders, management and HR services. To be part of the process, researchers need to understand why the changes are necessary and share the vision for change. As Eva Mendez summarised, if there is no vision, there is confusion. Researchers need to share this vision, as otherwise, they can indeed become confused and frustrated about attempts to change the system.
In addition, research assessment involves multiple stakeholders, and all of them need to be involved and take action for successful systemic changes to be implemented. Consultations and discussions with all these stakeholders are necessary to build consensus and a shared understanding of the problems. Otherwise, efforts to change the system will lead to distrust and frustration, as summarised by Noemie Aubert Bonn with her ‘integrity football’ analogy, in which no one wishes to take responsibility for the problem.
At the same time, Eva Mendez reminded us that just talking and waiting for someone else to act will also lead to disappointment. She thought that more stakeholders should act and start implementing changes in their own spheres of influence. She suggested that everyone should ask themselves the question “What CAN I do to… change the reward system?”. She provided some examples: Plan S as an important initiative by funding bodies, and the consequent pledges by individual researchers on their plans for Plan S adoption, or the FOS initiative – Full Open Science Research Group – which is designed for entire research groups wishing to commit to practising Open Science.
Conclusions – so what are we going to do?
All three of us who attended the event are working in research data support teams at university libraries. We are not directly involved in research evaluation and we are grateful to our libraries who allowed us to participate in this event to broaden our horizons and deepen our interests. That said, we reflected on Eva’s call for action and thought that besides writing a blog post, we could all contribute at least a little bit to a change in the system.
Here are our top resolutions:
Marta will work on better promotion and recognition of our Data Champions at TU Delft – researchers who volunteer their time to advocate good data management practices among their communities;
Alastair will lead the process of implementing an updated repository platform for 4TU.Center for Research Data, which will give researchers better credit and recognition for research data they publish;
In addition, whenever we have a chance, we will keep reminding ourselves and our colleagues about the importance of rewarding quality, and not quantity, in research. An example of that was the VU Library Live “Rethinking the Academic Reward System” talk show and podcast held at the VU Amsterdam on 14 March 2019, which revolved around the question of how to change the academic reward system to facilitate research that is open and transparent and contributes to solving key societal issues.
Authors: Esther Plomp, Nicolas Dintzer, Heather Andrews, Yan Wang
On 18 March, the Data Stewards of Applied Sciences (Esther Plomp), Technology Policy and Management (Nicolas Dintzer), Architecture and the Built Environment (Yan Wang), and Aerospace Engineering (Heather Andrews) attended their first Open Science Barcamp, at Wikimedia in Berlin. A typical barcamp session does not have lengthy talks; instead, participants look for advice or initiate an interactive discussion. The barcamp provided the opportunity to join five sessions of 45 minutes each (with strict timekeeping done with the latest equipment), followed by 15-minute breaks, with four parallel choices; in total 20 sessions took place. All sessions were recorded in an etherpad. Some of the sessions were prepared before the start of the barcamp, using the etherpad; others were proposed during the barcamp itself. The sessions were organised during the barcamp in a morning and an afternoon planning session, to stimulate the proposal of new sessions for further discussing topics raised in the morning. Below, some of the sessions are outlined and briefly summarised.
Session #1 Help the trainer train a trainer
Helene Brinken and Gwen Frank from the FOSTER project wanted feedback on the train-the-trainer approach, which identifies gaps in current knowledge/skills and provides materials and training for trainers to promote the use of Open Science practices (bootcamps, the Open Science Training Handbook). As the FOSTER project ends, the materials created, which are under an open licence, will likely become available through OpenAIRE and through the partners’ projects.
Among the recommendations discussed during this session, it was agreed that Open Science trainings should be short (to allow attendance by people with crowded schedules) and take place in a context that connects with the participants (e.g., tailored to a specific field or addressing problems of daily practice). Lengthy talks are not advisable (max 20 min), but at the same time the audience cannot be expected to be familiar with the basics and/or be at the same level. If you do not want to go over the basics, make sure that participants first go through existing courses (FOSTER, Open Science MOOC) to ensure that they are on the same page. Open Science trainings should not be too evangelistic: sometimes the term ‘Open Science’ can scare people off from attending the training/workshop. Other phrasing, such as ‘good scientific practices in the digital age’, may be more attractive to participants. Trainings should also provide an environment where participants can learn from each other: for example, the Open Science communities in the Netherlands, at the universities of Utrecht, Leiden, Amsterdam and Eindhoven, follow a bottom-up strategy to promote Open Science practices among researchers.
Conclusions of the session were that 1) it is difficult to train people in Open Science practices, as we are still in a transitional phase, 2) there is no training on how to inspire the cultural change required for the transition to Open Science, and 3) although there is plenty of course material available, it is not organised in a registry. Such a registry should include interactive exercises that are properly licenced and attributed with metadata, so that materials can be found, re-used and adjusted for specific purposes/trainings.
Session #2 Participatory research
This session on participatory research, or “citizen science”, followed up on the inspirational talk given that morning by Claudia Gobel (Museum für Naturkunde Berlin) during the kick-off session. Participatory research engages people who are not employed to do research in the production of scientific knowledge. Claudia argued that Open Science discussions usually focus on developments within scientific institutions, but that science should also be open to the participation and cooperation of volunteers. This requires research results to be made available to everyone, and everyone should be welcome to comment on, improve and build on those materials.
During this session, the participants listed what they perceived as the challenges to be overcome in order to make “citizen science” possible and accelerate knowledge dissemination. Some of the key challenges identified were the scientific literacy of non-researchers, ethical constraints, ownership of and access to data, delayed dissemination of knowledge in academic research, funding for lasting solutions, and researchers’ lack of awareness of citizen initiatives. Participatory research is often not clearly specified or defined and can be very narrow in its application (limited to crowdsourcing), which does not explore the full potential of citizen science. It is very important to work together across disciplines and boundaries to improve worldwide public engagement: to transform what is thought of as ‘scientific knowledge’.
Session #5 Humanities = black sheep?
This session was chaired by Erzsébet Tóth-Czifra and Ulrike Wuttke. The discussion focused on disciplinary characteristics of the humanities regarding digitalisation and Open Science: are ‘digital humanities’ actually the ‘new’ Open Science practices, or the ‘black sheep’ of the humanities research communities?
For researchers in the humanities, Open Science poses specific challenges. More than researchers in other fields, they publish their findings in books, which are rarely open access. Their data is often personal and sensitive, and therefore difficult to share: anonymisation may lead to significant losses of information, and the conditions for re-using data are case-specific. However, there are also opportunities for the humanities to move towards Open Science. Initiatives in the humanities include DARIAH, DARIAH-DE, CLARIN, CESSDA, OPERAS, and the SSH Open Cloud project. The OpenMethods platform highlights all kinds of content types (blogs, preprints, research articles, podcasts etc.) about digital humanities methods and tools. There are also many attempts to promote Open Access publishing without charging Article Processing Charges (APCs), such as the Open Library of Humanities and Language Science Press. We also see digital leaders moving their disciplines forward, such as the German Galleries, Libraries, Archives and Museums (GLAM) institutions that collaborate and share data in the Coding da Vinci project.
Session #6 Federated Databases for sustainability
This session was chaired by Christian Busse from the German Cancer Research Center (DKFZ), who outlined some challenges of sharing research data in repositories. Although repositories strive to make their data FAIR (Findable, Accessible, Interoperable and Reusable), the repositories themselves may be difficult to find, which results in data being concentrated in a small number of large, well-known repositories.
The proposed solution is an API (Application Programming Interface) that could be implemented in local repositories (the ones researchers actively work with). This API would expose basic information about the repository and offer a unified way of discovering data. This approach would allow researchers to keep their data close (data could still be copied from one repository to another), facilitate backups, and the unified API would support the creation of a meta-search engine. This solution requires a proof of concept, as well as long-term funding for its sustainable development and deployment; how such a project can be funded in the long run is still unclear.
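As a thought experiment, the unified-API idea can be sketched in a few lines of code. Everything below is hypothetical: the `describe`/`search` interface, the record fields and the repository names are illustrative assumptions of ours, not part of any proposed standard.

```python
# Minimal sketch of a federated repository search: each local repository
# exposes the same two operations (describe itself, search its records),
# and a meta-search engine simply fans a query out over all of them.
# All names and fields here are illustrative, not an existing standard.

class LocalRepository:
    def __init__(self, name, records):
        self.name = name
        self._records = records  # list of dicts with 'title' and 'doi'

    def describe(self):
        """Basic information the unified API would expose."""
        return {"name": self.name, "size": len(self._records)}

    def search(self, term):
        """Case-insensitive title search within this repository."""
        term = term.lower()
        return [r for r in self._records if term in r["title"].lower()]


def meta_search(repositories, term):
    """Fan the query out over all repositories and merge the results."""
    results = []
    for repo in repositories:
        for record in repo.search(term):
            results.append({"repository": repo.name, **record})
    return results


repos = [
    LocalRepository("repo-a", [{"title": "River discharge 2018", "doi": "10.1234/a1"}]),
    LocalRepository("repo-b", [{"title": "Discharge model output", "doi": "10.1234/b7"}]),
]
print(meta_search(repos, "discharge"))  # records from both repositories
```

In a real deployment the `search` call would be an HTTP request to each repository's endpoint, but the shape of the idea, one small shared interface plus a thin aggregator, stays the same.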
Session #7 Open Science and IP
This session, organised by Elisabeth Eppinger and Viola Prifti from the IPACST project, focused on Open Science in research where patentable inventions are involved. Increased patenting can potentially harm science, as it forces the system to remain closed. During the session it became apparent that there is no ‘one size fits all’ solution: the degree of openness when collaborating with commercial partners varies between and within institutes, depending on the commercial companies and (inter/national) funders involved. Researchers tend to think that you can either commercialise data or open it up. This is a matter of perception, however, as there are usually ‘in-between’ options available, such as publishing the results using mock data instead of the original data (e.g. when the original data belongs to a commercial partner and has to remain closed). In some cases publishing the methods may be more important than publishing the underlying (commercial) data, as an innovative methodology can be applied to independent datasets and is therefore still a verifiable research output. Another solution is to only allow patent-free research (as done by e.g. Aarhus University and The Montreal Neurological Institute and Hospital).
Issues were raised during this session that also directly concern researchers at TU Delft, such as publishing code on GitHub and the ownership of data when master’s students are involved in research projects. Researchers at other universities also have to fill in forms before publishing code on GitHub. This is seen as an administrative burden, and there is a degree of fear of being sued when researchers do not follow their university’s or company’s valorisation rules. A powerful argument for convincing researchers to make software open source (and publish it on platforms like GitHub) is that by doing so, they avoid questions about who owns the data/software when they switch institutes. As for students involved in research projects with commercial partners, master’s students do not fall under the same regulations as university employees, which has implications for data ownership. Master’s students are generally not aware or notified of who owns the data, which can lead to problematic situations where students take the data with them when they finish their project. At TU Delft, one way to prevent this is to have the student sign a waiver before they participate in commercial research projects.
Involvement of commercial partners is not necessarily bad for scientific progress! These collaborations increase awareness of IP rights (data ownership); patents can boost researchers’ visibility; and patents can serve as a form of ‘risk management’, by patenting highly valuable research outcomes before companies do (and then opening the outcomes up to the public under a Creative Commons licence).
Session #10 Knowledge repo
In this session Michael Rustler from the FAKIN project presented their pilot of a ‘knowledge repository’, which collects information from different sources (e.g. DataCamp, GitHub, Zenodo and EndNote) and establishes and stores links between different objects (e.g. code, projects, people, publications and tools) in one place. The repository has been built using R, Hugo and GitHub/GitLab, and contents can be added in the form of text file templates. This knowledge repository allows for standardised workflows and can help implement a reproducible research process, avoiding the loss of important knowledge that results from informal workflows with insufficient documentation of procedures and poor description of decision-making processes by multiple stakeholders.
This ‘Knowledge Repo’, only 20 days old, has great potential to establish greater transparency across staff working within the same institution and to increase collaboration opportunities. One current challenge is that the platform requires technically savvy end users and is not yet ready for open contributions. One participant shared his experience of creating a knowledge repository at the ZB MED institute using XWiki. Discussion also arose among the participants about other forms of knowledge exchange, such as writing minutes, providing links and so on.
Session #11 Exchange between less + more experienced OS people, advice giving – Melanie Imming
This session, organised by Melanie Imming, aimed to establish a way to easily connect people who are looking for advice on Open Science matters, by, for example, using a Twitter hashtag. In the Netherlands the Open Science speaker registry attempts to connect people for this purpose, but Melanie argued that this platform does not make it easier to approach people, and that it is not desirable to pin this connection on a single registry or organisation. Melanie introduced the OSNL meet-up event as an example of an alternative form of connection.
Session #12 How to find Research Software for Open Science
Software has become a powerful tool for research in an era where machine-readable data is gathered by the second. Increased automation, high-performance computation and high-resolution modelling have become crucial aspects of research and have led to an exponential growth of research software. At the same time, this has increased the demand for more re-use and interoperability of software developed by the community. However, finding the right piece of code to re-use or build upon can be tricky, and the citability of research software is not yet a well-established practice.
During this session it was discussed how researchers look for and discover software to re-use or build upon. Is it necessary to have a single place where all software can be searched for? Attendees pointed out that software discovery happens mostly by word of mouth (through colleagues), by looking at references in research papers, and via services like GitLab/GitHub, Google and Twitter. It was then proposed that, instead of creating a mega-repository for all available software, discoverability is a matter of making code citable and linking it to the datasets with which it has been used. To make code citable, a persistent identifier can be assigned to a GitHub repository by linking it to Zenodo. The University of Bielefeld has implemented a similar system linked to their institutional repository. This type of approach would facilitate the links between papers, software and data.
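As a concrete illustration of this approach (the session did not prescribe a specific format, so this is an assumption on our part): the GitHub-Zenodo integration mints a DOI for each release of a repository, and a CITATION.cff file in the repository tells both humans and platforms how to cite the code. A minimal example, with placeholder names, dates and DOI:

```yaml
# CITATION.cff - citation metadata in the Citation File Format,
# picked up by GitHub and Zenodo. All values below are placeholders.
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
title: "Example analysis toolkit"
version: "1.0.0"
doi: "10.5281/zenodo.0000000"
date-released: "2019-03-18"
authors:
  - family-names: "Doe"
    given-names: "Jane"
```

Once such a file is in place, anyone who finds the software through a paper, a colleague or a search engine also finds an unambiguous, persistent way to cite it.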
The participants also raised concerns regarding the quality of software tools developed by academics (which is very difficult to assess without trying the software out) and the lack of support when relying on open-source solutions. Instead of measuring the quality of software, it was proposed that more software metadata should be provided when publishing software, so that re-users are better informed about what the software can and cannot do, and about the datasets the software has been used on.
Session #13 Vive la Open Science revolution!
This session dealt with an evaluation of the current movements in Open Science. Participants agreed that it is difficult to change current practices, as it feels like you have to completely overhaul the current system. The Open Science movement was compared to feminism, where intersectionality plays a key role: a single problem, such as ‘closed science’, cannot be tackled alone, as it is interconnected with other areas such as hierarchy and exclusivity in science. The Open Science movement is thus complex, and it remains difficult to find the common ground and shared understanding required for an Open Science revolution.
Session #15 Including OS in current PM activities
This session was attended mainly by support staff of universities and research institutes, so the focus was on sharing the problems faced when dealing with project management while complying with Open Science requirements. The most common issues included researchers not allocating project money for Open Science (e.g., costs related to data publishing); researchers not foreseeing data-sharing protocols/services when exchanging data with collaborators (particularly when dealing with sensitive data); and the persistent view of data management planning as an administrative task rather than a research-related deliverable.
In order to tackle such issues, participants agreed that a proper approach would be to establish communication with the researcher(s) during the proposal phase. Contacting researchers at an early stage would help address issues that would otherwise go unforeseen (e.g., budget, data sharing) and can also help change researchers’ culture around data management planning (and Open Science). In this respect, the Data Stewards of TU Delft present at the session shared their experiences of talking to researchers, creating awareness and trying to change the culture from within. This is definitely a difficult task to accomplish, and constant (and consistent!) communication between the different support services of the respective university/institute is crucial.
Aside from the issues mentioned above, the lack of support staff itself and the lack of Open Science infrastructure were acknowledged as a growing problem: projects need more specific support on how to manage their data and practise Open Science. This type of support is not uniquely related to ‘project management’ nor uniquely related to ‘research’. It sits in between, and this is what needs to be understood by both the support staff and the researchers (and the big bosses, of course!).
Session #16 Open vs. closed citations
This session discussed opening up citations, the references in articles, to make them openly available. A practical guide is provided in the etherpad.
Session #17 Researchers Engagement in Open Science
During our own TU Delft Data Stewards session, we aimed to bring the researcher’s perspective on Open Science and Research Data Management (RDM) to the barcamp table. We were interested in how people from various institutions interact with researchers and try to accomplish a ‘change in attitude’: away from practising Open Science because funders mandate it, and towards Open Science as the new scientific norm.
At the start of the session we briefly explained the Data Stewardship programme at TU Delft, where the Data Stewards are based at the faculties and coordinated by the TU Delft Library. Each Data Steward has a research background related to their faculty, in order to support researchers with discipline-specific RDM practices, and acts as a liaison to other services within the university, such as the legal office, ICT and the Library, whenever necessary. This connection with researchers is vital to promote Open Science, as was also evident from the Open Science Radio recording summarising another session.
Communication between different support services was a key point of the session. Local and central support should work together rather than around each other to provide high-quality support, and it should be made clear where the boundaries between the two lie. Local support gradually builds up relationships by providing discipline-specific support, which leads to a better understanding of researchers’ needs and earns their trust. Local support can then connect researchers to the more general services provided by central/library support, while at the same time increasing the awareness of these services amongst researchers.
As researchers are the main players in Open Science, we argued that they should be at the centre of the movement and should be supported in implementing best practices. This opinion was shared by the workshop participants, who argued that scientists have to take too many things into account: they are expected to have many skills and be aware of all the regulations. Their job should therefore be made as easy as possible, and tools/services for scientists should be made interoperable by the platforms/services themselves. When Open Science guidelines are aligned with the daily practices of researchers, researchers gradually become more aware of the Open Science movement, and they can amend and amplify these practices within their own field. New professional profiles are now emerging to support Open Science practices, but not every institution will be able to hire a Data Steward per faculty, as TU Delft does. The workload of the Data Stewards at TU Delft will also likely grow with the increasing awareness of their presence at the faculties and the progressively important role of RDM practices at institutional and national levels.
During the wrap-up, the meeting was briefly summarised, raising positive and negative aspects. Participants expressed distrust regarding the monitoring of Open Science progression, and noted a lack of awareness among researchers. On the positive side, it is now possible to copy best practices in Open Science from others, as long as their materials are available (such as those of the FOSTER project and the Open Science MOOC). The barcamp team offered to organise barcamps at other institutions, so perhaps we will meet again soon in the Netherlands!
We strengthen the social cohesion and interaction within the organisation by:
Supporting mobility across the campus, for example through interfaculty micro-sabbaticals.
Stimulating joint activities and knowledge exchange across the various faculties and service departments.
Strengthening relations between academic staff members and support staff.
This seemed especially relevant to our ICT-Innovation department. We are continually on the lookout for ways to support the primary processes of the university, research and education, by applying IT solutions. I decided to find myself a suitable micro-sabbatical.
Hydrologists are encouraged to use their own local models within a global hydrological model.
In order to test whether their model is working properly, the project team is developing a Python Jupyter notebook that makes it easy for hydrologists to produce the graphs and statistics that they are familiar with.
During my micro-sabbatical, I am contributing to the development of this Jupyter notebook.
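As an illustration of the kind of familiar statistic such a notebook might produce (the notebook’s actual contents are not detailed here, so this is only a sketch), the Nash-Sutcliffe efficiency is a standard goodness-of-fit measure hydrologists use to compare simulated and observed discharge:

```python
# Nash-Sutcliffe efficiency (NSE): a standard hydrological goodness-of-fit
# score. 1.0 is a perfect match; values below 0 mean the model predicts
# worse than simply using the mean of the observations.

def nash_sutcliffe(observed, simulated):
    mean_obs = sum(observed) / len(observed)
    squared_errors = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    variance = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - squared_errors / variance

# Toy discharge series (m3/s); real notebooks would load model output here.
observed = [10.0, 12.0, 14.0, 11.0]
simulated = [9.5, 12.5, 13.0, 11.5]
print(round(nash_sutcliffe(observed, simulated), 3))  # prints 0.8
```

Wrapping familiar calculations like this in a ready-made notebook lets hydrologists check a global model against their local expectations without writing the plumbing themselves.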
What did I learn?
Wi-Fi is an essential service for researchers and needs to be reliable
Standard TU Delft laptops are not adequate for research
Data for this project is hosted in Poland due to the collaboration with many partners and funding from EOSC
The team initially hosted their forecasting site on AWS, because AWS is quick to set up and works in all the countries involved. For the minimum viable product of the global hydrology model they moved to the SURFsara HPC Cloud.
If data is not open, then researchers are hesitant to use it. Their work can’t be reproduced easily, leading to fewer quality checks and less publicity
In the face of bureaucracy, cramped conditions and an ever growing number of extra required activities, our researchers’ determination and passion for their field of expertise is truly magnificent
I shall be using these insights to guide my work within the ICT-Innovation department and to feed our conversations with the Shared Service Center.
From 1st April 2019 I’ll be moving on to my next micro-sabbatical at the Chemical Engineering department of the Applied Sciences faculty. There I shall be installing molecular simulation software on a computer cluster and getting it up and running.
My ambition is to cover all 8 faculties of the TU Delft within 4 years. In October 2019 I shall be available for the next micro-sabbatical. If you have any suggestions, please do not hesitate to get in touch.
About the Author
Susan Branchett is Expert Research Data Innovation in the ICT-Innovation department of TU Delft. She has a Ph.D. in physics and many years’ experience in software development and IT. Find her at TU Delft, LinkedIn, Twitter or GitHub.
This blog expresses the views of the author.
The image of Jupiter is from here. Image credit: NASA/JPL-Caltech/SwRI/MSSS/Kevin M. Gill. License.
The hidden gem image is from here and is reproduced by kind permission of Macintyres.
The Sacramento River Delta image is from here and is reproduced under a CC-BY-2.0 license.
On the 14th of March 2019 the fourth VU Library Live talk show and podcast took place at the Vrije Universiteit Amsterdam (VU). By choosing topics that appeal to researchers and that are at the forefront of scholarly communications and research policy, this podcast series aims to bring researchers back to the library in the current age of digitalisation, in which university libraries are becoming increasingly invisible to researchers. The topic of this show was the academic reward system of the future.
We need to stop being lazy with just counting numbers of papers and citations and actually start reading stuff – Vinod Subramaniam
Vinod Subramaniam, Rector Magnificus of the VU, opened the show and claimed that we have lost sight of “the core business of the university, which is education”. Vinod stated that the academic reward system should not be based solely on research activities, let alone the traditional impact factor. As a researcher you can also have impact by communicating your results through newspapers and by providing guidelines that are used in society (such as medical guidelines or political policies). “We need to stop being lazy with just counting numbers of papers and citations and actually start reading stuff.” Vinod added that universities are at a turning point: “it is a perfect storm that is converging now, where I think a lot of things are happening where we as universities, but also grant agencies and other stakeholders have to start thinking about how do we reshape the reward system in academics”. Vinod also highlighted the timely occurrence of the meeting, right before the demonstration for education in The Hague on the 15th of March (WOinActie) against the increasing funding reductions in higher education and the very high workload of university staff.
it is a perfect storm that is converging now, where I think a lot of things are happening where we as universities, but also grant agencies and other stakeholders have to start thinking about how do we reshape the reward system in academics – Vinod Subramaniam
Maria Cruz opened the podcast with the phrase ‘publish or perish’, a strategy that is increasingly affecting the academic system, leading to high workloads and skewed research priorities. The current focus on publications as the gold standard in the evaluation process at universities and research institutes decreases the value of education and valorisation activities in scientific careers. Furthermore, the academic performance evaluation system barely takes any other academic outputs into account, such as software, data and other forms of communicating scientific research. Maria wondered if there is a way out of this system and asked: “Can we change this system to facilitate research that is open and transparent and contributes to solving key societal issues?”
Can we change this system to facilitate research that is open and transparent and contributes to solving key societal issues? – Maria Cruz
To address this question, Maria had four guest speakers around the table with her: Barbara Braams, Stan Gielen, Jutka Halberstadt, and Frank Miedema.
Barbara Braams, Assistant Professor at the VU, who recently wrote an opinion piece for the Volkskrant on the topic, agreed that we should move to a more transparent scientific system, but said that currently the number of publications on your CV is more important when you write a grant, particularly the number of first-author publications and the journals in which they are published. Barbara wants to change how researchers are evaluated and place the focus on sharing not just the articles but also scripts and data sets. “But to do so and to make sure that your data is actually usable by someone else, it takes a lot of time and effort…”, said Barbara. “I think when you’re moving towards an open science system, we should also think about how can we reward these type of efforts, because it means that if I put a lot of effort in making my data understandable for someone else … it takes time away from my publications.” She argued that it should be clear, for Early Career Researchers (ECRs) in particular, what is expected of them and which practices are rewarded, as they have to deal with short time frames in which to write grant proposals. This will be important in the coming year, as NWO aims to sign DORA in 2019 but will implement the new indicators from 2020 onwards.
we should also think about how can we reward these type of efforts, because it means that if I put a lot of effort in making my data understandable for someone else … it takes time away from my publications. – Barbara Braams
Stan Gielen, President of The Netherlands Organisation for Scientific Research (NWO), agreed with Barbara that the evaluation of researchers needs to be revisited: “I don’t care where you publish, I want to know what was your contribution to science.” Stan claimed that NWO wants to change the system, but that scientists work on an international scene. This means, according to Stan, that if we want to change the system, it will have to be in cooperation with other funding agencies, at least in Europe. As a member of the Science Europe Governing Board, Stan will seek agreement between EU funding agencies and come up with general guidelines on how to evaluate scientists: “I expect that we will have a document available by the end of this year, which will be open for public consultation…. I hope that in let’s say, first half 2020, we will have some level of agreement among all the funding agencies in Europe about the basic criteria that should be used for evaluating the scientists,” he added. NWO also agreed, in September 2018, to sign the San Francisco Declaration on Research Assessment (DORA, a set of recommendations to move away from assessing researchers using the impact factor), but has not yet signed the declaration. This will happen before the 23rd of May, when a meeting will take place to discuss the evaluation of researchers: “We decided to sign DORA and also to make statements what we are going to do before that meeting,” said Stan. “…We will also indicate at that time what [NWO] will do to implement DORA.”
On the question of how ECRs can be more actively involved in the process of change and the evaluation of researchers, Stan answered that The Young Academy is invited to participate actively in the consultation processes in the Netherlands and Europe. Stan added, “we are in close contact with VSNU, the society of Dutch universities, because they’re talking now with the other communities about the academic evaluation system. [NWO] should make sure that our criteria overlap or are the same as those that are used by universities to evaluate the research component.”
Jutka Halberstadt, Assistant Professor at the VU, is of the opinion that the impact factor should not matter but that we should focus on what benefits society. Her work centres around valorisation. Jutka describes valorisation as “using our knowledge from teaching and research to help build a better society, to have societal impact.” She thinks that research should be available to society, and said that “we should make it understandable, usable to society… we should be in close contact with society to see what the societal needs are so we can translate those back to relevant research questions.” She adds that “in an ideal world [we should] also do the research in close collaboration with partners in society because I think that will make the research better. And for me, having societal relevance is a vital aspect of being an employee of the VU University.”
Jutka developed a national standard for obesity in collaboration with healthcare professionals and patient organisations in a project called ‘Care for Obesity’, funded by the Dutch Ministry of Health. No one knows how to measure the impact of these standards, even though they have enormous societal impact. Instead of prioritising publications in academic journals, the project produces other outputs such as blogs, questionnaires, guidelines and workshops for healthcare professionals: tools that are scientifically based and practical for people to use. Yet these research outputs are not valued at the same level as scientific publications. She thinks that universities should focus more on collaborative efforts and use altmetrics, e.g. how many times a name is mentioned in the news. “It won’t be a perfect system,” said Jutka, “but it’s something we can develop and work with.” She added that researchers should not be obliged to excel in education, research and valorisation at the same time, as this is “really a lot to ask” of them.
Barbara also highlighted the current focus of the reward systems on the excellence of the individual. “I think one of the great things that we can also use from this transition to open science is make more space for other type of scientists and work as a team. But how are we going to do that, for instance, in a grant system?” Stan answered that NWO is not going to implement separate grants for different types of expertise, but recognises the worth of team effort. For example, the Spinoza Prize (the highest award in Dutch science) is awarded to an individual, but as Stan put it: “every person who gets the Spinoza Prize spends quite a lot of time explaining and thanking everyone who helped [them]”. Stan mentioned that the Spinoza Prize should therefore become a team award but did not comment on whether this change would actually be implemented. Instead, he thinks that “it’s up to the HRM policy of universities to make sure that they have excellent teams.” This may be difficult for universities to implement when the funders decide where the money flows.
Frank Miedema, Vice Rector for Research and chair of the Open Science Programme at Utrecht University and UMC Utrecht, thinks we are on the way to recognising the many forms of excellence. “[As a scientist] you want to produce real significant knowledge”, said Frank. According to him, there are frustrations because this work is currently not rewarded. “…a couple of years ago, this was still considered as taboo, especially at NWO,” he said, “But now we have it on the table”. When Maria asked if we should break free of the bibliometric mindset, he said that this dependence on the impact factor has to stop, but that there are “many people who are addicted, especially of course, the people who do well in the system.” Frank is also involved in the evaluation of the current research evaluation protocol and thinks that more forms of excellence need to be rewarded. He warned that the road is long and difficult and may scare some people off, as scientists are very insecure when it comes to changes in the academic system. Frank also raised the question of whether the review committees of NWO were trustworthy.
It will take I think, 10, maybe 20 years until the mindsets of the reviewers have really changed – Stan Gielen
Stan indicated that it will take multiple years, perhaps even decades, before the new evaluation system is fully in place, because the review panels have to incorporate the new instructions. Instructing reviewers has a positive effect, he said, but “it will take I think, 10, maybe 20 years until the mindsets of the reviewers have really changed.” During the transitional phase the rewards for more traditional scientists will not be restricted, because that would be too soon; instead, the new criteria will be implemented as the primary criteria over the next two years. Stan thinks that researchers will be prompted by questions in the grant proposals: “we ask you to come up with a short narrative explaining first of all, why this is an important problem, what the impact will be scientifically, but also societal impact and why you or your team is qualified to pursue this project.” This can be done by listing your open science track record and shared data sets alongside the publications. Stan said that it is up to the researcher “to explain what you have done for open science and how you will pursue these activities when your grant application will be funded…” Barbara shed some more light on the complexities of the transitional phase. She needs to ask for informed consent to share the data of her research, and even then it will take 4-5 years before the data is available. Barbara explained that “there are so many differences in different fields. Some disciplines need more time to open up their data sets and panellists should be made aware of these differences,” she added.
We ask you to come up with a short narrative explaining first of all, why this is an important problem, what the impact will be scientifically, but also societal impact and why you or your team is qualified to pursue this project. – Stan Gielen
When Barbara asked how universities are going to support their staff in the transition to open science, Stan answered that “we need data stewards, because we should not …. put the burden on the scientist.”
We need data stewards, because we should not put the burden on the scientist – Stan Gielen
The answer to Maria’s final question, on what they would advise young scientists with traditional supervisors to do, was to take matters into their own hands. Young scientists should follow their hearts, as Jutka put it. Barbara agreed and said that the young generation will move this forward. Stan thinks supervisors should let young people bring in their own expertise: “if you stick to your own principles, you will be lost in four years.” Frank, however, had a different view. “I think to put the burden on the young academics … [is] not really fair, because they have the least power in the system,” he said. Frank thinks that deans and rectors need to take steps in this “power game” and decide on the right incentives for the academic leaders. The move towards a more transparent, open and societally relevant way of practising science thus requires the effort of all stakeholders: researchers at all academic career levels as well as support staff, funding agencies, universities, libraries and the general public.
I love open science. Since you are reading a scientific blog, I believe it is likely that you also support many open science ideas. Indeed, easy access to publications, code, and research data makes research easier to reuse, while also ensuring transparency of the process and better quality control. Unfortunately, the academic community is extremely conservative and it just takes forever for new standards to become commonplace.
The push for change in scientific practice comes from many directions.
Many funding agencies now require that all publications funded by them are publicly accessible. The upcoming Plan S would go further and allow only open access publications for all publicly funded research.
Frequently, when submitting a grant proposal these days, one must also include a data management plan.
The glossy journals in our field tighten their data publication requirements (see Nature and Science).
At the same time there are multiple grassroots initiatives for setting up open access community-run journals: SciPost and Quantum.
Also as individual researchers we can do a lot. For example, our group routinely publishes the source code and data for our projects. Recently Gary Steele and I proposed to our department that every group pledges to publish at least the processed data with every single publication. This is miles away from the long-term vision of publishing FAIR data, but it is a step in the right direction that does not cost too much effort and that we can do right now. We were extremely pleased when our colleagues agreed with our reasoning and accepted the proposal.
The policy changes and initiatives help improve the practice, but policy changes are slow, and grassroots initiatives require extra work and might require convincing skeptically minded colleagues. Interestingly, I realized that there is another way to promote open science, which doesn’t have any of those drawbacks. Instead it is awesome from all points of view:
It does not require any effort on your side.
It has an immediate effect.
It helps researchers to do better what they are doing anyway.
Almost too good to be true, isn’t it? I am talking about one situation where every researcher is in a position of power: reviewing papers. The job of a reviewer is to ensure that the paper is correct, and that it meets a quality standard. As soon as the manuscript is even a bit complex, one cannot assert its correctness without examining the data and the code that are used in it. Likewise, if the data and the code comprise a significant part of the research output, the manuscript quality is directly improved if the code and the data is published as well.
Therefore I have decided that a part of my job as a reviewer is to ensure that the code and the data are available for review whenever they are sufficiently nontrivial. I have requested the code and the data on several occasions, following this request with a suggestion to also publish both.
I was pleasantly surprised by the outcome. Firstly, nobody wants to argue against a reasonable request by a referee. Secondly, the authors are often happy to share the results of their work and do a really decent job of it. Finally, on more than one occasion, requesting the data was already enough for the authors to find a minor error in their manuscript and fix it. In the current system, where publishing this supplementary information does not bring any benefit, the authors are seldom motivated to make their code understandable and their data accessible. Once a reviewer requests the data and the code, the situation changes: now whether the paper gets published also depends on the result of this additional evaluation.
So from now on, whenever I review a manuscript, in addition to any other topics relevant to the review, I am going to write the following:
The obtained data as well as the code used in its generation and analysis constitute a significant part of the research output. Therefore in order to establish its correctness I request that the authors submit both for review. Additionally, for the readers to be able to perform the same validation I request that the authors upload the data and the code to an established data repository (e.g. Zenodo/figshare/datadryad) or as supplementary material for this submission.
Footnotes:
One has to note that the data management plans are mostly overlooked during the review.
Full disclosure: I’m a member of the SciPost editorial college.
Obviously, I’ll adjust this bit if the paper doesn’t have code or data to speak of.
Consider that bit of text public domain and use it as you see fit.
Authors: Heather Andrews, Nicolas Dintzner, Alastair Dunning, Kees den Heijer, Santosh Ilamparuthi, Jeff Love, Esther Plomp, Marta Teperek, Yasemin Turkyilmaz-van der Velden, Yan Wang
From February 2019 onwards, with the appointment of the data steward at the Faculty of Electrical Engineering, Mathematics and Computer Science (EEMCS), the team of data stewards is complete: there is a dedicated data steward for every faculty at TU Delft. Therefore, the work in 2019 focuses on embedding the data stewards within their faculties, on policy development, and on making the project sustainable beyond the current funding allocation.
The document below outlines high-level plans for the data stewardship project in 2019.
Engagement with researchers
In 2019, the data stewards will (among others) apply the following new tactics to increase researchers’ engagement with research data management:
Meeting with all full professors
Inspired by the successful case study at the faculty of Aerospace Engineering, data stewards will aim to meet with all full professors at their respective faculties.
Development of training resources for PhD students and supervisors
Ensure that appropriate training recommendations and online data management resources are available for PhD students to help them comply with the requirements of the TU Delft Research Data Framework Policy. These should include:
Appropriate resources for PhD students, e.g. support for data management plan preparation, and/or data management training for PhD students
Support for PhD supervisors, e.g. data management guidance and data management plan checklists for PhD supervisors
Online manuals/checklists for all researchers, e.g. information on TU Delft storage facilities, how to request a project drive, how to make data FAIR
Support for data management plans preparation
Ensure that researchers at the faculty are appropriately supported in writing data management plans:
At the proposal stage of projects, researchers are notified about available support for writing the data paragraph by the contract managers and/or project officers of their department
All new grantees are contacted by the data stewards with an offer of data management and data management plan writing support
Training resources on the use of DMPonline, which will be used by TU Delft for writing Data Management Plans, are available and known to faculty researchers
Coding Lunch & Data Crunch
Organise monthly 2h walk-in sessions for code and data management questions for faculty researchers. Researchers will be supported by all data stewards and the sessions will rotate between the 8 faculties.
The Electronic Lab Notebooks trial
Following up on the successful Electronic Lab Notebooks event in March 2018, a pilot is being set up to test Electronic Lab Notebooks at TU Delft in 2019. The data stewards from the faculties of 3mE and TNW are part of the Electronic Lab Notebooks working group and are in contact with interested researchers who will be invited to get involved in the pilot.
Further develop the data champions network at TU Delft:
Ensure that every department at every faculty has at least one data champion
Develop a community of faculty data champions by organising a meeting every two months on average
Organise two joint events for all data champions at TU Delft and explore the possibility of organising an international event for data champions in collaboration with other universities
Faculty policies and workflows
In 2019, all faculties are expected to develop their own policies on research data management. However, successful implementation of these policies will depend on creating effective workflows for supporting researchers across the research lifecycle. Therefore, the following objectives are planned for 2019:
Draft, consult on and publish faculty policies on research data management.
Develop a strategy for faculty policy implementation
Develop effective connections and workflows to support researchers throughout the research lifecycle (e.g. contacting every researcher who was successfully awarded a grant)
A survey on research data management needs was completed at 6 TU Delft Faculties (EWI, LR, CiTG, TPM, 3mE and TNW). In 2019, the following activities are planned:
Publish the results of the survey conducted in the 6 faculties in a peer-reviewed journal
Conduct the survey at BK and IDE – first quarter of 2019
Re-run the survey at EWI, LR, CiTG, TPM, 3mE and TNW – September 2019
Compare the results of the survey in 2017/2018 with the results from 2019 of the re-run survey and publish faculty-specific reports with their key reflections on the Open Working blog
Survey data visualisation in R or Python
The visualisation of the 2017/2018 RDM survey results was available in Tableau, which is proprietary software. To adhere to the openness principle, and also to practise data carpentry skills (see below), the 2019 data visualisation will be conducted in R.
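To illustrate the idea of an open, scriptable alternative to a proprietary dashboard, here is a minimal sketch in Python using only the standard library. The response counts are purely hypothetical (not actual survey results), and the same approach translates directly to R or to a plotting library:

```python
# A minimal sketch of an open, scriptable alternative to a Tableau dashboard:
# a text bar chart built with only the Python standard library.
# The response counts below are hypothetical, not actual survey results.

responses = {"EWI": 42, "LR": 35, "CiTG": 51, "TPM": 28, "3mE": 47, "TNW": 60}

def text_bar_chart(counts, width=30):
    """Return one line per category, bars scaled to the largest count."""
    peak = max(counts.values())
    lines = []
    for name, count in sorted(counts.items(), key=lambda kv: -kv[1]):
        bar = "#" * round(width * count / peak)
        lines.append(f"{name:>5} | {bar} {count}")
    return "\n".join(lines)

print(text_bar_chart(responses))
```

Because the script is plain text, it can be versioned and shared alongside the survey data, which is precisely what the openness principle asks for.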
Training and professional development
On top of specific training on data management, in 2019 data stewards will invest in training in the following areas:
Software Carpentry skills
Code management is now an integral part of research and is likely to become even more important in the coming years. Therefore, as a minimum, every data steward should complete the full Software Carpentry training as an attendee in order to be able to communicate effectively with researchers about their code management and sharing needs. In addition, data stewards are strongly encouraged to complete the Carpentry instructor training to further develop their skills and capabilities.
Participation in disciplinary meetings
In order to keep up with the research fields they are supporting, data stewards will also participate in at least one meeting, specific to researchers from their discipline. Giving talks about data stewardship / open science during disciplinary meetings is strongly encouraged.
In addition to the dedicated events for the Data Champions, the team is planning to organise the following events in 2019:
Software Carpentry workshops
March & November 2019 – at TU Delft
May 2019: at Eindhoven
October 2019: at Twente
Workshop on preserving social media data – workshop which will feature presentations from experts in the field of social media preservation, as well as investigative journalists (e.g. Bellingcat)
Conference on effectively collaborating with the industry (managing the tensions between open science and commercial collaborations)
Individual roles and responsibilities
Some data stewards have also undertaken additional roles and responsibilities:
Yasemin: Electronic Lab Notebooks, Data Champions
Esther: Electronic Lab Notebooks, DMP registry
Kees: Software Consultancy Lead
Sustainable funding for data stewardship
The current funding for the data stewardship project (salaries for the data stewards) comes from the University’s Executive Board and runs until the end of 2020. However, the importance of the support offered to the research community by the data stewards has already been recognised not only by the academic community at TU Delft but also by support staff.
In order to ensure the continuation of the data stewardship programme, and for TU Delft not to lose these highly skilled, trained and sought-after professionals, it is crucial that a source of sustainable funding is identified in 2019.
This is an interview between Maria Cruz and Prof. Bas Teusink, the Scientific Director of the Amsterdam Institute for Molecules, Medicines and Systems (AIMMS), about his experience with having dedicated data management support for his research group.
“I hired the right person at the right time”, says Prof. Bas Teusink, Scientific Director of the Amsterdam Institute for Molecules, Medicines and Systems (AIMMS). His institute was founded in 2010 on the back of major breakthroughs in the fields of molecular, cellular and systems biology. Recently, rapid changes in the pace of data acquisition and in data volume in this field called for the hiring of a dedicated Research Data Manager.
Why has data management become so important in your field?
“At AIMMS our focus is on molecular life sciences – the study of molecules in living systems, of how molecules affect living systems, and of the molecular mechanisms of how drugs work, how toxic compounds work, and how cells work. For biologists, the generation of data is getting less and less labour-intensive, and the interpretation of the data is getting more and more complicated.”
Does this mean that researchers need to acquire new skills?
“Yes, bioinformatics, data analysis, and data science are becoming more and more prominent in biology and also in chemistry. It would be a good idea for any bachelor programme in the life sciences to include proper data management, data science, and a little bit of programming and maybe bioinformatics in the curriculum. We’re developing such courses for the bachelor students of the Faculty (of Science).”
Why did you think a dedicated research data manager was needed?
“People in the life sciences community have been talking a lot about the importance of Research Data Management (RDM). When you think about biobanks and other types of big data collections, it is obvious that you have to sort out your data management, but what about a PhD student doing simple experiments in the lab using Excel to process data? How do we help them? As a Principal Investigator, I have no idea how to instruct my students in RDM. I’m not an expert. So I needed support. I needed somebody who actually has the time to look up what tools are available and who can translate general policies and general infrastructure into daily practical solutions that fit our local needs. There’s a huge gap between policy and implementation for people doing the daily work. We need discipline-specific support and we need hands-on help.”
What skills did you look for in a data manager?
“I wanted somebody who understands our field of work, who understands the data management side of things, and who also understands the technologies.”
Was it difficult to find the right person for the job?
“I happened to have Brett Olivier in my group and I could convince management that research data support was worth the investment. Brett is a biochemist with a strong theoretical background, but he also knows how to do experiments, so he can talk with everybody. He has also moved into programming and writing scientific software. Having this technical background means he can talk with people in IT. So he is the perfect guy.”
How is this position financed?
“We have found a pragmatic way of financing Brett’s position. And that is by project money. When we write a project proposal, if the funders find data management important, we budget a certain amount for data management, say 20K. If we get 5 projects, then we can afford a data manager just from project money. So far I’ve been able to fund Brett almost completely from my own projects.”
Is this funding model sustainable?
“I think it shouldn’t be difficult to finance somebody with this model for the long term. The university or the institute will have to take the risk, of course. If the money doesn’t come in, if the projects are not funded, then somebody has to pay the salary of the data manager. What is interesting with this model is that the chance of getting your project funded increases, because research data management is being taken more and more seriously by the funding agencies.”
What is Brett doing in concrete terms?
“He writes the Data Management Plans (DMPs) for project proposals and supports their implementation. He has been actively involved in the piloting and implementation of a new data management platform with AIMMS researchers. Brett has developed encoding standards for computational models of biological systems. Because of that, he knows how important it is to annotate data using appropriate ontologies and thereby making them more FAIR (Findable, Accessible, Interoperable and Reusable). Many scientists don’t know what an ontology is, let alone use it. Brett will address this and related RDM issues by providing advice on what the current best standards, tools and practices are in the field.”
“Well implemented data strategies can contribute to the quality and efficiency of a research project.”