Category: conferences

The main obstacles to better research data management and sharing are cultural. But change is in our hands


This blog post was originally published by the LSE Impact Blog.


Recommendations on how to better support researchers in good data management and sharing practices are typically focused on developing new tools or improving infrastructure. Yet research shows the most common obstacles are actually cultural, not technological. Marta Teperek and Alastair Dunning outline how appointing data stewards and data champions can be key to improving research data management through positive cultural change.

This blog post is a summary of Marta Teperek’s presentation at today’s Better Science through Better Data 2018 event.


By now, it’s probably difficult to find a researcher who hasn’t heard of journal requirements for sharing research data supporting publications. Or a researcher who hasn’t heard of funder requirements for data management plans. Or of institutional policies for data management and sharing. That’s a lot of requirements! Especially considering data management is just one set of guidelines researchers need to comply with (on top of doing their own competitive research, of course).

All of these requirements are in place for good reasons. Those who are familiar with the research reproducibility crisis and understand that missing data and code are among its main causes need no convincing of this. Still, complying with the various data policies is not easy; it requires time and effort from researchers. And not all researchers have the knowledge and skills to professionally manage and share their research data. Some might even wonder what exactly their research data is (or how to find it).

Therefore, it is crucial for institutions to provide their researchers with a helping hand in meeting these policy requirements. This is also important in ensuring policies are actually adhered to and aren’t allowed to become dry documents which demonstrate institutional compliance and goodwill but are of no actual consequence to day-to-day research practice.

The main obstacles to data management and sharing are cultural

But how best to support researchers in good data management and sharing practices? The typical answers to this question are "let's build some new tools" or "let's improve our infrastructure". When thinking about how to provide data management support to researchers at Delft University of Technology (TU Delft), we decided to resist this initial temptation and do some research first.

Several surveys asking researchers about barriers to data sharing indicated that the main obstacles are cultural, not technological. For example, in a recent survey by Houtkoop et al. (2018), psychology researchers were given a list of 15 different barriers to data sharing and asked which ones they agreed with. The top three reasons preventing researchers from sharing their data were:

  1. “Sharing data is not a common practice in my field.”
  2. “I prefer to share data upon request.”
  3. “Preparing data is too time-consuming.”

Interestingly, the only two technological barriers – "My dataset is too big" and "There is no suitable repository to share my data" – were among the three at the very bottom of the list. Similar observations can be made based on survey results from Van den Eynden et al. (2016) (life sciences, social sciences, and humanities disciplines) and Johnson et al. (2016) (all disciplines).

At TU Delft, we already have infrastructure and tools for data management in place. The ICT department provides safe storage solutions for data (with regular backups at different locations), while the library offers dedicated support and templates for data management plans and hosts the 4TU.Centre for Research Data, a certified and trusted archive for research data. In addition, dedicated funds are made available for researchers wishing to deposit their data into the archive. This being the case, we thought researchers might already be receiving adequate data management support and that no additional resources were required.

To test this, we conducted a survey among the research community at TU Delft. To our surprise, the results indicated that despite all the services and tools already available to support researchers in data management and sharing activities, their practices needed improvement. For example, only around 40% of researchers at TU Delft backed up their data automatically. This was striking, given the fact that all data storage solutions offered by TU Delft ICT are automatically backed up. Responses to open questions provided some explanation for this:

  • “People don’t tell us anything, we don’t know the options, we just do it ourselves.”
  • “I think data management support, if it exists, is not well-known among the researchers.”
  • “I think I miss out on a lot of possibilities within the university that I have not heard of. There is too much sparsely distributed information available and one needs to search for highly specific terminology to find manuals.”

It turns out, again, that the main obstacles preventing people from using existing institutional tools and infrastructure are cultural – data management is not embedded in researchers’ everyday practice.

How to change data management culture?

We believe the best way to help researchers improve data management practices is to invest in people. We have therefore initiated the Data Stewardship project at TU Delft. We appointed dedicated, subject-specific data stewards in each faculty at TU Delft. To ensure the support offered by the data stewards is relevant and specific to the actual problems encountered by researchers, data stewards have (at least) a PhD qualification (or equivalent) in a subject area relevant to the faculty. We also reasoned that it was preferable to hire data stewards with a research background, as this allows them to better relate to researchers and their various pain points, since they are likely to have had similar experiences in their own research practice.

Vision for data stewardship

There are two main principles of this project. Crucially, the research must stay central. Data stewards are not there to educate researchers on how to do research, but to understand their research processes and workflows and help identify small, incremental improvements in their daily data management practices.

Consequently, data stewards act as consultants, not as police (the objective of the project is to improve cultures, not compliance). The main role of the data stewards is to talk with researchers: to act as the first contact point for any data-related questions researchers might have (be it storage solutions, tools for data management, data archiving options, data management plans, advice on data sharing, budgeting for data management in grant proposals, etc.).

Data stewards should be able to answer around 80% of questions. For the remaining 20%, they ask internal or external experts for advice. But most importantly, researchers no longer need to wonder where to look for answers or who to speak with – they have a dedicated, local contact point for any questions they might have.

Data Champions are leading the way

So has the cultural change happened? This is, and most probably always will be, a work in progress. However, allowing data stewards to get to know their research communities has already had a major positive effect. They were able to identify researchers who are particularly interested in data management and sharing issues. Inspired by the University of Cambridge initiative, we asked these researchers if they would like to become Data Champions – local advocates for good data management and sharing practices. To our surprise, more than 20 researchers have already volunteered as Data Champions, and this number is steadily growing. Having Data Champions team up with the data stewards allows for the incorporation of peer-to-peer learning strategies into our data management programme and also offers the possibility of creating tailored data management workflows, specific to individual research groups.

Technology or people?

Our case at TU Delft might be quite special, as we were privileged to already have the infrastructure and tools in place which allowed us to focus our resources on investing in the right people. At other institutions circumstances may be different. Nonetheless, it’s always worth keeping in mind that even the best tools and infrastructures, without the right people to support them (and to communicate about them!), may fail to be widely adopted by the research community.

TU Delft’s presence at the International Data Week 2018 in Botswana


Yasemin Turkyilmaz-van der Velden and Marta Teperek are very privileged to represent TU Delft at the International Data Week 2018 in Gaborone, Botswana. Yasemin has been awarded a very competitive grant for Early Career researchers to attend the conference.

We are working hard to make sure that we get the most out of our attendance. Yasemin is presenting:

Marta’s contribution:

The full programme of the International Data Week can be accessed online.


Workshop Report: Software Reproducibility – The Nuts and Bolts

Authors (in alphabetical order): Maria Cruz (VU), Marc Galland (UvA), Carlos Martinez (NL eScience Center), Raúl Ortiz (TU Delft), Esther Plomp (VU), Anita Schürch (UMCU), Yasemin Türkyilmaz-van der Velden (TU Delft)


Based on the contributions from workshop participants (in alphabetical order): Joke Bakker (University of Groningen), Jochem Bijlard (The Hyve), Mattias de Hollander (NIOO-KNAW), Joep de Ligt (UMCU), Albert Gerritsen (Radboud UMC), Thierry Janssens (RIVM), Victor Koppejan (TU Delft), Brett Olivier (Vrije Universiteit Amsterdam), Raúl Ortiz (TU Delft), Esther Plomp (Vrije Universiteit Amsterdam), Jorrit Posthuma (ENPICOM), Anita Schürch (UMCU)


On 2 October 2018, Maria Cruz (VU), Marc Galland (UvA), Carlos Martinez (NL eScience Center), and Yasemin Türkyilmaz-van der Velden (TU Delft) facilitated a workshop titled "Software Reproducibility – The Nuts and Bolts", as part of the DTL Communities@Work 2018 event held in Utrecht, the Netherlands.

Besides the four organisers, there were 24 workshop participants, including researchers, research software engineers/developers, data stewards and others in research support roles.

Below we summarise the background and rationale for the workshop, key discussions and insights, and recommendations. The description of the workshop setup, including information about the participants gathered via Mentimeter, can be found at the end of this report.

The listed authors include the four organisers and the workshop participants who actively contributed to the report. Workshop participants who agreed to be acknowledged for their contributions are also listed.

Rationale for the workshop

The starting point for the workshop was a paper published in Water Resources Research by Hut, van de Giesen and Drost (2017), which argues that carefully documenting and archiving code and research data may not be enough to guarantee the reproducibility of computational results. Alongside the use of current best practices in scientific software development, the authors recommend close collaboration between scientists and research software engineers (RSEs) to ensure scientists are aware of the latest computational advances, most notably the use of containers (e.g. Docker) and open interfaces.

As in a previous, similar workshop held at TU Delft on 24 May 2018, the participants discussed the merits of these recommendations, how they could be put into practice, and what role the various stakeholders (researchers, research software engineers, research institutions, data stewards and other research support staff) could play in this regard.

In this second edition of the workshop, the participants also made recommendations for actions that could be taken at the national level in the Netherlands to raise awareness of software sustainability and reproducibility and to implement the advice from the paper and the workshop. The key discussion points and insights from these discussions and the ensuing recommendations are summarised below, based on information recorded during the workshop in a collaborative Google Document.


In this report we define reproducibility and reusability of software as follows. Reproducibility is focused on being able to reproduce results obtained in the past – that is, use the same data and the same software to reach the same result (a Docker image may be good enough for this). Reusability is concerned with using the software in a different context than it was used in before; this could be as simple as using the same software with different data, or it may require modification of the original software (Docker images may or may not be sufficient for software reusability).
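
To make the distinction concrete, here is a minimal, hypothetical Python sketch (not from the workshop; all names are illustrative): a reproducibility check re-runs archived code on archived data and verifies that the result matches the originally published one, while reusability means applying the same code to new data.

```python
# Minimal sketch of the reproducibility/reusability distinction;
# "analysis" stands in for archived, version-pinned research software.
import hashlib
import json

def analysis(data):
    # Stand-in for the archived analysis code behind a publication.
    return {"n": len(data), "mean": sum(data) / len(data)}

def digest(result):
    # Serialise deterministically (sorted keys) so the hash depends
    # only on the result's content, then fingerprint it.
    return hashlib.sha256(
        json.dumps(result, sort_keys=True).encode("utf-8")
    ).hexdigest()

archived_data = [1.0, 2.0, 3.0, 4.0]          # the "same data"
published = digest(analysis(archived_data))   # recorded at publication time

# Reproducibility: same data + same software => the same result.
assert digest(analysis(archived_data)) == published

# Reusability: the same software applied in a new context (new data),
# which may or may not work without modifying the original code.
print(analysis([5.0, 6.0, 7.0]))
```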

Key discussion points and insights on the advice by Hut, van de Giesen and Drost

Sound but too technical advice

Overall, the groups felt that the advice was sound but too technically focussed, particularly if it is aimed at researchers. Researchers should not need to concern themselves with containers and open APIs, which are too technical for them to implement. The advice also fails to consider and recognise deeper cultural issues, such as the lack of awareness of the topic of reproducibility and reusability of research software; the lack of relevant training, tools and support; and the diversity of code.

Concerns regarding the use of containers

Docker may not necessarily be easy to use if you are not a software developer or research software engineer. There was also the concern that containers, although helpful, should not be used to mask bad coding practices. The use of containers also makes it difficult to upgrade the software. Containers make it easier to distribute software in the short term, but to make software sustainable, someone needs to understand how to update it and build a new container. This is a role for research software engineers, not researchers, as there are no easy-to-use tools that allow for the re-use of software in different containers. The other issue raised was whether Docker and other platforms will still exist in 20 years' time.

Not all code is equal

Not all software is meant to be maintained or reused. High software quality, version management, code review, etc. will all help with reproducibility and reusability, but at some point in time the software might not be sustainable anymore. Is this necessarily bad? Code from 10 years ago will probably need to be rewritten in newer languages. Defining the scope of code will help determine the level of reproducibility and reusability requirements. In particular, it is important to differentiate between single-use scripts and pipelines that are used repeatedly and/or by different people. While the former do not need to be highly maintained, the latter need to be extensively reviewed and tested. Commercial software is also an issue. In some fields of research, many scientists use Excel or MATLAB. Commercial software is often closed source, making it difficult to test, review and publish it; and sometimes the publication of the code is also not possible for IP or confidentiality reasons.

Training and raising awareness

How aware are researchers of the reproducibility crisis? Researchers need to be aware of the key features and concepts behind reproducibility and reusability of research software. These concepts are more important than any particular techniques. The first step should thus be raising awareness of these issues. People who are already aware of the reproducibility crisis, of practices conducive to reproducibility, and of their practical benefits have a responsibility to raise awareness among their departments, groups, and colleagues. Researchers also need to be aware of the possibilities and best practices in order to apply them. Training is important in this regard. Having the right tools and support is also essential. Researchers need to know who to contact for help and support and how to find the right tools.

Code review

There should be code review sessions involving all the interested parties. Code review could be similar to peer review and be done at the institutional or departmental level. Working together on software increases the quality of code, particularly if it is reviewed by multiple stakeholders. Sharing the experience and the knowledge gained from these code review sessions more widely would provide a way to advertise and advocate for the best practices in software development.

Community building

Building a community behind a particular tool or piece of software was also seen as a good way to ensure that code is maintained and upgraded. If the software is out there and there is interest in it, people will maintain it. Being a part of such a community does not necessarily require specific expertise or technical involvement. A user of a tool can very well contribute to the community by raising issues, without needing specific knowledge of the code.

Good practices in scientific software development

Good coding practices should be publicly available and widely advertised. Building software should start with clearly documented use cases, and these use cases should define the entry points for the code. Materials and methods should include the parameters for any executable. The environment configuration should also be provided alongside the code to make it reproducible. For software to be redeployable on different platforms (also through time), it needs to be well documented, including open data and workflows. Readers need to be able to understand what the purpose of the experiment was, how it was done, and, where relevant, how the data was processed. Version control and releases with DOIs are also important. Testing with proper positive and negative controls, integration, and validation are also critical to reusing software.
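
As a sketch of how a few of these practices might look in code, consider the following hypothetical Python entry point (all names and parameters are illustrative, not from the workshop): it documents its use case, exposes its parameters explicitly, and records the parameters and environment configuration alongside the output so a run can be repeated later.

```python
# Hypothetical example of a documented entry point with explicit,
# recorded parameters and environment information.
import argparse
import json
import platform
import sys

def run_model(rate, steps):
    """Documented use case: exponential growth from 1.0.

    rate:  growth rate per step (the kind of parameter a methods
           section should report for any executable).
    steps: number of steps to simulate.
    """
    values, current = [], 1.0
    for _ in range(steps):
        current *= 1.0 + rate
        values.append(current)
    return values

def main():
    parser = argparse.ArgumentParser(description="Toy model run")
    parser.add_argument("--rate", type=float, required=True)
    parser.add_argument("--steps", type=int, default=10)
    args = parser.parse_args()

    # Record parameters and environment configuration next to the
    # result, so a later run can be set up identically.
    provenance = {
        "parameters": vars(args),
        "python": sys.version,
        "platform": platform.platform(),
    }
    print(json.dumps({"result": run_model(args.rate, args.steps),
                      "provenance": provenance}))

if __name__ == "__main__":
    main()
```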

The roles of data stewards, RSEs and researchers


RSEs as ambassadors for software reproducibility

While researchers should lead when it comes to reproducibility, data stewards could help raise awareness of this important issue and of the best practices for software reproducibility. RSEs, who often work in support roles between the researchers and their software, have a key role to play as ambassadors and should be part of the driving force behind efforts towards software reproducibility. In particular, they should be creating and maintaining software development guidelines. Research support roles, including those of data stewards and RSEs, should be more clearly defined and rewarded; these roles should not be seen or performed as just a side activity. RSEs should be actively involved in the research design and publication process, and should not be seen solely as supporters of the researchers, but as collaborators. Unfortunately, the current funding schemes do not reward these activities.

Cross-expertise speed-networking

Communication and interaction between the three key stakeholders (researchers, RSEs and data stewards) was seen as a shared responsibility. However, setting up cross-expertise speed-networking events could be an easy way to connect researchers, data stewards and RSEs, and to encourage collaboration. This type of initiative could be implemented at the institutional, national, or even international level. At the institutional level, a central service desk could work as a hub to connect researchers to research support experts. Encouraging collaboration by helping researchers connect with available experts provides a way to avoid redundant solutions to similar problems. For collaborations to be fruitful, however, researchers need to understand the perspective of RSEs and data stewards, and vice versa. Domain-specificity is another barrier that can block collaboration between data stewards, RSEs and researchers.

How to encourage reproducibility in computational research?

As noted earlier, researchers should lead when it comes to reproducibility. However, they may not always be interested in reproducibility, as reproducibility does not always guarantee good science. Researchers need to be intrinsically motivated to document and review their code and to follow best practices in software management and development. Publishing a methods or software paper that includes easy-to-reuse, high-quality software will help researchers get more citations. User-friendly tools that help with software management and reproducibility will also stimulate use by researchers.

Key recommendations

Reproducibility should be enforced from the top down

Journals and funders, in particular NWO, should enforce their policies. There should be funding for reproducibility; there should also be standards, requirements and appropriate audits. Data management plans as well as software sustainability plans are essential to ensure best practices. Funders need to become more aware of software sustainability and the needs of software management. For FAIR data there are funding opportunities, but these are not available for FAIR software. There is a need to make good practices in science the de facto standard. FAIR (both for data and software) should be the rule and no longer the exception. There should also be more recognition for publishing data and code, not only papers.

A leading role for national platforms

National platforms, such as the Netherlands eScience Center, should take responsibility for leading the research community in making software and data sustainability a recognised element of the research process. There is also a need among the research community for more knowledge and awareness of the NL eScience Center and the possibilities for collaboration between researchers and RSEs. In this respect, the Netherlands eScience Center should also take the lead in promoting collaboration between RSEs and researchers.

Community building as a bottom-up approach

Besides a top-down approach, building communities from the bottom up was also recommended as a way to connect researchers with relevant research support experts. The Dutch Techcentre for Life Sciences (DTL), for example, could set up a platform to connect individual researchers with software experts. This could be in the form of national cross-expertise speed-networking events or a forum. The NL-RSE initiative could also play a role in this regard and could help raise awareness of the issues around software reproducibility and sustainability.

Training

It is crucial to educate early career researchers, who have the time and interest. Courses and training are needed at universities and at the national level. Researchers should be made aware of good practices for software development and software engineering at the earliest stages of their careers, including at the bachelor's and master's level.

Additional information

Workshop setup

The workshop session lasted two hours. It started with the organisers introducing themselves, followed by a short survey of the audience using Mentimeter, led by Yasemin Türkyilmaz-van der Velden. Maria Cruz then gave a presentation setting the scene, providing information on reproducibility and summarising the paper and the suggestions by Hut, van de Giesen & Drost (2017). Marc Galland gave a short presentation on software sustainability from the researcher’s point of view, and Carlos Martinez Ortiz gave his perspective on the same subject from the research software engineer’s point of view.

The audience was then split into four groups, with the organisers each joining a group to help facilitate the discussion. Each group was allotted 45 minutes to answer the following questions in a collaborative Google Document:

  1. How can the advice by Hut, van de Giesen & Drost be put into practice?
  2. Any additional advice?
  3. How can researchers, RSEs, and data stewards work together towards implementing the advice?
  4. What needs to happen at the national level in the Netherlands to raise awareness of research software reproducibility and help implement the above or any of your ideas and recommendations?

About the participants

We asked the audience a few questions, using Mentimeter, to get familiar with their backgrounds and their experiences with research software. As seen in the responses below, we had a mixed audience of researchers, research software engineers, data stewards, and people in other research support positions. As expected at a DTL conference, which focussed on the life sciences, most participants had a research background in this area, ranging from biomedical sciences to bioprocess engineering and plant breeding. All participants had experience with research software.

[Mentimeter results: participants' roles, research backgrounds, and experience with research software]

Almost all participants agreed that there is a reproducibility crisis in science, reflecting the high level of awareness among the audience of this important issue. Before moving to the presentation about software reproducibility, we asked the participants what came to their mind about this topic. The answers, which ranged from version control, documentation and persistent identifiers to Git, containers, and Docker, clearly show that the audience was already very familiar with the topic of software reproducibility. In line with this, when we asked what they were doing themselves in terms of software reproducibility, we received very similar answers, with version control taking the lead among the answers to both questions.

[Mentimeter results: reproducibility crisis poll and word clouds on software reproducibility practices]

Resources

GDPR in research: opportunity to collaborate and to improve research processes


On Thursday 30 August and Friday 31 August, TU Delft Library hosted two events dedicated to the new European General Data Protection Regulation (GDPR) and its implications for research data. Both events were organised by Research Data Netherlands, a collaboration between the 4TU.Centre for Research Data, DANS and SURF (represented by the National Research Data Management Coordination Point).

First: do no harm. Protecting personal data is not at odds with data sharing

On the first day, we heard case studies from experts in the field, as well as from various institutional support service providers. Veerle Van den Eynden from the UK Data Service kicked off the day with her presentation, which clearly stated that the need to protect personal data is not at odds with data sharing. She outlined the framework provided by the GDPR which makes sharing possible, and explained that when it comes to data sharing one should always adhere to the principle "do no harm". However, she reflected that too often both researchers and research support services (such as ethics committees) prefer to avoid any possible risks rather than carefully consider and manage them appropriately. She concluded by providing a compelling case study from the UK Data Service, where researchers were able to successfully share data from research on vulnerable individuals (asylum seekers and refugees).

From a one-stop shop solution to privacy champions

We subsequently heard case studies from four Dutch research institutions (Tilburg University, TU Delft, VU Amsterdam and Erasmus University Rotterdam) about their practical approaches to supporting researchers working with personal research data. Jan Jans from Tilburg explained their "one stop shop" form, which, when completed by researchers, covers all the requirements related to GDPR, ethics and research data management. Marthe Uitterhoeve from TU Delft said that Delft was developing a similar approach, but based on data management plans. Marlon Domingus from Erasmus University Rotterdam explained their process based on defining different categories of research and determining the types of data processing associated with them, rather than trying to list every single research project at the institution. Finally, Jolien Scholten from VU Amsterdam presented their idea of appointing privacy champions, who receive dedicated training on data protection and act as the first contact points for questions related to the GDPR within their communities.

There were lots of inspiring ideas, and there was consensus in the room that it would be worth re-convening in a year's time to evaluate the different approaches and share lessons learned.

How to share research data in practice?

Next, we discussed three different models for helping researchers share their research data. Emilie Kraaikamp from DANS presented their strategy for providing two different access levels to data: open access data and restricted access data. Open datasets consist mostly of research data which are fully anonymised. Restricted access data need to be requested (via an email to the depositor) before access can be granted (the depositor decides whether or not to grant access).

Veerle Van den Eynden from the UK Data Service discussed their approach, based on three different access levels: open data, safeguarded data (equivalent to DANS's "restricted access data") and controlled data. Controlled datasets are very sensitive, and researchers who wish to access such datasets need to undergo a strict vetting procedure. They need to complete training, their application needs to be supported by a research institution, and they typically access such datasets in safe locations, on safe servers, and are not allowed to copy the data. Veerle explained that only a relatively small number of sensitive datasets (usually from governmental agencies) are shared under controlled access conditions.

The last case study was from Zosia Beckles from the University of Bristol, who explained that at Bristol a dedicated Data Access Committee has been created to handle requests for controlled access datasets. The researchers responsible for the datasets are asked for advice on how to respond to requests, but it is the Data Access Committee that ultimately decides whether access should be granted and, if necessary, can overrule the researchers' advice. This procedure relieves researchers of the burden of dealing with data access requests.

DataTags – decisions about sharing made easy(ier)

Ilona von Stein from DANS continued the discussion about data sharing and the means by which sharing could be facilitated. She described an online tool developed by DANS (based on a concept initially developed by colleagues at Harvard University, but adapted to European GDPR needs) that lets researchers answer simple questions about their datasets and returns a tag defining whether the data is suitable for sharing and what the most suitable sharing options are. A prototype of the tool is now available for testing, and DANS plans to develop it further to see whether it could also be used to assist researchers working with data across the whole research lifecycle (not only at the final, data-sharing stage).
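
As an illustration only (this is not the actual DANS tool, and the questions, tags and rules below are invented for the example), the decision logic behind such a tagging tool could be as simple as the following Python sketch, in which a few yes/no questions about a dataset map to a coarse sharing tag:

```python
# Hypothetical sketch of DataTags-style decision logic; the questions,
# tags, and rules are illustrative, not those of the DANS prototype.
def data_tag(contains_personal_data, anonymised, consent_for_sharing):
    """Map simple questions about a dataset to a sharing tag."""
    if not contains_personal_data or anonymised:
        return "open access"
    if consent_for_sharing:
        return "restricted access"  # shareable under conditions
    return "no sharing"             # keep within the project team

# Example: personal data with consent for sharing -> restricted access.
print(data_tag(contains_personal_data=True, anonymised=False,
               consent_for_sharing=True))
```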

What are the most impactful & effortless tactics to provide controlled access to research data?


The final interactive part of the workshop was led by Alastair Dunning, Head of the 4TU.Centre for Research Data. Alastair used Mentimeter to ask attendees to judge the impact and effort of fourteen different tactics and solutions which research institutions can use to provide controlled access to research data. More than forty people engaged with the online survey, which allowed Alastair to shortlist the five tactics deemed most impactful and effort-efficient:

  1. Create a list of trusted archives where researchers can deposit personal data
  2. Publish an informed consent template for your researchers
  3. Publish a list of FAQs concerning personal data on the university website
  4. Provide access to a trusted Data Anonymisation Service
  5. Create categories to define different types of personal data at your institution

Alastair concluded that these should probably be the priorities to work on for research institutions which don’t yet have the above in place.

How to put all the learning into practice?

The second event was dedicated to putting all the learning and concepts developed during the first day into practice. Researchers working with personal data, as well as those directly supporting researchers, brought their laptops and followed practical exercises led by Veerle Van den Eynden and Cristina Magder from the UK Data Service. We started by looking at a GDPR-compliant consent form template. Subsequently, we practised data encryption using VeraCrypt. We then moved on to data anonymisation strategies. First, Veerle explained possible tactics (again, with nicely illustrated examples) for de-identification and pseudonymisation of qualitative data. This was followed by comprehensive hands-on training delivered by Cristina Magder on disclosure review and de-identification of quantitative data using sdcMicro.

Altogether, the practical exercises allowed participants to clearly understand how to work effectively with personal research data from the very start of a project (consent, encryption) all the way to data de-identification to enable sharing and re-use (whilst protecting personal data at all stages).

Conclusion: GDPR as an opportunity

I think that the key conclusion of both days was that the GDPR, while challenging to implement, provides an excellent opportunity both to researchers and to research institutions to review and improve their research practices. The key to this is collaboration: across the various stakeholders within the institution (to make workflows more coherent and improve collaboration), but also between different institutions. An important aspect of these two events was that representatives from multiple institutions (and countries!) were present to talk about their individual approaches and considerations. Practice exchange and lessons learned can be invaluable to allow institutions to avoid similar mistakes and to decide which approaches might work best in particular settings.

We will definitely consider organising a similar meeting in a year’s time to see where everyone is and which workflows and solutions tend to work best.

Resources

Presentations from both events are available on Zenodo:

Workshop Report: Software Reproducibility – How to put it into practice?

Authors (in alphabetical order): Maria Cruz, Shalini Kurapati, Yasemin Türkyilmaz-van der Velden


With contribution from workshop participants (in alphabetical order): Patrick Aerts (Netherlands eScience Center + DANS), Kees den Heijer (TU Delft), Jelle de Plaa (SRON),  Jordi Domingo (KNMI), Martin Donnelly (University of Edinburgh), Raman Ganguly (University of Vienna), Rolf Hut (TU Delft), Karsten Kryger Hansen (Aalborg University), Carlos Martinez (Netherlands eScience center), Joakim Philipson (Stockholm University), Wessel Sloof (University Medical Center Groningen), Martijn Staats (Wageningen University & Research), Michael Svendsen (Royal Danish Library), Jan van der Ploeg (University Medical Center Groningen), Ronald van Haren (Netherlands eScience Center), Egbert Westerhof (DIFFER).

How to cite: A citable version of this report has been available through the Open Science Framework since 6 July 2018. DOI: 10.31219/osf.io/z48cm.


On 24 May 2018, Maria Cruz, Shalini Kurapati, and Yasemin Türkyilmaz-van der Velden led a workshop titled “Software Reproducibility: How to put it into practice?”, as part of the event Towards cultural change in data management – data stewardship in practice held at TU Delft, the Netherlands. There were 17 workshop participants, including researchers, data stewards, and research software engineers. Here we describe the rationale of the workshop, what happened on the day, key discussions and insights, and suggested next steps.

Rationale for the workshop

There is no denying the reproducibility crisis in science. In some fields, over half of published studies fail reproducibility tests. A survey of 1,576 scientists conducted by Nature in 2016 revealed that over 90% of respondents agreed that there was some level of crisis, and over 70% said they had tried and failed to reproduce another group's experiments. Given the ubiquity of software in many areas of contemporary scientific research, it could be argued that there can't be reproducibility in science without reproducible software.

In a recent Comment in Water Resources Research, in response to “Most computational hydrology is not reproducible, so is it really science?”, Hut, Van de Giesen and Drost (2017) argue that documenting and archiving code and data is not enough to guarantee the reproducibility of computational results. They suggest the use of software containers and open interfaces, and that researchers work more closely with research software engineers (RSEs) to learn best practices in software design. This advice is presented in the context of hydrology, but it could be applied more generally.

Inspired by the article and its advice, the workshop aimed to explore the topic of software reproducibility, how some of the advice could be put into practice, and what role institutions, data stewards, and research software engineers could play in this regard.

What happened on the day

The workshop session lasted one hour. It started with the moderators introducing themselves, followed by a short survey of the audience using Mentimeter, led by Yasemin Türkyilmaz-van der Velden. Maria Cruz then gave a presentation setting the scene, providing information on reproducibility, and summarising the paper and the suggestions by Hut, Van de Giesen and Drost (2017). One of the authors of the paper, Rolf Hut, attended the session and also said a few words about his paper and his ideas. Shalini Kurapati then moderated the main activity described below.

The participants

Using Mentimeter, we asked the audience a few questions to get familiar with their backgrounds and their experiences with research software. As seen in the responses below, the audience was almost perfectly balanced between researchers, research software engineers, data stewards, and people in other research support positions.

[Mentimeter results: participant roles]

There was also a very good balance in terms of the participants’ research backgrounds, which ranged from various disciplines in the physical sciences and medical research to intellectual history and information science. Almost all participants had experience with research software.

[Mentimeter results: research backgrounds and experience with research software]

The majority (65%) of the participants agreed that there is a reproducibility crisis in science. The reproducibility crisis was a hot topic during the main event (Towards cultural change in data management – data stewardship in practice) and had already been discussed comprehensively earlier in the programme by the keynote speaker, Danny Kingsley. Therefore, a potential bias in the responses of the participants cannot be excluded. Regardless, it was interesting to see that there is an increasing awareness of this important issue.

[Mentimeter results: reproducibility crisis poll]

Before moving to the presentation about software reproducibility, we asked the participants what came to their mind about this topic. The answers, which ranged from sustainability, preservation, and integrity to GitHub, Zenodo, containers, and Docker, clearly show that the audience was already very familiar with the topic of software reproducibility.

[Mentimeter results: word cloud on software reproducibility]

The activity

Prior to the workshop, we had hoped for a group of participants with diverse backgrounds and interests. Fortunately, that turned out to be the case, and we were able to form groups with ideal representation from all stakeholders of interest. We divided the participants into four groups, each containing at least one data steward or other research support staff member, one research software engineer, and one researcher. The groups were invited to answer the following questions in a collaborative Google Document:

  • What do you think about the advice of Hut, Van de Giesen and Drost, i.e., use containers (e.g. Docker), use open interfaces, and closely collaborate with Research Software Engineers to improve software reproducibility?
  • Any additional advice to Hut et al., to improve software reproducibility?
  • How can researchers, RSEs and data stewards work together towards implementing the above advice?

The groups were allotted 20 minutes to discuss answers to the questions and record them in the Google Document. The workshop moderators actively monitored the document to steer the groups towards a timely conclusion of the activity. Afterwards, a representative from each group pitched their group's summary and key findings for a minute. The contents of the Google Document and the pitches, which were recorded live in the workshop slides, provided us with insights into the challenges and corresponding solutions for software sustainability and reproducibility, which are reported in the next section.

Key discussion points and insights on the advice by Hut, van de Giesen & Drost

Lack of funding for Research Software Engineers

Lack of (sustainable) funding for hiring RSEs is one of the obstacles to putting the advice of Hut, Van de Giesen and Drost into practice. Larger projects typically already have RSEs on board, but for smaller projects this is not always possible. It is difficult to recruit and hire RSEs across disciplines. However, the Netherlands eScience Center is a good example of a way to centrally fund research software development and to pool developer expertise across disciplines.

Open source software is not always an option

Because of scientific competition and commercial and IP interests, it is not always an option to make research software available as open source software. Docker containers are also not an option for commercial software.

Documentation

High-level documentation is very important. A good README file does part of the job, but fuller documentation and a user manual are also needed. Any information (e.g. equations, models) behind the software also needs to be shared.

Software validation

Lack of support for software validation is also a problem. As an addition to the advice by Hut, van de Giesen and Drost, one of the groups suggested that support should also be provided for software validation (in-house code review). In cases where professional software support is limited, it would already be helpful if researchers reviewed each other's code, just as they would do with papers. If the goal is to make code understandable to other researchers, then their feedback will be paramount. Organising code reviews in a research group could improve the quality of the code significantly with only a small time investment.


Photo by Hitesh Choudhary on Unsplash

The role of data stewards, RSEs and researchers

Data stewards – the link between researchers and RSEs?

Two groups saw the role of data stewards as brokers between researchers and RSEs. It was acknowledged that researchers and RSEs should interact more to improve research code (e.g. through code review). Data stewards could be the link between the two. Data stewards could monitor possible synergies between projects and link researchers with specialist RSE expertise. One group felt that data stewards should provide the toolbox, with principles (e.g. FAIR principles) and guidance, and RSEs should help implement those principles, because they have the knowledge to do so.

Could RSEs do more to promote best practices?

Two groups thought that RSEs could take a more proactive role in providing training for researchers, promoting best practices, and generally propagating their knowledge. Without assigning roles, one of the groups felt that implementing the advice of Hut, Van de Giesen and Drost required programming courses, support staff to help out researchers at departmental level, and the breakdown of problems into smaller problems that could be solved with up-to-date techniques based on expert knowledge. Could RSEs also help with this?

Opportunities and barriers, and the role of institutions

Integrated teams working across university faculties, departments, and institutes, with a single point of contact, could provide a way for researchers, data stewards, and RSEs to work together. Fear of stepping into others’ “working areas” and different working cultures may create barriers, as well as the potential lack of scientific/research expertise from RSEs and software developers.

Sustainable funding is a challenge, as is the lack of recognition for developing research software in the current academic rewards system. There also needs to be a persuasive driver beyond just doing the right thing. This can come from funders, publishers and possibly institutions. Any driver will be most persuasive when it comes from the research community itself.

Universities and institutes should promote good practice for software engineering as part of open science.

Next steps


Photo by Matthew Kwong on Unsplash


The short-term goal of this workshop was to start a conversation on the topic of software reproducibility between researchers, research support staff (data stewards and others with a similar role), and research software engineers, and to make the results of this discussion public via this forum.

The immediate next step is to bring the results of this interaction to the attention of the community Working towards Sustainable Software for Science: Practice and Experiences (WSSSPE), which includes researchers and research software engineers, but lacks a strong connection with data stewards and research data support. We plan to submit a paper for the 9th International Workshop on Sustainable Software for Science: Practice and Experiences, to be held in Amsterdam on 29 October 2018.

The time available for the workshop was limited, and not all issues were discussed, or discussed in enough depth. For example, it would be interesting to discuss in more detail what training and resources researchers and research support staff need to help software reproducibility become more of a reality, and what role data stewards and research software engineers could play in this regard.

Institutions could certainly do more in terms of funding and rewards for software development, and promoting best practices. How to make this happen in a global and concerted manner?

In the long term, we will continue to engage with the necessary stakeholders to keep the discussion alive and to define operational solutions towards improving software reproducibility and sustainability.

Resources


Event report: Towards cultural change in data management – data stewardship in practice


This blog post was written by Martin Donnelly from the University of Edinburgh and was originally published on the Software Sustainability Institute blog.



Late last month, I took a day trip to the Netherlands to attend an event at TU Delft entitled “Towards cultural change in data management – data stewardship in practice”. My Software Sustainability Institute Fellowship application “pitch” last year had been based around building bridges and sharing strategies and lessons between advocacy approaches for data and software management, and encouraging more holistic approaches to managing (and simply thinking about) research outputs in general. When I signed up for the event I expected it to focus exclusively on research data, but upon arrival at the venue (after a distressingly early start, and a power-walk from the train station along the canal) I was pleasantly surprised to find that one of the post-lunch breakout sessions was on the topic of software reproducibility, so I quickly signed up for that one.

I made it into the main auditorium just in time to hear TU Delft's Head of Research Data Services, Alastair Dunning, welcome us to the event. Alastair is a well-known face in the UK, hailing originally from Scotland and having worked at Jisc prior to his move across the North Sea. He noted the difference between managed and Open research data, a distinction that translates to research software too, and highlighted the risk of geographic imbalance between countries which are able to leverage openness to their advantage while simultaneously coping with the costs involved – we should not assume that our northern European privilege is mirrored all around the globe.


Danny Kingsley during her keynote presentation

The first keynote came from Danny Kingsley, Deputy Director of Scholarly Communication and Research Services at the University of Cambridge, whom I also know from a Research Data Management Forum event I organised last year in London. Danny's theme was the role of research data management in demonstrating academic integrity, quality and credibility in an echo-chamber/social media world where deep, scholarly expertise itself is becoming (largely baselessly) distrusted. Obviously, as more and more research depends upon software-driven processing, what's good for data is just as important for code when it comes to being able to reproduce or replicate research conclusions; an area currently in crisis, according to at least one high-profile survey. One of Danny's proposed solutions to this problem is to distribute and reward dissemination across the whole research lifecycle, attaching credit and recognition not only to traditional publications, but also to datasets, code and other types of output.


Questions from the audience

After a much-appreciated coffee break, Marta Teperek introduced TU Delft's Vision for data stewardship, which, again, has repercussions and relevance beyond just data. The broad theme of "Openness", for example, is one of the four major principles in the current TU Delft strategic plan, indicating the degree of institutional support it has as an underpinning philosophy. Marta was keen to emphasise that the cohort of data stewards Delft has recently hired are intended to be consultants, not police! Their aim is to shift scholarly culture, not to check or enforce compliance, and the effectiveness of their approach is being measured by regular surveys. It will be interesting to see how they have got on in a year or two: already they are looking to expand from one data steward per faculty to one per department.

There followed a number of case studies from the Delft data stewards themselves. My main takeaways from these were the importance of mixing top-down and bottom-up approaches (culture change has to be driven from the grassroots, but via initiatives funded by the budget holders at the top), and the importance of driving up engagement and making people care about these issues.

Data Stewards answering questions from the audience

After lunch we heard from a couple of other European universities. From Martine Pronk, we learned that Utrecht University spreads its research support across multiple units and services, including the library and the academic departments themselves, in order to address institutional, departmental, and operational needs and priorities. In common with the majority of UK universities, Utrecht's library is the main driving and coordinating force, with specific responsibility for research data management being part of the Research IT programme. From Stockholm University's Joakim Philipson we heard about the Swedish context, which again seemed similar to the UK's development path, and indeed my own home institution's. Sweden now has a national data services consortium (the SND), analogous to the DCC in the UK, and Stockholm, like Edinburgh, was the first university in its country to have a dedicated RDM policy.

We then moved into our breakout groups, in my case the one titled “Software reproducibility – how to put it into practice?”, which had a strange gender distribution with the coordinators all female, but the other participants all male. One of the coordinators noted that this reminded her of being an Engineering undergraduate again. We began by exploring our own roles and levels of experience/understanding of research software. The group comprised a mixture of researchers, software engineers, data stewards and ‘other’ (I fell into this last category), and in terms of hands-on experience with research software roughly two-thirds of participants were actively developing software, and another third used it. Participants came from a broad range of research backgrounds, as well as a smaller number of research support people such as myself. We then voted on how serious we felt the aforementioned reproducibility crisis actually was, with a two-thirds/one-third split between “crisis” and “what-crisis?” We explored the types of issues that come to mind when we think about software preservation, with the most popular responses being terms such as “open source”, “GitHub” and “workflows”. We then moved on to the main business of the group, which was to consider a recent article by Hut, van de Giesen and Drost. In a nutshell, this says that archiving code and data is not sufficient to enable reproducibility, therefore collaboration with dedicated Research Software Engineers (RSEs) should be encouraged and facilitated. We broke into smaller groups to discuss this from our various standpoints, and presented back in the room. The various notes and pitches are more detailed than this blog post requires, but those interested can check out the collaboratively-authored Google Doc to see what we came up with. The breakout session will also be written up as a blog post and an IEEE proposal, so keep an eye out for that.

software workshop
Workshop on Software Reproducibility

After returning to the main auditorium for reports from each of the groups, including an interesting-looking one from my friend and colleague Marjan Grootveld on “Why Is This A Good Data Management Plan?”, the afternoon concluded with two more keynote presentations. First up, Kim Huijpen from VSNU (the Association of Universities in the Netherlands) spoke about “Giving scientists more of the recognition they deserve”, followed by Ingeborg Verheul of LCRDM (the Dutch national coordination point for research data management), whose presentation was titled “Data Stewardship? Meet your peers!” Both of these national viewpoints were very interesting from my current perspective as a member of a nationally-oriented organisation. From my coming perspective as manager of an institutional support service – I’m in the process of changing roles at the moment – Kim’s emphasis on Team Science struck a chord, and relates to what we’re always saying about research data: it’s a hybrid activity, and takes a village to raise a child, etc. Ingeborg spoke about the dynamics involved between institutional and national level initiatives, and emphasised the importance of feeling like part of a community network, with resources and support which can be drawn upon as needed.

Closing the event, TU Delft Library Director Wilma van Wezenbeek underlined the necessity of good data management in enabling reproducible research, just as the breakout group emphasised the necessity of software preservation, in effect confirming a view of mine that has been developing recently: that boundaries between managing data and managing software (or other types of research output) are often artificially created, and not always helpful. We need to enable and support more holistic approaches to this, acting in sympathy and harmony with actual research practices. (We also need to put our money where our mouth is, and fund it!)

After all that there was just enough time for a quick beer in downtown Delft before catching the train and plane back to Edinburgh. Many thanks to TU Delft for hosting a most enjoyable and interesting event, and to the Software Sustainability Institute whose support covered the costs of my attendance.

Several resources from the event are now available:

Why is this a good Data Management Plan?


This blog post reports on a workshop session led by Marjan Grootveld and Ellen Leenarts from DANS. The workshop was part of a larger event, "Towards cultural change in data management – data stewardship in practice", organised by TU Delft Library on 24 May 2018.

This blog post was written by Marjan Grootveld from DANS and was previously published on the OpenAIRE blog.


It's not just Colonel Hannibal Smith who loves it when a plan comes together. Don't we all? On a more serious note, this also holds for Data Management Plans, or DMPs. In a DMP, a researcher or research team describes what data goes into a project (reuse) and what comes out of it (potential reuse), How the team takes care of the data, and Who is allowed to do What with the data When.

Just like a project plan, a DMP undergoes a review process. Often, however, researchers first share their draft version and questions with research support staff and data stewards (see the results of this survey by OpenAIRE and the FAIR Data Expert Group). About twenty data stewards shared their review and pre-view experiences in a lively session at TU Delft on 24 May. During the day, the organisers and speakers highlighted various aspects of data stewardship, with a welcome focus on practical situations, especially in the break-out sessions. (When the presentations are available, we will add a link to this blog post.)

In the session called "Why is this a good Data Management Plan?" Marjan Grootveld (DANS, OpenAIRE) and Ellen Leenarts (DANS, EOSC-hub) presented text samples taken from DMPs. By raising their hands – or not! – and through subsequent discussion, the participants gave their views on the quality of the sample DMP texts. For instance, the majority gave a thumbs-up to "A brief description of each dataset is provided in table 2, including the data source, file formats and estimated volume to plan for storage and sharing". In contrast, the quote "Both the collected and the generated data, anonymised or fictional, are not envisioned to be made openly accessible." drew a good laugh, and the thumbs went down. Similarly, the information that the length of time for which the data will remain re-usable "may vary for the type of data and <is> difficult to specify at this stage of the project" was found not acceptable; the plan should at least explain why it is difficult, and how and when the project team will nevertheless provide a specific answer. And is it really more difficult than for other projects, whose DMPs do provide this information?

Although it can be hard to be specific in the first version of a DMP, it’s essential to demonstrate that you know what Data Management is about, and that you will deliver FAIR and maximally Open data. Does the DMP, for instance, tell what kind of metadata and documentation will be shared to provide the necessary context for others to interpret the data correctly? Does it distinguish between storing the data during the project and sustainably archiving them afterwards? (Yes, we had a sample text neatly describing the file formats during the data processing stage versus the file formats for sharing and preservation.)

There was consensus in the group on the quality of most of the quotes. Where opinions differed, this was mainly because the quotes were brief and therefore open to more lenient or more picky interpretations. In other cases, a sample text had both positive and negative aspects. For instance, "The source code will be released under an open source licensing scheme, whenever IPR of the partners is not infringed." was found rather hedging ("whenever") and unspecific (which licensing scheme?), but the plan to also make source code available is good; too often this seems to be forgotten when the notion of "data" is understood in a limited way.

The session participants agreed that a plan with many phrases like “where suitable/ where appropriate/ should/ possibly” is too vague and doesn’t inspire much trust. On the other hand, information on who is responsible for particular data management activities is valuable, and so is planning like “The work package leaders will evaluate and update the DMP at least in months 12, 24 and 36”. Reviewers prefer explicit information and commitment to good intentions – which may be something to keep in mind for your “Open A-Team“.