Category: fair principles

The main obstacles to better research data management and sharing are cultural. But change is in our hands


This blog post was originally published by the LSE Impact Blog.

Recommendations on how to better support researchers in good data management and sharing practices typically focus on developing new tools or improving infrastructure. Yet research shows the most common obstacles are actually cultural, not technological. Marta Teperek and Alastair Dunning outline how appointing data stewards and data champions can be key to improving research data management through positive cultural change.

This blog post is a summary of Marta Teperek’s presentation at today’s Better Science through Better Data 2018 event.

By now, it’s probably difficult to find a researcher who hasn’t heard of journal requirements for sharing research data supporting publications. Or a researcher who hasn’t heard of funder requirements for data management plans. Or of institutional policies for data management and sharing. That’s a lot of requirements! Especially considering data management is just one set of guidelines researchers need to comply with (on top of doing their own competitive research, of course).

All of these requirements are in place for good reasons. Those who are familiar with the research reproducibility crisis and understand that missing data and code is one of the main reasons for it need no convincing of this. Still, complying with the various data policies is not easy; it requires time and effort from researchers. And not all researchers have the knowledge and skills to professionally manage and share their research data. Some might even wonder what exactly their research data is (or how to find it).

Therefore, it is crucial for institutions to provide their researchers with a helping hand in meeting these policy requirements. This is also important in ensuring policies are actually adhered to and aren’t allowed to become dry documents which demonstrate institutional compliance and goodwill but are of no actual consequence to day-to-day research practice.

The main obstacles to data management and sharing are cultural

But how best to support researchers in good data management and sharing practices? The typical answers to this question are “let’s build some new tools” or “let’s improve our infrastructure”. When thinking about how to provide data management support to researchers at Delft University of Technology (TU Delft), we decided to resist this initial temptation and do some research first.

Several surveys asking researchers about barriers to data sharing indicated that the main obstacles are cultural, not technological. For example, in a recent survey by Houtkoop et al. (2018), psychology researchers were given a list of 15 different barriers to data sharing and asked which ones they agreed with. The top three reasons preventing researchers from sharing their data were:

  1. “Sharing data is not a common practice in my field.”
  2. “I prefer to share data upon request.”
  3. “Preparing data is too time-consuming.”

Interestingly, the only two technological barriers – “My dataset is too big” and “There is no suitable repository to share my data” – were among the three at the very bottom of the list. Similar observations can be made based on survey results from Van den Eynden et al. (2016) (life sciences, social sciences, and humanities disciplines) and Johnson et al. (2016) (all disciplines).

At TU Delft, we already have infrastructure and tools for data management in place. The ICT department provides safe storage solutions for data (with regular backups at different locations), while the library offers dedicated support and templates for data management plans and hosts the 4TU.Centre for Research Data, a certified and trusted archive for research data. In addition, dedicated funds are made available for researchers wishing to deposit their data into the archive. This being the case, we thought researchers might already be receiving adequate data management support and that no additional resources were required.

To test this, we conducted a survey among the research community at TU Delft. To our surprise, the results indicated that despite all the services and tools already available to support researchers in data management and sharing activities, their practices needed improvement. For example, only around 40% of researchers at TU Delft backed up their data automatically. This was striking, given the fact that all data storage solutions offered by TU Delft ICT are automatically backed up. Responses to open questions provided some explanation for this:

  • “People don’t tell us anything, we don’t know the options, we just do it ourselves.”
  • “I think data management support, if it exists, is not well-known among the researchers.”
  • “I think I miss out on a lot of possibilities within the university that I have not heard of. There is too much sparsely distributed information available and one needs to search for highly specific terminology to find manuals.”

It turns out, again, that the main obstacles preventing people from using existing institutional tools and infrastructure are cultural – data management is not embedded in researchers’ everyday practice.

How to change data management culture?

We believe the best way to help researchers improve data management practices is to invest in people. We have therefore initiated the Data Stewardship project at TU Delft. We appointed dedicated, subject-specific data stewards in each faculty at TU Delft. To ensure the support offered by the data stewards is relevant and specific to the actual problems encountered by researchers, data stewards have (at least) a PhD qualification (or equivalent) in a subject area relevant to the faculty. We also reasoned that it was preferable to hire data stewards with a research background, as this allows them to better relate to researchers and their various pain points as they are likely to have similar experiences from their own research practice.

Vision for data stewardship

There are two main principles of this project. Crucially, the research must stay central. Data stewards are not there to educate researchers on how to do research, but to understand their research processes and workflows and help identify small, incremental improvements in their daily data management practices.

Consequently, data stewards act as consultants, not as police (the objective of the project is to improve cultures, not compliance). The main role of the data stewards is to talk with researchers: to act as the first contact point for any data-related questions researchers might have (be it storage solutions, tools for data management, data archiving options, data management plans, advice on data sharing, budgeting for data management in grant proposals, etc.).

Data stewards should be able to answer around 80% of questions. For the remaining 20%, they ask internal or external experts for advice. But most importantly, researchers no longer need to wonder where to look for answers or who to speak with – they have a dedicated, local contact point for any questions they might have.

Data Champions are leading the way

So has the cultural change happened? This is, and most probably always will be, a work in progress. However, allowing data stewards to get to know their research communities has already had a major positive effect. They were able to identify researchers who are particularly interested in data management and sharing issues. Inspired by the University of Cambridge initiative, we asked these researchers if they would like to become Data Champions – local advocates for good data management and sharing practices. To our surprise, more than 20 researchers have already volunteered as Data Champions, and this number is steadily growing. Having Data Champions teaming up with the data stewards allows for the incorporation of peer-to-peer learning strategies into our data management programme and also offers the possibility to create tailored data management workflows, specific to individual research groups.

Technology or people?

Our case at TU Delft might be quite special, as we were privileged to already have the infrastructure and tools in place which allowed us to focus our resources on investing in the right people. At other institutions circumstances may be different. Nonetheless, it’s always worth keeping in mind that even the best tools and infrastructures, without the right people to support them (and to communicate about them!), may fail to be widely adopted by the research community.

Launch of the ‘FAIR Data Advanced Use Cases’ report by SURF


SURF – as the national collaborative ICT organisation for the Dutch education and research environment – has joined the effort to support the implementation and application of the FAIR data principles in the Netherlands. The first product of this endeavour is a report on six case studies conducted by Melanie Imming. The interviewed institutions range from the support services of various universities to research institutions and the national health care institute.

The purpose of this report is to build and share expertise on the implementation of FAIR data policy in the Netherlands. The six use cases included in this report describe developments in FAIR data, and different approaches taken, within different domains. For SURF, it is important to gain a better picture of the best way to support researchers who want to make their data FAIR. – Melanie Imming (2018, April 23). FAIR Data Advanced Use Cases: from principles to practice in the Netherlands (Version Final). Zenodo.

On 22nd May 2018 the report was officially launched, accompanied by a lovely workshop in the SURF venue in Utrecht.

How does a data archive remain relevant in a rapidly evolving landscape: the case of the 4TU.Centre for Research Data

This week, we are presenting at the International Digital Curation Conference 2018 in Barcelona.

This presentation can be downloaded from Zenodo.

The pre-print version of the practice paper accepted for the conference is available on OSF Preprints.

Title: From Passive to Active, From Generic to Focused: How Can an Institutional Data Archive Remain Relevant in a Rapidly Evolving Landscape?

Authors: Maria J. Cruz, Jasmin K. Böhmer, Egbert Gramsbergen, Marta Teperek, Madeleine de Smaele, Alastair Dunning

Abstract: Founded in 2008 as an initiative of the libraries of three of the four technical universities in the Netherlands, the 4TU.Centre for Research Data (4TU.Research Data) provides since 2010 a fully operational, cross-institutional, long-term archive that stores data from all subjects in applied sciences and engineering. Presently, over 90% of the data in the archive is geoscientific data coded in netCDF (Network Common Data Form) – a data format and data model that, although generic, is mostly used in climate, ocean and atmospheric sciences. In this practice paper, we explore the question of how 4TU.Research Data can stay relevant and forward-looking in a rapidly evolving research data management landscape. In particular, we describe the motivation behind this question and how we propose to address it.

Rethinking Reusability. A rakish recap of the ePLAN Workshop FAIR: Facts and Implementations, September 2017

Picture Credits: Daria Nepriakhina

On a rainy Thursday a couple of weeks ago, 14th September 2017, the national Platform for eScience/Data Research (ePLAN) invited attendees to exchange the latest news about FAIR data at the eScience Centre in the Amsterdam Science Park. Close to 30 people from different Dutch universities, research support services, research institutions, and ventures answered the call. The recaps by Wilco Hazeleger (ePLAN), Barend Mons (GoFAIR), Peter Doorn (DANS) and Gareth O’Neill (Eurodoc) thus reached a quite diverse group of attendees.

For me this event was a good chance to refresh my knowledge of current FAIR processes here in the Netherlands, and to have my interpretation of the FAIR data principles confirmed or contradicted. After nearly half a year of absence from my own FAIR project at the TU Delft Library, I hoped to draw some inspiration from conversations with like-minded people on how to implement these principles in everyday research (support) life.

Before I briefly recap the discussion and consensus of the break-out session, I want to share some brain teasers I noted down from the speakers’ insights:

Aspects of FAIRness by Barend Mons

∴ Much to my relief, Barend confirmed that FAIR is not a binary measure but rather a spectrum.

∴ TCP/IPv4 protocols are the current bottlenecks of the hourglass design of the soon-to-be ‘internet of FAIR’.

∴ Interoperability can never exist without a purpose. It should therefore be assessed accordingly: interoperability with what, not just interoperability on its own.

∴ The origin of FAIR emphasizes the machine action-ability of (meta)data.

∴ When talking about a FAIRness evaluation, declare the assessed matter as “re-useless” rather than calling it “unfair”.

∴ The goal of FAIR is R. Technically, however, I is the key element of FAIR: “I without F+A makes no sense for R”.

∴ FAIR data can be achieved with FAIR metadata and closed data files.

∴ A new perspective on data sharing: establish data visitation instead of data sharing, i.e. your workflow visits the data instead of you receiving data files that were sent to you. To me that is a thrilling shift of perspective: forget sending data files directly via whatever channel; rather, establish a platform where interested people are redirected to the landing page of the dataset. Don’t get me wrong, of course this is what we are doing with our archive already. But I also still hear researchers saying that they share their data via email on request.

∴ A new GoFAIR website is currently under construction and will be launched by the end of the year, with a complete makeover and more functionality, as a future-forming European platform for FAIR work. I am intrigued and will keep an eye out for its launch!
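Barend’s point that FAIR can be achieved with FAIR metadata and closed data files can be made concrete with a small, machine-actionable metadata record. This is a toy sketch only: the field names loosely follow DataCite, and the identifier, landing page, and contact address are invented for illustration.

```python
import json

# A minimal machine-actionable metadata record for a dataset whose
# files remain closed. All concrete values below are invented.
record = {
    "identifier": "10.4121/uuid:0000-example",   # persistent identifier (F)
    "title": "Example sensor measurements",
    "creators": ["Doe, J."],
    "publication_year": 2018,
    "landing_page": "https://data.example.org/0000-example",  # resolvable (A)
    "file_format": "netCDF",                     # declared format (I)
    "license": None,                             # no open licence: files stay closed
    "access_rights": "restricted",               # access conditions stated explicitly
    "contact": "datasteward@example.org",        # whom to ask for access
}

# Serialising to JSON keeps the record readable by machines as well as people.
serialized = json.dumps(record, indent=2)
print(serialized)
```

Even though the data files themselves are closed, a machine (or a colleague) can discover the dataset, understand what it contains, and learn how to request access – which is exactly the “FAIR metadata, closed data” scenario.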

The I in FAIR by Peter Doorn

∴  DANS has 2.6 million pictures as its top data category (65% of the archive). The interoperability of images therefore needs to be tackled. Unfortunately, the interoperability of images is hard to determine.

∴  Side note: 4TU.Centre for Research Data has nearly 6,500 datasets in netCDF format as its top data category (>90% of the archive). Perhaps this data format has more advantages in terms of interoperability? Want to know more about our current work with netCDF? Leave a bookmark for the category on this blog.

∴ Barend’s response to the image interoperability threshold mentioned by Peter: the rich metadata of images makes the interoperability of pictures possible.

∴  The self-assessment tool for FAIR data created by DANS is also connected to the FAIR metrics group.

The Open Science Survey 2017 by Gareth O’Neill

∴  My conclusion from the open science survey by Gareth: the need to raise awareness of open science/access/data/education etc., and of the already existing support services, will most likely never decrease.

∴  But who is responsible for increasing the awareness? The university board? The faculties? The research support staff from e.g. the library?

∴  ‘Research visibility’ seems to be the main driver to comply with open science.

∴  The final report and survey analysis will be published in the next 3-6 months. Keep an eye on the Eurodoc website.

A few bits from the group session

∴  What’s the incentive to re-use existing data (where the origin might be untrustworthy) vs. regenerating the data oneself?

∴  Is metadata sufficient for reusability or is there a need for linked data?

∴  Incentives for researchers to create FAIR data need to be improved as soon as possible.

∴  Better distinctions are needed between “data stewards”, “data managers”, and “data scientists”, along with improved appreciation for researchers doing these jobs.

∴  Biggest nut to crack: what does FAIR data mean in terms of data quality? The dataset (metadata, documentation, and data files) could be perfectly FAIR while the actual content of the data files is rubbish. My thoughts on this: first, establish certified and trusted data archives/repositories that enable FAIR datasets; secondly, gather a critical mass of FAIR research data; lastly, enable peer review of these datasets to get an actual evaluation of data quality.

Current FAIR work in the Netherlands, September 2017

∴  The Dutch Tech Centre for Life Science (DTLS) in Utrecht provides a lot of valuable information about FAIR in the life science context. In more detail, DTLS is also focussing on the semantic side of the FAIR data principles and how to implement them.

∴  Data Archiving and Network Services (DANS) in Den Haag are covering the work on these principles predominantly for the humanities and social sciences. One of their practical approaches is a FAIR data assessment tool with subsequent rating of each FAIR facet.

∴  TU Delft Library and 4TU.Centre for Research Data are concentrating on FAIR data guidance for technological data. A first practical step was the evaluation of Dutch data repositories and data archives to determine their maturity with respect to the FAIR data demands of funding bodies. Subsequent work is investigating researchers’ sentiments towards the FAIR data principles in relation to their research subjects.

∴  In reaction to the individual developments of research support and research institutions regarding the FAIR data principles, the European Commission established an Expert Group on FAIR data to review these developments and gather feedback. The report produced by this Expert Group will be delivered in the first quarter of 2018.

∴  The Conference of European Schools for advanced engineering education and research (CESAER) features a task force for Open Science, including a research data management group that also explores FAIR data.
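The idea behind the DANS self-assessment tool mentioned above (rating each FAIR facet separately, and treating FAIRness as a spectrum rather than a binary verdict, as Barend stressed) can be illustrated with a toy scorer. This is a sketch with invented checks and weights, not the actual DANS tool:

```python
# Toy FAIRness scorer: rates each facet separately and reports a
# spectrum value rather than a binary FAIR / not-FAIR verdict.
# The per-facet checks below are invented for illustration only.

def score_dataset(meta: dict) -> dict:
    facets = {
        "F": ["identifier", "title"],            # findable: PID + basic description
        "A": ["landing_page", "access_rights"],  # accessible: how to get (to) it
        "I": ["file_format"],                    # interoperable: declared format
        "R": ["license", "provenance"],          # reusable: terms of use + origin
    }
    scores = {}
    for facet, fields in facets.items():
        present = sum(1 for field in fields if meta.get(field))
        scores[facet] = present / len(fields)   # fraction of checks passed
    scores["overall"] = sum(scores[f] for f in facets) / len(facets)
    return scores

# A hypothetical metadata record: findable, accessible, and interoperable,
# but with no licence or provenance information.
example = {
    "identifier": "10.4121/uuid:0000-example",
    "title": "Example dataset",
    "landing_page": "https://data.example.org/0000-example",
    "access_rights": "restricted",
    "file_format": "netCDF",
}

print(score_dataset(example))
# F, A and I score fully here, while R scores zero: a dataset can sit
# high on the FAIR spectrum overall and still be weak on reusability.
```

The point of rating facets separately is exactly the one raised in the break-out session: a dataset can look “FAIR enough” in aggregate while one facet (here, reusability) quietly lags behind.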

Feedback, input or questions about this blog post? Feel free to comment.

Building a FAIRer world: Encouraging Researchers to Share their Data

This is an updated version of the slides on the FAIR principles given in Edinburgh at the International Digital Curation Conference in February 2017.

The presentation was given at Open Research Data Management: policies and tools, May 24-25, Milan, Università Statale.

It also included break-out groups looking at how FAIR is interpreted in different subject areas. The 4TU.ResearchData team will follow this up, looking at how discipline-specific guidance can be published.