
Another year over!

Author: Esther Plomp

tl;dr: Overview of 2022 for Esther Plomp.

For 2021 I wrote an extensive overview of what I did, which I found a helpful process. I also got motivated to do this again thanks to Yanina and Danielle and their overviews. So here we go again!


Last January I started my activities for the PhD duration team at the Faculty of Applied Sciences. The team consists of myself, Ans van Schaik (Faculty Graduate School) and Pascale Daran-Lapujade (director of the Faculty Graduate School). This year we set up a procedure to reduce the PhD duration (the PhD-in-4 policy). This procedure required a communication plan (for which I gave an interview – as the in-house expert on taking too long on your PhD…).

I also gave a crash course on Open Science, together with Emmy Tsang (then Community Engagement manager at the TU Delft Library). This included a presentation on Open Data (made in R Markdown!).

The participants of the AIMOS discussion session wrote up their experiences in a blogpost ‘Moving Open Science forward at the institutional/departmental level’. I repeated this session in March for the Open Science Barcamp, which is summarized in another blog.

And I presented on Sharing Mortuary Data!


Last February I started with mentoring activities for the Carpentries and Open Life Science. This year I continued this for Open Life Science and co-mentored Adarsh Kalikadien, together with Maurits Kok from the Digital Competence Center. The Faculty of Applied Sciences continues to provide PhD candidates who participate in Open Life Science with credits (read more on the intranet). Later this year I had the honour of co-mentoring Saranjeet Kaur Bhogal, together with Fotis Psomopoulos.

In February we also started with our Faculty’s Publication Task Force. We had two main goals:

  1. What journals does our Faculty publish in?
  2. Raising awareness of (sustainable) Open Access options that researchers at our Faculty have

As part of my efforts for The Turing Way, more information on data articles was added. Many Turing Way Community members contributed, as well as Lora Armstrong (Data Steward CitG).


I was invited by the 3mE PhD council to talk about Metrics in academia, based on a blog that I co-wrote with Emmy Tsang and Antonio Schettino in 2021. In July I co-organised a similar session on Metrics in Academia with the Applied Science Faculty’s PhD council.

I had the honour to be one of the panellists of ‘The Turing Way Fireside Chat: Emergent Roles in Research Infrastructure & Technology’.


This month marked the official start of our Faculty’s Open Science Team! This team consists of at least one member from each of the Faculty’s departments (Flore Kruiswijk, Jean Marc Daran, Xuehang Wang, Sebastian Weingärtner, Sabrina Meindlhumer, Anton Akhmerov). This year we discussed how to increase awareness of Open Science and what the focus of each department will be for the upcoming years. Each of the team members engaged their department in a discussion or sent out a survey between October and December. We will discuss the results with the Faculty management team next year.

I gave a lightning talk on The Turing Way for the Collaborations Workshop 2022 (save the date for 2023!).

Together with Zafer Öztürk, I discussed my experiences as a Data Steward for the Essentials 4 Data course. I wrote a summary of my Data Steward Journey in a blogpost.

I was on the FAIR data podcast with Rory Macneil, where we discussed several of the FAIR (Findable, Accessible, Interoperable, Reusable) initiatives that I’m involved in. (I can recommend reaching out to Rory if you have anything FAIR to discuss!)

Together with Chris Stantis, I organised an IsoArcH workshop on responsible data sharing.

Thanks to Valerie Aurora, I was able to follow the Ally Skills train-the-trainer workshop.


Our article on Taking the TU Delft Carpentries Workshops Online was published and was one of the most popular articles that month in JeSLIB!

I was involved in several presentations:


In June I gave a repeat of the Data Management Plan workshop for the DCC Spring Training Days.

I was involved as a Subject Matter Expert for Open Data for TOPS (Transform to Open Science).

And I presented the ‘Open Science Buffet’ poster for the Faculty of Applied Sciences Science Day.

Next to this, I followed a training on change management. This was very helpful in my efforts for the Open Science & PhD duration teams.


For the TU Delft BioDay I presented two posters on Open Science (the Buffet one mentioned earlier and one on the Open Life Science programme).

I was one of the panellists of the IFLA open data infrastructures panel organised by Emmy Tsang.

July was the month when I started to record the things that I am saying no to (since tracking things motivates me to actually work on them!). I also managed to get corona in August…


In August I learned how to use Quarto by making the materials of the RDM 101 course available online. I’m organising a faculty version of this course in March 2023.

I co-organised one of the workshops by The Turing Way for Carpentry Con: ‘Git Good: Using GitHub for Collaboration in Open Source Communities’. Many thanks to Anne Lee Steele, Hari Sood and Sophia Batchelor for this collaboration!

Together with Yan Wang, I presented on Data Stewardship at TU Delft in a swissuniversities webinar.


Co-organised a session on FAIR discussions for the VU Open Science Festival, for which we’re currently writing a checklist article.

I described my career trajectory in an interview for the NWO magazine.

Presented a poster on the Removing Barriers to Reproducible Research work I did with Emma Karoune for the BABAO 2022 conference.

September was a busy article month:

Also, my husband defended his thesis!


I attended the NWO BioPhysics conference, where I coordinated the data/software workshop ‘Plan ahead: practical tools to make your data and software more FAIR’. We gave a similar workshop in May for NWOlife2022.

I gave an invited talk on Open Science for the Tools, Practices and Systems programme. The presentation was based on the blogpost ‘Open Science should not be a hobby’ (written in May).


I again participated in AcWriMo (writing 500 words each day for blogs, articles, etc., based on the novel-writing month NaNoWriMo). (I learned from last year and did not include a drawing each day…)

I gave my first in person Ally Skills training for the How are You week. There may be more of that in the upcoming years!

November is also the month of the second The Turing Way Book Dash of the year. This (currently mostly online) event takes place over four days in May and November. Participants contribute to The Turing Way during the event and join social discussions related to data science. I reviewed a lot of pull requests! Thanks to my AcWriMo writing I managed to write something on Cultural Change, Code Review for journals, updating the RDM checklist, and Open Peer Review.

I also met the team of Young Science in Transition in person for the first time!

I followed a course on policy writing. This has hopefully improved my writing.

The article I co-wrote with Emma Karoune, on Removing Barriers to Reproducible Research in Archaeology, got recommended!

I also finalised my review activities for the swissuniversities Open Research Data calls.

November was again the busiest month for researcher requests (n=27), comparable to last year (n=26). In total I had 196 requests this year, a bit fewer than in 2021 (n=211) but more than in 2020 (n=186).

And I managed to figure out how Mastodon works (follow me: @toothFAIRy@scholar.social).


I used December to recover from November, and round up some things for the year. This included updating the Open Science Support Website, which now has over 72 posts that answer frequently asked questions by researchers. Not all posts are finalised, and feedback is always welcome.

I’d also like to add a couple of things that I didn’t manage this year: Work on some of the older research data management survey data, reach inbox zero, write an article based on one of my thesis chapters, and get through my to-do list. I guess we have 2023 for that!

Happy New Year!

PS: check this Mastodon thread for my favourite books of 2022.

Share, Inspire, Impact: TU Delft DCC Showcase Event

Author: Ashley Cryan, Data Manager, TU Delft Digital Competence Centre

Tuesday, October 12 was a momentous day for the TU Delft Digital Competence Centre (DCC). A little more than a year after the new research support team of Data Managers and Research Software Engineers came together for the first time, the Share, Inspire, Impact: TU Delft DCC Showcase Event took place, co-hosted by the TU Delft Library’s Research Data Services team, ICT- R&D / Innovation and the TU Delft High Performance Computing Centre (DHPC). 

Researchers from across all faculties at the University joined the virtual live event, aimed at sharing results achieved and lessons learned from collaborating with members of the DCC during hands-on support of projects involving research data and software challenges. The exchange of experiences and ideas that followed was a true reflection of the ingenuity and collaborative spirit that connects and uplifts the entire TU Delft community. 

Inspiring opening words from TU Delft Library Director Irene Haslinger invited researchers, staff and representatives from academic communities like Open Science Community Delft and 4TU.ResearchData to reflect together and help distill a common vision for the future of the DCC. The DCC’s core mission is clear: to help researchers produce FAIR (Findable, Accessible, Interoperable, and Reusable) data, improve research software, and apply suitable computing practices to increase the efficiency of the research process. Event chairperson Kees Vuik and host Meta Keijzer-de Ruijter guided the discussion based on the fundamental question: how can researchers and support staff best work together to achieve these important goals in practice?

“In the effort to promote and support FAIR data, FAIR Software, and Open Science, everyone has a role.”
– Manuel Garcia Alvarez, TU Delft DCC


Manuel Garcia Alvarez began with a presentation on the DCC working model and approach to the above question. After a year of trialling the process in practice, the support team model is defined by four building blocks based on observed researchers’ needs: Infrastructure and Resources, Training, Hands-On Support, and Community. Researchers require sufficient access to and understanding of the IT infrastructure and resources available through the University – robust computing facilities, secure data storage solutions, platforms for digital collaboration – in order to facilitate analysis workflows and achieve their research goals. The DCC support team works closely with staff in the ICT department to ensure that researchers can select, deploy, and manage computational resources properly to support their ongoing needs. Hands-on support is offered by the DCC in the form of support projects which last a maximum of six months, working in close collaboration with research groups. This type of support blends the domain expertise of the researchers with the technical expertise of the DCC support team members to address specific challenges related to FAIR data/software and computational needs. Researchers can request this type of dedicated support by submitting an application through the DCC website (calls open several times per year).

Of course, the DCC support team came into existence as part of a broader community focused on supporting researchers’ digital needs: one that is made up of the faculty Data Stewards, ICT Innovation, Library Research Services, the DHPC, and the Library team for Research Data Management. The DCC contributes to ongoing training initiatives like Software and Data Carpentry workshops that equip researchers with basic skills to work with data and code, as well as designs custom training in the context of hands-on support provided to research groups. One such example is the “Python Essentials for GIS Learners” workshop, designed by the DCC during support of a project in ABE focused on shifting to programmatic and reproducible analysis of historical maps (the full content of this course is freely available on GitHub). 

The program featured a lively Round Table discussion between researchers who received hands-on support from the DCC and the DCC members that supported them, focusing on the DCC model of co-creation to help researchers solve complex and pressing data- and software-related challenges. Researcher panelists Omar Kammouh, Carola Hein, and Liedewij Laan shared their experience working alongside DCC members Maurits Kok, Jose Urra Llanusa, and Ashley Cryan in a spirited hour of moderated discussion. Each researcher panelist was invited first to introduce the project for which they received DCC support in the context of the challenges that inspired them to submit an application to the DCC. Then, DCC members were invited to elaborate on these challenges from their perspective and highlight the solutions implemented in each case. The DCC style of close collaboration over a period of six months was positively received by researchers, who found the engagement productive and supportive of their research data management and software development process. The need to develop a kind of “common language” between members of the DCC and research groups across domain and technical expertise was highlighted in several cases, and served to clarify concepts, strengthen trust and communication, and build knowledge on both sides that aided in the delivery of robust solutions. Practical benefits from the application of the FAIR principles to researchers’ existing workflows and outputs were also mentioned across cases. Collaboration with the DCC enabled researchers to share their data and software more broadly amongst direct collaborators and externally to the wider international research community. The last question of this discussion was whether Omar, Carola and Liedewij would recommend that other researchers at TU Delft apply for hands-on support from the DCC: the answer was an emphatic yes!

Attendees then had the option to join one of the four thematic breakout sessions: Community Building; Digital Skills and Training; Looking Ahead: Impactful Research Competencies of the Future; and Infrastructure, Technology and Tooling for Scientific Computing. Moderators Connie Clare and Emmy Tsang in the Community Building breakout room invited research support professionals from across universities and countries to share their experience being part of scientific communities, and found that recurring themes of knowledge sharing, inclusivity, friendship and empowerment wove throughout most people’s positive experiences. The discussion in the Digital Skills and Training room, moderated by Meta Keijzer-de Ruijter, Paula Martinez Lavancy, and Niket Agrawal, touched upon existing curricula and training programs available at TU Delft to help researchers and students alike develop strengths in fundamental digital skills like programming and version control. In the Looking Ahead room, moderators Alastair Dunning and Maurits Kok led a lively discussion on challenges related to rapidly advancing technology, and how the provision of ICT services and infrastructure solutions can avoid becoming a kind of “black box” to researchers. The Infrastructure, Technology and Tooling room, led by Jose Urra Llanusa, Kees den Heijer, and Dennis Palagin, discussed researchers’ need for IT infrastructure and technical support in the specific context of their research domain, including specialised tools and security measures that can help facilitate international collaboration. When the group came together in the main room to share summaries of each room’s discussion, the common themes of scalability, collaboration, and a balanced approach to centralised support emerged. 

“Support staff need to always work in partnership with researchers. In the future, we need both central and local DCC support and collaboration to continue learning from each other.”
– Marta Teperek, Head of Research Data Services and 4TU.ResearchData at TU Delft

The closing words delivered by Rob Mudde, Vice-Rector Magnificus and Vice President Education, were a fitting end to a spirited day of reflection and discussion. Acknowledging the work of many to bring the TU Delft Digital Competence Centre into reality, and its ethos as a hub of knowledge, connection and inclusivity, he stated, “As a university, we are a big community – we stand on one another’s shoulders. It’s collective work that we do. You can see how the DCC engages across disciplines to help all go forward.”

The DCC extends its warm gratitude to all those who made the “Share, Inspire, Impact DCC Showcase Event” happen, in particular event planning leads Deirdre Casella and Lauren Besselaar, and all of the panelists, speakers, session leaders, and participants who made the discussion so engaging and memorable. The team looks forward to continuing to work with researchers in the TU Delft community and building capacity toward a shared vision for the future we can all be proud of.

Visit the TU Delft | DCC YouTube playlist to view testimonials of researchers and the DCC Event aftermovie (forthcoming).

Who are the DCC?

The Digital Competence Centre (DCC) is an on-campus initiative set up to coordinate the support for the data management and software engineering required for 21st-century research within TU Delft.

Not to be confused with the Design Conceptualization and Communication section within Industrial Design Engineering or the UK-based Digital Curation Centre.

The DCC is the home of the DCC Support Team, consisting of data managers and research software engineers based in the TU Delft Library and the ICT/Innovation department respectively. Their core aim is to help researchers develop the skills to apply the FAIR principles to their research activities and software. This means that, as well as your Faculty’s ICT Managers and Data Stewards, there is also more hands-on and involved support available.

Findable, Accessible, Interoperable, Reusable

This FAIR support pilot was initially funded for a two-year run. During this time the team aims to consult with researchers and provide hands-on support for projects, whilst we evaluate whether such a support mechanism works and in which ways it can be valuable for the TU Delft research community.

Why care about FAIR? 

One of the goals of the pilot is to help researchers apply the FAIR principles. Why? As part of the TU Delft Open Science Programme, our University is committed to making open research and education a standard part of scientific practice.

“It is our ambition to be the frontrunner in this area.  Our aim is that Open Science becomes the default setting for research and education at TU Delft.”

Prof. dr. Rob Mudde, vice-rector magnificus of TU Delft

Beyond TU Delft, funding bodies and journals are increasingly asking researchers to make their research outputs FAIR. The DCC was therefore formed to lend researchers a helping hand, knowing that it takes time to learn the skills that dedicated data managers or research software engineers have.

Make your datasets findable (even if only to you or your group). Research today often produces large datasets, and because making these datasets easier to find and reuse benefits the wider research community, the importance of managing and storing data well extends beyond the individual researcher.

Make your software or code reproducible. Interpreting these datasets can require a lot of skill, and often the use and development of specialised research software. The benefits of open-source research software include increased reproducibility, allowing other researchers to test their own data with your software, and helping you further co-develop and improve your own tools. Even without datasets, a research software engineer can help in many other ways, including with simulations and developing models.

How can I get their support? 

Any researcher at TU Delft, including postdoctoral researchers and PhD students, can apply for support from a Data Manager or Research Software Engineer. The next call for projects is expected to go out at the end of November.

Visit https://dcc.tudelft.nl/ for more information.

During this pilot initiative, our data managers and research software engineers will support successful project applications for a maximum of two days a week for six months.

Should you have smaller questions surrounding data management or research software the team may be able to help via email!

Got questions? 

You can reach the DCC and FAIR Support Team through the following email address: dcc@tudelft.nl 

Alternatively, you can reach out in the DCC Community Group on Microsoft Teams.

Comments on the EOSC Strategic Implementation Plan 2017 – 2020 and the EOSC Declaration


Alastair Dunning (TU Delft Library / 4TU.Centre for Research Data)
Marta Teperek (TU Delft Library)

1 November 2017

On 26 October 2017, the European Open Science Cloud published the EOSC Declaration, inviting all scientific stakeholders to endorse it and commit to the realisation of the EOSC by 2020. Below are our thoughts and comments on the EOSC Declaration and the EOSC Strategic Implementation Plan 2017-2020.

European vs Global?

Research nowadays is more and more cross-border, in line with the idea of transparency, sharing and interoperability championed by EOSC. What is unclear in the current EOSC proposal is the relationship between “European” and “global” accessibility of EOSC. The EOSC Strategic Implementation Plan 2017-2020 suggests that EOSC will provide access to all European research data and that free access to the platform will be offered to all European researchers. Yet the very last point of the EOSC Declaration states that EOSC will be “open to the world”, “reaching out over time to relevant global research partners”.

There are already too many tools and resources burdened with institutional and national constraints, which limit collaboration and exchange. Therefore, given that research happens internationally, it is important that openness to the world and coordination with global research partners are planned as top strategic priorities of EOSC, stated from the outset in the Declaration.

Efforts required for development of standards, community consultation and service integration should not be underestimated

The EOSC Strategic Implementation Plan 2017-2020 notes that in the Federated model for EOSC development, common service standards will need to be established for all federated resources. It is also stated that the costs of the Federated model for EOSC were only “marginally higher than baseline”.

However, functional interoperability will depend on agreeing on common interoperability standards. The EOSC Declaration stresses on several occasions the importance of a disciplinary approach to FAIR principles and standards development. Inevitably, this will lead to tensions between a granular, community-driven approach to standards (relevant for making research outputs FAIR) and the need to decide on minimal requirements that could make the whole service interoperable.

Overall, the EOSC Declaration tends to overstate the simplicity of technical implementation and to underplay the necessary community development and engagement efforts. The document rightfully states that “research data must be both syntactically and semantically understandable, allowing meaningful data exchange and reuse among scientific disciplines and countries”. However, reaching consensus across the various communities and all scientific stakeholders will require considerable work (which means both time and financial investment). In particular, local subject-specific communities will need to be engaged and consulted, and they will most likely require dedicated support to achieve interoperability and integration with EOSC.

This should also be taken into account in financial planning for EOSC development. The EOSC Strategic Implementation Plan 2017-2020 states that no/little fresh money is needed in implementation until 2020. However, for the project to be successful, it is important that work on defining common standards and assisting the various scientific stakeholders in integration efforts starts as early as possible, and this will require both time and financial investment.

In addition, it might be desirable to think about the inclusion of disciplinary stakeholders within the different EOSC Working Groups to ensure that their views are taken into account from the outset.

Research outputs other than datasets need to be recognised

The current EOSC Declaration, as well as the EOSC Strategic Implementation Plan 2017-2020, is mainly focused on research data as an output. However, software, supporting methods and protocols are equally important for the effective reuse of research data. In fact, most research projects now have a computational element; therefore, a more holistic approach to all research outputs other than traditional publications is needed.

In the current EOSC Declaration, the idea that EOSC should support the whole research lifecycle and that “software sustainability should be treated on an equal footing as data stewardship” is only mentioned in the context of service development. It is key that the Declaration emphasises all research outputs from the outset, to ensure that objects crucial for research reusability and interoperability, such as code and protocols, do not become second-class objects.

The need to reward open practices

We welcome the Commission’s view that researchers’ commitments to open practices need to be appropriately rewarded: both at the recruitment stage and during career progression. However, and as already expressed in our comments on the EC’s report on “Evaluation of Research Careers fully acknowledging Open Science Practices”, we believe that examples of immediate adoption are needed.

Researchers who are currently trying to implement Open Science practices need a new reward system now. A pilot grant scheme with the requirement to demonstrate a commitment to Open Science practices as one of the eligibility criteria would be a good starting point. Not only would such an approach provide immediate recognition and reward for researchers already practising Open Science, but it would also contribute necessary information and feedback for possible further implementation of Open Science practices in future funding schemes (such as FP9).

In addition, there are currently no actions committed for the “[Rewards and incentives]” capacity in the Action list and it would be helpful if some declarations and commitments in this area were solicited.

Data management plan requirements

We welcome the suggestion that data management plans (DMPs) should become an integral part of every research project and we welcome the wish to align funders’ and institutional requirements for DMPs in the EOSC Declaration. However, the EOSC Declaration also mentions that “researchers’ host institutions have a responsibility to oversee and complete the DMPs and hand them over to data repositories”.

We believe that making institutions responsible for overseeing and completing DMPs removes the responsibility for good data stewardship from researchers. In addition, it poses a risk that DMPs will be perceived by the research community as yet another administrative burden. Making researchers responsible for data management plans provides a good opportunity for them to embrace good data management practice as an integral part of their research. Institutions could provide advice, expertise and training for researchers, but they should not be the ones responsible for overseeing and completing the DMPs.

In addition, we are unsure about the statement that institutions should be handing over DMPs to data repositories. Which data repositories are meant? Multiple data repositories? Would this not create an unnecessary administrative burden? Perhaps instead there could be a central, European-level registry of data management plans, which would facilitate the alignment of the information collected in DMPs and allow it to be reused more efficiently (for example, information about the needs for specific IT infrastructure, training support, etc.).

Rethinking Reusability. A rakish recap of the ePLAN Workshop FAIR: Facts and Implementations, September 2017

Picture Credits: Daria Nepriakhina

On a rainy Thursday a couple of weeks ago, 14 September 2017, the national Platform for eScience/Data Research (ePLAN) invited people to exchange the latest news about FAIR data at the eScience Centre in the Amsterdam Science Park. Close to 30 people from different Dutch universities, research support services, research institutions, and ventures followed the workshop appeal. The recaps by Wilco Hazelger (ePLAN), Barend Mons (GoFAIR), Peter Doorn (DANS) and Gareth O’Neill (Eurodoc) thus reached the ears of a quite diverse group of attendees.

For me this event was a good chance to refresh my knowledge of current FAIR processes here in the Netherlands, and to receive some confirmations or contradictions of my interpretation of the FAIR data principles. After nearly half a year away from my own FAIR project at the TU Delft Library, I hoped to get some inspiration from conversations with like-minded people on how to implement these principles in everyday research (support) life.

Before I briefly rehash the discussion and consensus of the breakout session, I want to share some brain teasers I noted down from the key speakers’ insights:

Aspects of FAIRness by Barend Mons

∴ Much to my relief, Barend confirmed that FAIR is not something measured in binary, but rather a spectrum.

∴ TCP/IPv4 protocols are the current bottlenecks of the hourglass design of the soon-to-be ‘internet of FAIR’.

∴ Interoperability can never exist without a purpose. Therefore, assess it in that way: interoperability with what, and not just interoperability on its own.

∴ The origin of FAIR emphasizes the machine action-ability of (meta)data.

∴ When talking about a FAIRness evaluation, declare the assessed matter as “re-useless” rather than calling it “unfair”.

∴ The goal of FAIR is R. However, technically I is the key thing of FAIR. “I without F+A makes no sense for R”.

∴ FAIR data can be achieved with FAIR metadata and closed data files.

∴ New perspective on data sharing: establish data visitation instead of data sharing, i.e. your workflow visits the data instead of you receiving data files that were sent to you. To me that is a thrilling shift of perspective: forget sending data files directly via whatever channel; rather, establish a platform where interested people are redirected to the landing page of the dataset. Don’t get me wrong, of course this is what we are doing with our archive already. But I also still hear researchers saying that they share their data via email on request.

∴ A new GoFAIR website is currently under construction and will be launched by the end of the year, with a complete makeover and more functionality, as a future European platform for FAIR work. I am intrigued and will keep an eye out for its launch!
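Barend’s point that FAIR data can be achieved with FAIR metadata and closed data files can be made concrete with a small sketch. The record below is entirely hypothetical (the identifier, access route and field names are illustrative, loosely inspired by DataCite-style metadata, not a prescribed schema): the metadata is openly published and machine-readable, so the dataset stays findable and accessible via a request procedure even though the files themselves are closed.

```python
import json

# Hypothetical metadata record for a dataset whose files are closed.
# The record itself is open and machine-readable, so the dataset remains
# Findable (persistent identifier, rich keywords) and Accessible (the
# access conditions and request route are stated explicitly).
record = {
    "identifier": "https://doi.org/10.9999/example-dataset",  # illustrative PID
    "title": "Example restricted dataset",
    "creators": [{"name": "Doe, Jane", "orcid": "0000-0000-0000-0000"}],
    "keywords": ["FAIR", "metadata", "example"],
    "accessRights": "restricted",                      # the data files are closed
    "accessProcedure": "contact the data steward via the landing page",
    "format": "application/x-netcdf",                  # standard, documented format
    "metadataLicense": "CC0-1.0",                      # the metadata itself is open
}

# Serialising to JSON makes the record harvestable by catalogues and workflows.
print(json.dumps(record, indent=2))
```

In the ‘data visitation’ model mentioned above, it is exactly this kind of open record that a landing page would expose, while access to the underlying files is granted separately.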

The I in FAIR by Peter Doorn

∴  DANS has 2.6 million pictures as its top data category (65% of the archive). Therefore, the interoperability of images needs to be tackled. Unfortunately, the interoperability of images is hard to determine.

∴  Side note: 4TU.Centre for Research Data has nearly 6,500 datasets in netCDF format as its top data category (>90% of the archive). Perhaps this data format has advantages in terms of interoperability? Want to know more about our current work with netCDF? Leave a bookmark for the category on this blog.

∴ Barend’s response to the image interoperability issue raised by Peter: the rich metadata of images is what makes pictures interoperable.

∴  The self-assessment tool for FAIR data created by DANS is also connected to the FAIR metrics group.

The Open Science Survey 2017 by Gareth O’Neill

∴  My conclusion from the open science survey by Gareth: the need to improve awareness about open science/access/data/education etc. and about the already existing support services is highly unlikely to ever decrease.

∴  But who is responsible for increasing the awareness? The university board? The faculties? The research support staff from e.g. the library?

∴  ‘Research visibility’ seems to be the main driver to comply with open science.

∴  The final report and survey analysis will be published in the next 3-6 months. Keep an eye on the Eurodoc website.

A few bits from the group session

∴  What’s the incentive to re-use existing data (where the origin might be untrustworthy) vs. regenerating the data oneself?

∴  Is metadata sufficient for reusability or is there a need for linked data?

∴  Incentives for researchers to create FAIR data need to be improved as soon as possible.

∴  Better distinctions are needed between “data stewards”, “data managers”, and “data scientists”, along with improved appreciation for researchers doing these jobs.

∴  Biggest nut to crack: what does FAIR data mean in terms of data quality? The dataset (metadata, documentation, and data files) could be perfectly FAIR while the actual content of the data files is rubbish. My thoughts on this: first, establish certified and trusted data archives/repositories that enable FAIR datasets; secondly, gather a critical mass of FAIR research data; lastly, enable peer review of these datasets to get an actual evaluation of data quality.

Current FAIR work in the Netherlands, September 2017

∴  The Dutch Tech Centre for Life Science (DTLS) in Utrecht provides a lot of valuable information about FAIR in the life science context. DTLS also focusses on the semantic side of the FAIR data principles and how to implement them.

∴  Data Archiving and Networked Services (DANS) in The Hague covers the work on these principles predominantly for the humanities and social sciences. One of their practical approaches is a FAIR data assessment tool with a subsequent rating of each FAIR facet.

∴  TU Delft Library and the 4TU.Centre for Research Data are concentrating on FAIR data guidance for technological data. A first practical approach was the evaluation of Dutch data repositories and data archives to determine their maturity with respect to the FAIR data demands of funding bodies. The subsequent work investigates researchers’ sentiment towards the FAIR data principles in relation to their research subject.

∴  In reaction to the individual developments of research support and research institutions regarding the FAIR data principles, the European Commission set up an Expert Group on FAIR data to review these developments and gather feedback. The report produced by this Expert Group will be delivered in the first quarter of 2018.

∴  The Conference of European Schools for advanced engineering education and research (CESAER) features a task force for Open Science, including a research data management group that also explores FAIR data.

Feedback, input or questions about this blog post? Feel free to comment.

FAIR Principles – Connecting the Dots for the IDCC 2017

To follow up on our presentation at the IDCC17 in Edinburgh on 22 February 2017, we are providing some useful documents and links here.

Here you can find the pre-print (not yet peer-reviewed) version of the practice paper titled ‘Are the FAIR Data Principles fair?’.

The corresponding Excel spreadsheet with the evaluation overview of 37 data repositories, the statistical analysis, and graphical figures is available in our data archive under the name ‘Evaluation of data repositories based on the FAIR Principles for IDCC 2017 practice paper’.

Our very first approach to reviewing a data repository using the FAIR principles as a scoring matrix resulted in the following overview of the 4TU.Centre for Research Data, called ‘FAIR Principles – review in Context of 4TU.ResearchData’.

The review in the context of 4TU.Research Data helps to understand how we approached this quantitative evaluation of the repositories. Additionally, we blogged about our interpretation of the FAIR principles and their facets, to show the exact features the repositories were measured against.

The initial spark for this research project was lit by the European Commission and their updated demands on data management for the Horizon 2020 projects. There are two versions of the FAIR Principles available online: a short list of the principles and appropriate facets, and the extended and guided version. The Nature article by the contributors and authors of the FAIR principles recaps the rationale behind the principles and the experiences of implementing them.


‘Guidelines on FAIR Data Management in Horizon 2020’ by the European Commission.

The short version of the ‘FAIR DATA PRINCIPLES’.

The extended version of the ‘FAIR DATA PRINCIPLES’.

Read the Nature article ‘The FAIR Guiding Principles for scientific data management and stewardship‘.


FAIR Principles – 4TU.Research Data Interpretation of the Facets

The FAIR Principles are available online in two versions:
the short (https://www.force11.org/group/fairgroup/fairprinciples) and
the extended version (https://www.force11.org/fairprinciples).

We used the short version as a scoring matrix for our FAIR Data Principles research project, resulting in an IDCC17 practice paper and an Excel spreadsheet that includes an overview of 37 evaluated research data repositories.
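The scoring-matrix idea can be sketched as follows: each facet of the short version gets a score, and a repository’s score per principle is the mean over its facets. The facet labels follow the short version of the principles, but the weights and example scores below are invented for illustration, not the values from our actual evaluation.

```python
# Minimal sketch of a FAIR scoring matrix: 0/1 per facet,
# mean per principle. Example scores are made up.
scores = {
    "F": {"F1": 1, "F2": 1, "F3": 0, "F4": 1},
    "A": {"A1": 1, "A1.1": 1, "A1.2": 0, "A2": 1},
    "I": {"I1": 0, "I2": 0, "I3": 1},
    "R": {"R1": 1, "R1.1": 1, "R1.2": 0, "R1.3": 1},
}

def principle_score(facets: dict) -> float:
    """Mean facet score for one FAIR principle."""
    return sum(facets.values()) / len(facets)

for principle, facets in scores.items():
    print(f"{principle}: {principle_score(facets):.2f}")
```

Averaging per principle (rather than over all facets at once) keeps the four letters of FAIR visible in the result, which is how the review-in-context document presents its overview.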

You can also see this document as a stand-alone file here.