Category: Open Science

New Landscapes on the Road of Open Science: 6 key issues to address for research data management in the Netherlands

Marta Teperek, Wilma van Wezenbeek, Han Heijmans, Alastair Dunning

The road to Open Science is not a short one. As the chairman of the Executive Board of the European Open Science Cloud, Karel Luyben, is keen to point out, it will take at least 10 or 15 years of travel until we reach a point where Open Science is simply absorbed into ordinary, everyday science.

Within the Netherlands, and for research data in particular, we have made many strides towards that final point. We have knowledge networks such as LCRDM, a suite of archives covered by the Research Data Netherlands umbrella, and the groundbreaking work done by the Dutch Techcentre for Life Sciences.

But there is still much travel to be done; many new landscapes to be traversed. Data sharing is still far from being the norm (see here for a visualisation of these results).

The authors of this blog post have put together six areas that, in their opinion, deserve attention on our Open Science journey.

1. Cultural and Technical Infrastructure for Confidential Data

A recent workshop on data privacy ended with a doctor stating that “all data is personal”. This goes too far – much technical data is free from any personal details. Nevertheless, there are many reasons to see personal data everywhere: the increasing quantities of interdisciplinary work that make use of sensor or social media data; legal mechanisms such as the GDPR; the growing possibilities for retrospective de-anonymisation; and the accumulation and analysis of personal data via machine learning. Increasingly, researchers need sophisticated mechanisms for sharing and publishing data about humans.

And it’s not just personal data. Increasing engagement with third parties (at TU Delft roughly a third of all research funding is with commercial partners) means that we need to consider how best to safeguard data with a commercial aspect.  We need an infrastructure for sharing commercial data with our industrial partners and protecting potentially economically valuable resources from bad actors. 

The amount of work (tools and services, advice, standards) to be done is huge. We need:  

  • trusted infrastructures for sharing data between universities, medical centres, research units and commercial entities, and similar infrastructures for publishing personal data (with different access levels)
  • a national network of disciplinary access committees who can approve requests for access to restricted data, and perhaps a national body that can act as an access point for researchers seeking sensitive data from third parties (e.g. similar to the role CBS has for government statistical data)
  • a national consent service for handling and accessing consent forms
  • national advice (or even specific tools) for anonymising data
  • nationally agreed terms for data access, perhaps a colour-coded system from green for open access to black for closed archive (a rough, illustrative sketch of such a tiering follows this list)
  • a network of trainers and research data supporters across the country who can guide and advise researchers tiptoeing down the path of personal data
  • agreed principles by which higher education and private companies should abide when co-creating research outputs (articles, data etc.)
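To make the idea of nationally agreed access terms a little more tangible, here is a minimal, purely illustrative sketch in Python; the tier names and conditions below are hypothetical placeholders, not an agreed national standard.

    # Hypothetical colour-coded data access tiers; the names and conditions
    # are illustrative placeholders, not an agreed national standard.
    from enum import Enum

    class AccessTier(Enum):
        GREEN = "open access: downloadable by anyone under an open licence"
        YELLOW = "registered access: available after user registration"
        ORANGE = "restricted access: released after approval by an access committee"
        RED = "contract-only access: shared under a data use agreement with named parties"
        BLACK = "closed archive: preserved but not shared"

    def describe(tier: AccessTier) -> str:
        """Return a short human-readable description of an access tier."""
        return f"{tier.name}: {tier.value}"

    if __name__ == "__main__":
        for tier in AccessTier:
            print(describe(tier))

Having such terms expressed in a machine-readable form would let repositories and data catalogues label datasets consistently, and let access committees point to a shared definition rather than negotiating conditions from scratch for every request.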

In many cases, individual research organisations are developing their own solutions. And these issues are being partially discussed within LCRDM groups. But these are generally exploratory discussions. To create a systematic infrastructure (of both digital tools and human expertise) we need a clear plan and a broad nation-wide coalition of partners, all of whom have clearly defined roles and responsibilities. And, of course, we need to embed this in the wider international context.

2. Encouragement for discipline-specific guidance and standards

Early analysis of the usage of the FAIR principles focussed on how FAIR repositories are. How FAIR were DANS, 4TU.ResearchData or subject-based repositories?

But the FAIR principles apply not just to metadata and repositories but to the data itself. Above all, we need to make datasets interoperable, using harmonised standards, terminologies, ontologies and so on, so that researchers from all over the world can immediately reuse data without having to interpret and reconfigure each discovered dataset.
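As a small, purely illustrative sketch of what interoperability means in practice, consider the difference between an ad hoc column description and one mapped to a shared vocabulary. The variable names and vocabulary URLs below are hypothetical placeholders, not an endorsed community standard.

    # Two descriptions of the same measurement column; the vocabulary URIs
    # below are illustrative placeholders, not an endorsed standard.

    # Ad hoc: meaningful only to the original research group.
    ad_hoc = {"col3": "temp (degC, sensor A)"}

    # Mapped to a shared vocabulary: a reuser (or a machine) can resolve the
    # term and the unit without contacting the data creator.
    interoperable = {
        "column": "water_temperature",
        "definition": "https://vocab.example.org/terms/water_temperature",  # placeholder URI
        "unit": "degree_Celsius",
        "unit_reference": "https://vocab.example.org/units/degC",  # placeholder URI
    }

    # A reusing script can then check that the fields it needs are present.
    required = {"column", "definition", "unit", "unit_reference"}
    assert required.issubset(interoperable), "metadata record lacks interoperability fields"

The hard part is not the syntax but agreeing, per discipline, on which terms and vocabularies to use – which is exactly why disciplinary, international coordination is needed.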

In some fields, this is already happening (microscopy data, materials science, the life sciences, hydrology). But in many sub-disciplines, there is no real momentum. Developing this momentum is important, but it is a tricky task, because such standards need to be developed at a disciplinary, international level.

Nevertheless, we can start to make some small steps. Encouraging disciplinary communities to come together and start discussing the challenges and possibilities for FAIR data would be a great start. This is not just a technical discussion; it is about building networks of engaged people to discuss these topics.  Workshops, discussion papers, critical engagement will all help push the discussion into first gear – something that can be accelerated by international collaboration via RDA, CODATA and, crucially, international subject societies.  

3. Creating a Web of Incentives 

The University of Bristol recently revised its promotion criteria to include open research practices. This is obviously great news for those who believe in Open Science.

But it’s worth looking at how this came about. The decision has not been taken unilaterally. Rather: 

“Including data sharing in promotion criteria is a requirement of institutions signing the Concordat on Open Research Data. Including open research practices in its promotion criteria allows the University of Bristol to sign the Concordat, which will in turn enhance the environment component of its submission to the Research Excellence Framework. There is a web of incentives.”

So change here came about because universities worked together at a national level. Strategic leadership collaborated to create the principles behind the Concordat.

Bristol is not the only example. Ghent University has undertaken a broader overhaul of rewards and recognition, while the Swiss Academies of Arts and Sciences see the broader ecosystem effects of Open Science.

Within the Netherlands, we need more innovative, nation-wide tactics from our national bodies to build the ‘web of incentives’ needed to implement Open Science. This requires more than funding bodies simply demanding that projects share their data.

4. Building Capacity for Training

Barend Mons’ claim that we need 500,000 data stewards may have had a touch of hyperbole, but it should not mask a key fact: the path to data-intensive science requires new roles (data stewards, managers, research software engineers) as well as data-savvy researchers themselves. This creates an immediate pressure. How do we find such people? How do we train them? How do we get researchers up to speed? From a TU Delft perspective, we have published our Vision on Research Data Management training, but how do we scale up to train the c.500 new PhD students per year so that they are in a position to publish their data along with their final thesis?

To deal with this common problem, we need to adopt a train-the-trainer approach, make use of existing materials, share workshops and generally be smart. This won’t work with institutions acting by themselves. Rather, as Celia van Gelder suggested in a recent presentation, we need serious investment in capacity-building programmes and a network of digital research support desks throughout the Netherlands and Europe.

5. Transparent Governance / Coordinated Action

The responsibilities for research data management are often shared between different departments within a university – the library, ICT, legal, research support. These existing silos make it difficult for universities to provide frictionless support for their research communities. All of us working in support services should be collaborating to see how we can make workable connections between these silos.

But these institutional boundaries also manifest themselves at a national level. Many of the librarians congregate around LCRDM, the SURF CSC group rounds up the ICT managers, while the big decisions at NPOS are taken by senior policy players. Nevertheless, these stakeholders are all dealing with the same fundamental concerns about Open Science and research data – all of them are travelling the same road.

So we need much better coordination, and smarter routes of governance. We can start by being more transparent. What is each organisation doing, what is its role and responsibilities, where is it going? This is the first milestone in openness. And once we have that we can move on with the coordination and governance issues. Do we look to leadership from our government (OCW), or at least make firm proposals to them, perhaps in exchange for more financial stimulus? Or do we develop grass-roots communities of governance that move more quickly but risk leaving some stakeholders behind?

6. Open Infrastructures for Research

In recent years we have seen numerous acquisitions of various elements of scholarly communication infrastructure by two major commercial players: Elsevier and Digital Science. This allows these two companies to offer fully integrated workflows supporting researchers across almost the entire research lifecycle (reference management tools, electronic lab notebooks, data repositories, current research information systems, various research analytics tools). A dream come true! No need for universities to develop unsustainable local solutions themselves; no need to constantly struggle to recruit and retain talented developers and system administrators; bags of money saved with better-quality products.

But is that so? Outsourcing the most crucial pieces of scholarly communication infrastructure to commercial providers is risky. Among other things, institutions are under threat of vendor lock-in: once investment has been made in an integrated infrastructure (the effort of the tender process, of integrating the provider within university systems, and of communicating with various stakeholders), who would want to change things? That’s despite companies often promising that customers own their data and can cancel their contracts at any time.

Also, commercial providers are often excellent at providing integration, but only within their own plethora of services. Just try integrating services offered by different big players! Then there is the obvious threat of market domination: it is difficult for smaller businesses to compete against the big players. Lack of competition paves the way for price increases and reduced quality.

Finally, by handing over crucial assets (research outputs), academia loses control: not only over the actual development of products and services, but, more crucially, over what happens with the data and metadata (commercial companies tend to be very eager to lock down and monetise the latter in particular), and over measurement, citation, analytics, discovery and so on.

Meanwhile, due to a lack of alternative options, more and more Dutch institutions are subscribing to services offered by the two big players. For example, “subscriptions [to Pure – Elsevier’s current research information system] amount to an annual €2.3 million nationwide as compared to €14 million for [Elsevier] journal subscriptions”.

So we desperately need viable, sustainable open source alternatives: Open Scholarly Infrastructures, ideally developed by consortia of academic institutions. There are already some efforts, such as the Invest in Open Infrastructure initiative. However, we need better coordination, more strategic support, resources and investment to make it happen and to make these efforts a priority – not only nationally, but also internationally.

Openness and Commercialisation: could the two go together?


Between 14 and 17 October 2019 I attended the Beilstein Open Science Symposium. As always, there were excellent, inspiring talks. This year’s talks on openness and commercialisation were particularly interesting to me, so I would like to share some of my thoughts and observations.


Collaboration with industry is at the core of many research projects at Delft University of Technology. However, working with industry and commercialisation often entails secrecy and close protection of knowledge. At the same time, the University is also a public body, and a substantial proportion of its funding comes from taxpayers’ money. Research funded by the public should be shared as broadly as possible with the public. So how do these two come together? Is openness inherently antagonistic to commercialisation? Can there be a middle ground?

Industry, academia and the public as allies

Chas Bountra, Pro-Vice-Chancellor for Innovation at the University of Oxford and Chief Scientist at the Structural Genomics Consortium (SGC), provided a compelling example of how industry and academia can work together to find new medicines and address some of the most pressing healthcare problems in society.

Bringing a new drug to market typically costs pharma companies several billion dollars. To ensure a return on investment, pharmaceutical companies need to price successful drugs accordingly. This, in turn, might make life-saving medicines unaffordable for patients and healthcare providers. Why does it cost so much money to make new drugs? Chas explained that everyone seems to be working on similar drug targets: both industry and academia read the same papers, attend the same conferences, and come up with the same ideas in parallel. The secrecy of the research process means that no one shares the negative outcomes of their studies (true for both academia and industry). As a result, only about 7.5% of potential cancer drugs that enter Phase I clinical trials make it to the market. This also means that the price of successful drugs needs to compensate for all the unsuccessful ones.

The Structural Genomics Consortium was created as a collaboration between academia, public funders and industry (nine big pharma companies) out of a desire to accelerate the discovery of new medicines and to improve the identification of new drug targets. Resources from all partners are pooled to make these two processes more efficient. In addition, the consortium works only on novel ideas – novel targets which are not being explored elsewhere. The consortium purifies human proteins, builds assays, works out 3D structures and creates tools: highly specific inhibitors against these new targets. And how are these novel targets identified? Members of the SGC consortium work with committees composed of experts from academia and industry, as well as clinicians, who donate their free time to help SGC decide which new targets to work on. Patient groups not only provide precious human material to work on (patient tissue) but also help identify the experts, as they know well which labs all over the world work on cures for their disease.

Why would all these stakeholders do all this work for the consortium? Because all the results, all the tools and molecules developed by SGC are made available for free to anyone willing to work on them. For academics this means new, robust research tools enabling innovative research. Pharma companies benefit because they get the chance to take these novel, highly specific molecules and turn them into successful drugs. Clinicians and patients are motivated by the collaboration as it brings hope for new medicines. 

In the end, everyone benefits from openness and collaboration. By now, SGC has generated over 70 molecules, all of which are made available to anyone interested in working on them.

Collaboration and openness at any scale speeds up innovation

The example of SGC is certainly inspiring. At the same time, it is perhaps a bit intimidating for others to follow. Establishing an open collaboration with nine big pharma companies and numerous academics and clinicians is not an easy task; it must require a lot of trust and relationship building. What if you don’t yet have such connections? What if you are an early career researcher who hasn’t yet had the chance to build them?

I was greatly inspired by the talk of Lori Ferrins from Northeastern University. Lori is part of Michael Pollastri’s lab, which works on neglected tropical diseases (NTDs). NTDs are a group of parasitic diseases, such as malaria or sleeping sickness, that disproportionately affect those living in poverty. Pharmaceutical companies are not interested in developing drugs for these diseases because there is no commercial incentive (a return on investment is rather unlikely). To address this issue, Lori and her colleagues collaborate with pharma companies and with other academic labs. Pharma companies provide access to their existing molecules, which the lab then tries to repurpose into effective parasite growth inhibitors. Academic labs join in, driven by their research interests.

However, not everyone in such a collaboration is comfortable with going fully open. To address this and still enable cooperation, the lab developed a shared database in which all data and results are shared within the group of collaborators. In addition, various levels of sharing and collaboration are allowed, to ensure that investigators are comfortable working together. Lori’s story is, therefore, a beautiful example that flexibility can be essential and that sharing can occur at various levels and scales. What matters most is that collaboration and information exchange happen. This helps reduce duplication of effort (collaboration and division of labour instead of competition) and speeds up innovation.

Open source and commercialisation

Lastly, Frank Schuhmacher spoke about his impressive open hardware endeavour: creating an automated oligosaccharide synthesizer, a machine able to automate the multi-step synthesis of longer saccharide molecules. A self-built synthesizer offers researchers a lot of flexibility: they can add and remove components as necessary for a particular reaction. In addition, researchers remain fully in control if anything goes wrong (without relying on the obscure black-box mechanisms of commercial instruments). Moreover, the automation of chemical reactions means more reproducible research.

Frank’s talk sparked a discussion about whether open hardware projects can become self-sustaining and whether they offer any commercialisation potential. Here, inspiration from my TU Delft colleague Jerry de Vos, who is involved in Precious Plastics, came in very handy. Precious Plastics started as a collaboration between people who wanted to help recycle the ever-growing amount of plastic waste. They have built a series of machines, all modular and consisting of simple components. The designs for these machines are openly available, meaning that anyone interested can reuse the designs, build their own machines and contribute to plastic recycling. So where’s the money? The fact that everything is open means that money can be made anywhere one can think of. Some businesses might be started by making the machines needed to process plastics commercially available (after all, not everyone will be interested in building them themselves). Others might want to create products for sale made from recycled plastic. In fact, lots of businesses have been started with this very idea, and the Precious Plastics website already has its own Bazaar where a myriad of pretty things made from recycled plastic are sold to customers worldwide.

The philosophy behind it is that the more people join in (driven by commercial prospects or not), the more plastic is recycled.

Mix and match

In conclusion, while the view that commercialisation must entail secrecy still seems to dominate in academia, the three examples above clearly demonstrate that sharing and openness do not have to work against commercialisation. On the contrary, collaboration can speed up and facilitate innovation and provide new commercial opportunities. What is needed, therefore, is perhaps a willingness to experiment and the flexibility to come up with a value proposition interesting enough for all partners to join in.

And importantly, effective sharing does not mean that everything must be made publicly available – any collaboration, at any level, is better than competition.

Reflections on Research Assessment for Researcher Recruitment and Career Progression – talking while acting?


Written by: Marta Teperek, Maria Cruz, Alastair Dunning

On 14 May 2019, we (Marta Teperek & Alastair Dunning from TU Delft, and Maria Cruz from VU Amsterdam) attended the “2019 workshop on Research Assessment in the Transition to Open Science”, organised in Brussels by the European University Association. The event accompanied the launch of the Joint Statement by the European University Association and Science Europe to Improve Scholarly Research Assessment Methodologies.

During the day we had a full agenda of valuable presentations and discussions on the topic of research assessment. Colleagues from several European universities presented case studies about current efforts at their institutions. All the presentations are available on the event’s website. Therefore, in this blog post, we don’t discuss individual talks and statements but offer some wider reflections.

Extreme pressure is not conducive to quality research

The first notion, repeated by several presenters and participants, was that the extreme work pressure contemporary academics face is not conducive to high-quality research. To succeed under the current rewards and incentives system, it is not enough to focus on explaining natural phenomena through a series of questions and tests, following the principles of scientific methodology, as 19th century scientists did; 21st century researchers need instead to concentrate on publishing as many papers as possible, in certain journals, and on securing as many grants as possible.

Such pressure, the panellists at the event continued, limits the time available for creative thinking; selects for work that advances career progression to the detriment of work that benefits society and truly advances scholarly knowledge; and drives out young researchers, with adverse effects on the equality and diversity in science.


Figure shown by Eva Mendez during her presentation, comparing a 19th century scientist with a 21st century academic. Source: https://www.euroscientist.com/?s=current+reward+system

You do not need to be a superhero – the importance of Team Science

Extreme work pressure has multiple causes. One significant factor is that academics are currently required to excel at everything they do. They need to do excellent research, publish in high impact factor journals, write and secure grants, initiate industry collaborations, teach, supervise students, lead the field, and much more. Yet it is rare for one person to have all the necessary skills (and time) to perform all these tasks.

Several talks proposed that research assessment shouldn’t focus on individual researchers, but on research teams (‘Team Science’). In this approach, team members get recognition for their diverse contributions to the success of the whole group.

The Team Science concept is also linked to another important aspect of research evaluation: leadership skills. In traditional research career progression, the academics who get to the top of the career ladder are those who are most successful at doing research (traditionally measured by the number of publications in high impact factor venues). This does not always mean that those researchers have the leadership skills (or had the opportunity to develop them) necessary to build and sustain collaborative teams.

Rik Van de Walle, Rector of Ghent University in Belgium, emphasised this by explaining that Ghent’s new way of assessing academics will place a strong focus on the development of leadership skills, thereby helping to sustain and embed good research.

“Darling, we need to talk”

There was a strong consensus about the necessity of continuous dialogue while revising the research assessment process. Researchers are the main stakeholders affected by any changes in the process, and therefore they need to be part of the discussions around changing the rewards system, rather than change being unilaterally decided by funders, management and HR services. To be part of the process, researchers need to understand why the changes are necessary and share the vision for change. As Eva Mendez summarised, if there is no vision, there is confusion. Researchers need to share this vision, as otherwise, they can indeed become confused and frustrated about attempts to change the system.

In addition, research assessment involves multiple stakeholders, and all of them need to be involved and take action in order for successful systemic changes to be implemented. Consultations and discussions with all these stakeholders are necessary to build consensus and a shared understanding of the problems. Otherwise, efforts to change the system will lead to distrust and frustration, as summarised by Noemie Aubert Bonn with her ‘integrity football’ analogy, in which no one wishes to take responsibility for the problem.


“Integrity football” by Noemie Aubert Bonn. Source: https://eua.eu/component/attachments/attachments.html?task=attachment&id=2176

… while acting!

At the same time, Eva Mendez reminded us that just talking and waiting for someone else to act will also lead to disappointment. She thought that more stakeholders should act and start implementing changes within their own spheres of influence. She suggested that everyone should ask themselves the question “What CAN I do to… change the reward system?”. She provided some examples: Plan S as an important initiative by funding bodies, the consequent pledges by individual researchers on their plans for Plan S adoption, and the FOS initiative – the Full Open Science Research Group – designed for entire research groups wishing to commit to practising Open Science.

Conclusions – so what are we going to do?

All three of us who attended the event work in research data support teams at university libraries. We are not directly involved in research evaluation, and we are grateful to our libraries for allowing us to participate in this event to broaden our horizons and deepen our interests. That said, we reflected on Eva’s call for action and thought that, besides writing a blog post, we could all contribute at least a little bit to changing the system.

Here are our top resolutions:

  • Marta will work on better promotion and recognition of our Data Champions at TU Delft – researchers who volunteer their time to advocate good data management practices among their communities;
  • Alastair will lead the process of implementing an updated repository platform for 4TU.Center for Research Data, which will give researchers better credit and recognition for research data they publish;
  • Maria will kick off the Data Conversations series at VU Amsterdam, which will provide a forum for researchers to be recognised for the research data stories they share with others.

In addition, whenever we have a chance, we will keep reminding ourselves and our colleagues about the importance of rewarding quality, and not quantity, in research. An example of that was the VU Library Live “Rethinking the Academic Reward System” talk show and podcast held at VU Amsterdam on 14 March 2019, which revolved around the question of how to change the academic reward system to facilitate research that is open and transparent and contributes to solving key societal issues.


I need your data, your code, and your DOI

This blog post was originally written and posted by one of our Data Champions.

I love open science. Since you are reading a scientific blog, I believe it is likely that you also support many open science ideas. Indeed, easy access to publications, code, and research data makes research easier to reuse, while also ensuring transparency of the process and better quality control. Unfortunately, the academic community is extremely conservative and it just takes forever for new standards to become commonplace.

The push for change in scientific practice comes from many directions.

  • Many funding agencies now require that all publications they fund are publicly accessible. The upcoming Plan S would go further and allow only open access publications for all publicly funded research.
  • Frequently, when submitting a grant proposal these days, one must also include a data management plan [1].
  • The glossy journals in our field are tightening their data publication requirements (see Nature and Science).
  • At the same time there are multiple grassroots initiatives for setting up open access community-run journals: SciPost [2] and Quantum.

Also, as individual researchers we can do a lot. For example, our group routinely publishes the source code and data for our projects. Recently Gary Steele and I proposed to our department that every group pledge to publish at least the processed data with every single publication. This is miles away from the long-term vision of publishing FAIR data, but it is a step in the right direction that does not cost too much effort and that we can take right now. We were extremely pleased when our colleagues agreed with our reasoning and accepted the proposal.

Policy changes and initiatives help improve practice, but policy changes are slow, and grassroots initiatives take extra work and might require convincing skeptically minded colleagues. Interestingly, I realized that there is another way to promote open science, one which doesn’t have any of those drawbacks. Instead, it is awesome from all points of view:

  • It does not require any effort on your side.
  • It has an immediate effect.
  • It helps researchers to do better what they are doing anyway.

Almost too good to be true, isn’t it? I am talking about one situation where every researcher is in a position of power: reviewing papers. The job of a reviewer is to ensure that the paper is correct, and that it meets a quality standard. As soon as the manuscript is even a bit complex, one cannot assert its correctness without examining the data and the code that are used in it. Likewise, if the data and the code comprise a significant part of the research output, the manuscript quality is directly improved if the code and the data is published as well.

Therefore I have decided that part of my job as a reviewer is to ensure that the code and the data are available for review whenever they are sufficiently nontrivial. I have requested the code and the data on several occasions, following this request with a suggestion to also publish them.

I was pleasantly surprised with the outcome. Firstly, nobody wants to argue against a reasonable request by a referee. Secondly, the authors are often happy to share their work and do a really decent job of it. Finally, on more than one occasion, simply requesting the data was enough for the authors to find a minor error in their manuscript and fix it. In the current system, where publishing this supplementary information does not bring any benefit, authors are seldom motivated to make their code understandable and their data accessible. Once a reviewer requests the data and the code, the situation changes: whether the paper gets published now also depends on the result of this additional evaluation.

So from now on, whenever I review a manuscript, in addition to any other topics relevant to the review, I am going to write the following [3]:

The obtained data as well as the code used in its generation and analysis constitute a significant part of the research output. Therefore in order to establish its correctness I request that the authors submit both for review. Additionally, for the readers to be able to perform the same validation I request that the authors upload the data and the code to an established data repository (e.g. Zenodo/figshare/datadryad) or as supplementary material for this submission.

I hope you join me and do the same [4].

 

[1] One has to note that the data management plans are mostly overlooked during the review.
[2] Full disclosure: I’m a member of SciPost editorial college.
[3] Obviously, I’ll adjust this bit if the paper doesn’t have code or data to speak of.
[4] Consider that bit of text public domain and use it as you see fit.

A Subjective Assessment of Research Data in Design


van Leeuwenhoek’s microscopes by Henry Baker (Source: Wikimedia Commons)

In the autumn of 2018 I took up the post of Data Steward in the Faculty of Industrial Design Engineering (IDE). As I am not a designer myself (my academic background is in historical literature), a significant portion of my time is dedicated to understanding how research is conducted in the realm of design, in particular trying to compose an overview of the types of data collected & used by designers, as well as how current and upcoming ideas & tools for research data management might potentially benefit their activities. This is no mean feat, and at present I cannot lay claim to more than a superficial understanding of the inner workings of design research. Through day-to-day data steward activities – attending events, reading papers and, perhaps most revealing, conversations with individual researchers, to name but a few – the landscape of design research data gradually becomes more intelligible to me. Cobbling together a coherent picture from these disparate sources requires a modicum of dedicated thought, so it was my good fortune to have recently been invited to an event arranged by the Faculty of Health, Ethics & Society (HES) at Maastricht University to present my experiences with design data thus far. Here we discussed and compared research data practices, and my preparation for this discussion afforded me the opportunity to reflect a bit on what research data means in the field of design, how design methodology relates to other academic fields and what kinds of challenges and opportunities exist for handling data and making it more impactful within the discipline and beyond.

The HES workshop, organized early in February of this year, was a forum for the group to discuss how their work and the data they produce intersect with some of the issues currently being debated within academic communities. A specific goal was to evaluate some of the arguments originating in the (at times competing) discourses of Open Science and personal privacy. Topics of discussion included how one should make sociological and healthcare data FAIR, especially given that the materials collected in HES are often predominantly qualitative in nature: personal interviews, ethnographic field notes, etc. Questions surrounding these topics are broadly applicable to some qualitative types of data in design as well, e.g. the extent to which data should be shared, in what format and under what conditions. The slides from my talk are available here: https://doi.org/10.5281/zenodo.2592280, and this blog post is intended to give them some context.

Research Data in Design

Maintaining an overview of the various types and amounts of data produced, analyzed and re-used within the Faculty of Industrial Design Engineering is a core aspect of my work as a data steward, but it is an ongoing challenge due to the heterogeneity of data used by designers and the quantity of different projects simultaneously active. Some designers do market research involving i.a. surveys, others take sensor readings and yet others develop algorithms for improving the manufacturing process. Each of these, along with the many other efforts within IDE, merit their own suite of questions and concerns when it comes to openness and privacy. The more we understand data types and usage in a field, the better we can judge the impact of present and future actions germane to research data – open access initiatives, legislation (esp. the GDPR), shifts in policy or practice, etc. More importantly, we can predict how we might turn some of these to our advantage.

For instance, TU Delft recently instituted a policy that all PhD students will be required to deposit the data underlying their thesis. For new PhD students, this will simply be a part of the process, one step among the many novel activities they experience on the way to earning their PhD. The real challenge lies with members of my faculty – the experienced researchers and teachers – as well as myself, who will have to identify the value in applying this new policy to research data in their field. To do this we must ask ourselves a series of questions. In addition to the aforementioned ‘what kind of data do we have and use?’, we must determine what should be made public and to what degree. Underlying all of this, of course, is a more fundamental question: how does sharing this information improve the production of knowledge in design and the fields it touches? Some of these queries have clear answers, but the majority require further discussion and reflection.

Data Sharing and Data Publishing

One common question I receive in various forms is why designers and design researchers should share their data more widely than they presently do. In many instances I find this returns to the aforementioned issue of diverse types of data. For some designers who have a clear definition of what their data is, why it is collected and how others can use the data, such as the DINED anthropometrics group, a conversation on what data to share and how can be fairly straightforward. But what are the actual benefits of sharing design notes or other types of context-bound qualitative data? In the data management community we have a set of commonly purveyed answers to this query, and I have been trying to see how they match up to existing practice in design.

The first is idealistic, that publishing data will further the field, improve science through increased transparency, accuracy and integrity. Reactions to this argument often take the form of a slow nod, a sign I take to be cautious optimism (one which I happen to share). This outcome is difficult to measure. I was once asked who would be interested in seeing the transcripts of x number of their interviews. A legitimate question, and one with an inscrutable answer – it is difficult to tell who will use your data if they do not know it exists in the first place. A corollary to this is that we ask people to weigh the requisite time investment in making materials publishable (sometimes substantial if working with qualitative and/or sensitive data) against this unpredictable benefit. I believe we need more evidence of the positive impact of making design data FAIR, whether this be figures of dataset citations (currently a desideratum) or anecdotal evidence of new contacts and collaborations resulting from data sharing. Essentially this means a few interested volunteers willing to learn the tools, put in some extra time and test the waters. Will sharing my sensor data attract the attention of a new commercial partner? Will my model be taken up and improved upon by the community using the product or service we design? These are certainly possibilities, but at present they remain a future less vivid.

For PhD students and early career researchers I frequently posit the possibility that publishing data, making their publications Open Access and taking other actions to make their work more transparent could yield direct career opportunities. This ties into efforts promoting an expanded interpretation of research assessment, such as DORA. In my current position, I feel that designers may be ahead of the curve when it comes to evaluating research impact. In addition to research papers published in journals boasting various impact factors, desirable results from design projects include engagement tools, reflections from projects, and prototypes, to name only a few. The weighting of these outputs is unclear to me when it comes to, e.g., obtaining a research position, but I suspect there is room here for allotting credit to demonstrations of open working. This is certainly the case in some fields, where lectureship advertisements include explicit language supporting Open Science. As far as I have been able to determine (in my extremely casual browsing of job postings) this is not yet an element of the narrative designers weave to present their work to potential employers, nor one sought by employers themselves. However, data publications as part of CVs attached to grant applications may indeed have some cachet, as funding agencies such as NWO and ZonMw presently stress the importance of such activities in the pursuit of maximizing the return on the grants they award. Here is an opportunity to serve the interests of many.

Food for Thought

One of my takeaway messages from these debates is that there is a need for a community – in design, in many research areas – an opportunity to convene and discuss issues and test some of the options being afforded or demanded under the umbrella of Open Science. Some design research shares a number of data issues in common with social sciences – questions of consent, of data collection and access – while others are more aligned with mathematics or medicine. Furthermore I’d be interested to hear whether any RDA outputs have an application in design, as well as whether repositories for design materials would be desirable and how they should be arranged. From my admittedly biased position, I believe there is much that designers stand to gain from picking up versioning tools or sharing data more widely, and I think designers’ methods and the iterative nature of design thinking, as I understand them, could in turn only benefit Open Science communities.

Data Stewardship – goals for 2019


Authors: Heather Andrews, Nicolas Dintzner, Alastair Dunning, Kees den Heijer, Santosh Ilamparuthi, Jeff Love, Esther Plomp, Marta Teperek, Yasemin Turkyilmaz-van der Velden, Yan Wang

From February 2019 onwards, with the appointment of the data steward at the Faculty of Electrical Engineering, Mathematics and Computer Science (EEMCS), the team of data stewards is complete: there is a dedicated data steward for every faculty at TU Delft. The work in 2019 therefore focuses on embedding the data stewards within their faculties, on policy development, and on making the project sustainable beyond the current funding allocation.

The document below outlines high-level plans for the data stewardship project in 2019.


Engagement with researchers

In 2019, the data stewards will (among others) apply the following new tactics to increase researchers’ engagement with research data management:

Meeting with all full professors

Inspired by the successful case study at the faculty of Aerospace Engineering, data stewards will aim to meet with all full professors at their respective faculties.

Development of training resources for PhD students and supervisors

Ensure that appropriate training recommendations and online data management resources are available for PhD students to help them comply with the requirements of the TU Delft Research Data Framework Policy. These should include:

  1. Appropriate resources for PhD students, e.g. support for data management plan preparation, and/or data management training for PhD students
  2. Support for PhD supervisors, e.g. data management guidance and data management plan checklists for PhD supervisors
  3. Online manuals/checklists for all researchers, e.g. information on TU Delft storage facilities, how to request a project drive, how to make data FAIR

Support for data management plans preparation

Ensure that researchers at the faculty are appropriately supported in writing of data management plans:

  1. At the proposal stage of projects, researchers are notified about available support for writing the data paragraph by the contract managers and/or project officers of their department
  2. All new grantees are contacted by the data stewards with an offer of data management and data management plan writing support
  3. Training resources on the use of DMPonline, which will be used by TU Delft for writing Data Management Plans, are available and known to faculty researchers

Coding Lunch & Data Crunch

Organise monthly 2h walk-in sessions for code and data management questions for faculty researchers. Researchers will be supported by all data stewards and the sessions will rotate between the 8 faculties.

The Electronic Lab Notebooks trial

Following up on the successful Electronic Lab Notebooks event in March 2018, a pilot is being set up to test Electronic Lab Notebooks at TU Delft in 2019. The data stewards from the faculties of 3mE and TNW are part of the Electronic Lab Notebooks working group and are in contact with interested researchers who will be invited to get involved in the pilot.

Data Champions

Further develop the data champions network at TU Delft:

  1. Ensure that every department at every faculty has at least one data champion
  2. Develop a community of faculty data champions by organising a meeting every two months on average
  3. Organise two joint events for all data champions at TU Delft and explore the possibility of organising an international event for data champions in collaboration with other universities

Faculty policies and workflows

In 2019, all faculties are expected to develop their own policies on research data management. However, successful implementation of these policies will depend on creating effective workflows for supporting researchers across the research lifecycle. Therefore, the following objectives are planned for 2019:

  1. Draft, consult on and publish faculty policies on research data management.
  2. Develop a strategy for faculty policy implementation
  3. Develop effective connections and workflows to support researchers throughout the research lifecycle (e.g. contacting every researcher who was successfully awarded a grant)

RDM survey

A survey on research data management needs was completed at 6 TU Delft Faculties (EWI, LR, CiTG, TPM, 3mE and TNW). In 2019, the following activities are planned:

  1. Publish the results of the survey conducted in the 6 faculties in a peer-reviewed journal
  2. Conduct the survey at BK and IDE  – first quarter of 2019
  3. Re-run the survey at EWI, LR, CiTG, TPM, 3mE and TNW – September 2019
  4. Compare the results of the survey in 2017/2018 with the results from 2019 of the re-run survey and publish faculty-specific reports with their key reflections on the Open Working blog
  5. Survey data visualisation in R or Python
    The visualisation of the 2017/2018 RDM survey results was available in Tableau, which is proprietary software. To adhere to the openness principle, and also to practice data carpentry skills (see below), the 2019 data visualisation will be conducted in R; a minimal illustrative sketch of this kind of scripted, open-source visualisation follows this list.
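As a minimal, purely illustrative sketch of the kind of scripted, open-source visualisation intended here (shown in Python with pandas and matplotlib; the team’s own plan mentions R, and the file and column names below are hypothetical placeholders):

    # Illustrative sketch only; the CSV file and column names are hypothetical.
    import pandas as pd
    import matplotlib.pyplot as plt

    # One row per survey respondent.
    responses = pd.read_csv("rdm_survey_2019.csv")

    # Count answers to one multiple-choice question per faculty.
    counts = (
        responses.groupby(["faculty", "shares_data_on_publication"])
        .size()
        .unstack(fill_value=0)
    )

    # Stacked bar chart, one bar per faculty, saved alongside the data.
    counts.plot(kind="bar", stacked=True)
    plt.ylabel("Number of respondents")
    plt.title("Data sharing at publication, by faculty (illustrative)")
    plt.tight_layout()
    plt.savefig("rdm_survey_2019_sharing.png")

Because the script and the exported survey data can be published together, anyone can re-run or adapt the figures – in line with the openness principle the team wants to follow.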

Training and professional development

On top of specific training on data management, in 2019 data stewards will invest in training in the following areas:

Software carpentry skills

Code management is now an integral part of research and is likely to become even more important in the coming years. Therefore, as a minimum, every data steward should complete the full software carpentry training as an attendee in order to be able to effectively communicate with researchers about their code management and sharing needs. In addition, data stewards are strongly encouraged to complete training for carpentry instructors to further develop their skills and capabilities.

Participation in disciplinary meetings

In order to keep up with the research fields they are supporting, data stewards will also participate in at least one meeting, specific to researchers from their discipline. Giving talks about data stewardship / open science during disciplinary meetings is strongly encouraged.

Events

In addition to dedicated events for the Data Champions, the team is planning to organise the following activities in 2019:

  • Software Carpentry workshops
    • March & November 2019 – at TU Delft
    • May 2019: at Eindhoven
    • October 2019: at Twente
  • Workshop on preserving social media data – workshop which will feature presentations from experts in the field of social media preservation, as well as investigative journalists (e.g. Bellingcat)
  • Conference on effectively collaborating with industry (managing the tensions between open science and commercial collaborations)

Individual roles and responsibilities

Some data stewards have also undertaken additional roles and responsibilities:

  • Yasemin: Electronic Lab Notebooks, Data Champions
  • Esther: Electronic Lab Notebooks, DMP registry
  • Kees: Software Consultancy Lead

Sustainable funding for data stewardship

The current funding for the data stewardship project (salaries for the data stewards) comes from the University’s Executive Board and runs until the end of 2020. The importance of the support the data stewards offer to the research community has already been recognised not only by the academic community at TU Delft but also by support staff.

In order to ensure the continuation of the data stewardship programme, and for TU Delft not to lose these highly skilled, trained and sought-after professionals, it is crucial that a source of sustainable funding is identified in 2019.

Data Stewardship at TU Delft – 2018 Report


Authors: Marta Teperek, Yasemin Turkyilmaz-van der Velden, Shalini Kurapati, Esther Plomp,  Heather Andrews, Robbert Eggermont

TU Delft has been leading the way in fostering a good research data management culture to uphold the quality, transparency and reproducibility of research. Since 2017, TU Delft has piloted the Data Stewardship programme with the aim of providing discipline-specific data management support to TU Delft researchers. The focus on disciplinary support is motivated by the belief that in research data management (RDM) there are no one-size-fits-all solutions.

TU Delft has eight faculties with a wide range of research topics. In order to provide dedicated disciplinary support to researchers, a Data Steward was appointed at every faculty. Each Data Steward has a PhD degree in a research area relevant to the faculty.

This is a condensed 2018 annual report describing the progress, activities, achievements and future prospects of the project.


Team building and laying the groundwork for the programme

In 2017 the majority of the work focused on the recruitment of Data Stewards at three faculties – Electrical Engineering, Mathematics and Computer Science (EEMCS), Aerospace Engineering (AE) and Civil Engineering and Geosciences (CEG) – and on laying the groundwork for the programme. In 2018 Data Stewards were appointed at the remaining faculties, which concluded the team-building work and brought the programme up to full speed. Since the beginning of 2019, the team of Data Stewards has been at full capacity, with a dedicated Data Steward per faculty.

The Data Stewards meet weekly for training, information sessions, and knowledge and practice exchange. The weekly meetings focus on the RDM needs of TU Delft researchers and on keeping up to date with the most recent trends in RDM, such as the FAIR principles, the General Data Protection Regulation (GDPR), and research and software reproducibility. Dedicated experts from TU Delft, as well as from the national and international scene, are regularly invited to these meetings. Communication channels and information-sharing spaces have also been created and are now used effectively by all team members. To increase the visibility of the programme and to openly share its progress, a Data Stewardship webpage and a dedicated section on the Open Working blog were launched. While the Data Stewards are embedded at each faculty, the Research Data Services (RDS) team operates centrally at the TU Delft Library. To establish strong links between these two teams, a joint Away Day is organised once a year. Additionally, members of the RDS team attend the weekly Data Stewards meetings and participate in some joint projects and undertakings (e.g. the roll-out of a new data management plan template). In addition, connections with faculty secretaries were developed through dedicated meetings about Data Stewardship, hosted by the Library and attended by all faculty secretaries. All of these activities were overseen and coordinated by the Data Stewardship Coordinator, who is based at the TU Delft Library.

Day to day activities of the Data Stewards

The role of the Data Steward at TU Delft is relatively new, so one of the first tasks of the Data Stewards was to become visible to researchers and gather intelligence on the type of support and advice researchers require within the faculty. In the first couple of months, Data Stewards engaged with researchers during faculty meetings, interviews, graduate school seminars, open science roadshows and by sending out a survey on the data management needs (see below for more details).

Once researchers were sufficiently aware of the help they could receive, Data Stewards started receiving questions and requests for data management support. The requests varied across the 8 faculties, but there were a few common topics on which Data Stewards were regularly consulted, such as advice on data management plans, information about data archiving options, data sharing possibilities, GDPR concerns, cross-border data transfers, commercially sensitive data, and data licensing.

Data Stewards are also the linking pin to the broader TU Delft research support ecosystem. Pragmatically speaking, Data Stewards act as general practitioners for all data-related questions and issues. If there is a need for a specific intervention from a university-wide legal, ethics or ICT specialist, Data Stewards know where to direct the researcher to get the most specific and useful answers.

In addition to advice and consultation, Data Stewards provide and/or facilitate on-request training and workshops on data management topics for researchers and PhD students. Agreements are made with faculty graduate schools to allocate credit points for participation.

At the moment all the Data Stewards are involved in leading RDM policy development at their respective faculties.

Data Champions

Although embedding Data Stewards at each faculty is a prerequisite for creating awareness and achieving cultural change in RDM, community building efforts are essential to fully accomplish these goals. Additionally, it is impossible for a single Data Steward to have all the necessary disciplinary background to understand and support all types of research carried out in one faculty. Therefore the Data Champions programme was launched in September 2018.

Data Champions are researchers who voluntarily act as local, community-based advocates for good data management and sharing practices. In return, they are provided with opportunities to showcase their activities during meetings at the department, faculty and TU Delft level, as well as at (inter)national conferences, giving them increased impact and visibility. Additionally, the Data Champions are offered travel grants to attend meetings and conferences where they can present their Data Champion activities, as well as training and workshops to learn new RDM skills to share with their local communities.

Suitable candidates for the programme are identified by faculty Data Stewards and are encouraged to become Data Champions. General communication with the Data Champions is carried out by the Data Steward at the Faculty of Mechanical, Maritime and Materials Engineering (3mE), who took on the role of Data Champions Community Manager. The first meeting to officially kick off the programme was held on 14 December 2018. It took place in an informal setting to encourage interactive discussions, knowledge exchange and networking, and was very well received by the Data Champions as well as by research support professionals. As of December 2018 we already had 27 Data Champions (at least one per faculty) and this number is still growing. The AE Faculty, as well as the Faculty of Technology, Policy and Management (TPM), already have at least one Data Champion in every department.

The Dean of the Faculty of Applied Sciences (AS) has recognised the importance of Data Champions in advocating for good data management and sharing practices, and also aims to have at least one Data Champion per department. The AS faculty already has six Data Champions, and two of them, Anton Akhmerov and Gary Steele, took the lead in creating a dedicated Open Data policy for their department (Quantum Nanoscience). The importance of the Data Champions programme has also been recognised at a strategic level at TU Delft, as evidenced by the wish of Prof. Rob Mudde, the Vice Rector Magnificus of TU Delft, to attend the next meeting of the Data Champions.

RDM Survey

To be able to offer dedicated RDM support, it is necessary to first define the problems and needs of the researchers. Our survey on research data management needs, which was initiated in 2017 at three faculties (EEMCS, CEG and AE), was extended to and completed at three other faculties in 2018 (TPM, 3mE and AS). The survey gathered 680 responses in total and the data visualisation is publicly available. The survey provided important information on the state of data management practices at TU Delft. It will be repeated yearly, so that the results serve as a benchmark indicating the effect of the Data Stewards' work on data management awareness and practices at the faculties.

The joint presentation summarising the survey results, given by the Data Stewards from the LT and 3mE faculties at the LIBER conference in July 2018, was very positively received by the community and has been downloaded 187 times. Based on this presentation, we were invited to submit a paper about the survey results to LIBER Quarterly. The survey will be run at the two remaining faculties (Architecture and the Built Environment – ABE, and Industrial Design Engineering – IDE) and re-run at the other faculties in 2019.

Data Stewardship in numbers

To summarise, in 2018 the Data Stewards received at least 245 requests for help with data management (note that not all requests are recorded, as recording them involves manually copy-pasting requests received by email). In addition, in 2018 Data Stewards conducted 68 dedicated interviews with researchers about their data management practices. Notably, the Data Steward at the AE Faculty has met with all the full professors at the faculty, which was positively received by TU Delft's ex-Rector Magnificus Karel Luyben.

In addition, Data Stewards adhere to the principle of "practice what you preach" and therefore share their work as openly as possible. In 2018 the team published 29 blog posts and other publications on the Open Working blog. Our most viewed blog post of 2018 is by the Data Steward at EEMCS and describes the results of the RDM survey (viewed 844 times).

Furthermore, the team attended 46 national and international conferences and meetings in 2018, including 33 occasions on which Data Stewards presented as invited or keynote speakers. The Data Steward from the 3mE Faculty was awarded the competitive Research Data Alliance Early Career Researcher Grant to attend the International Data Week 2018 conference in Botswana in November 2018. Again, in adherence with the openness principles, all presentations are publicly shared in the dedicated Data Stewardship at TU Delft community on Zenodo.

Data Stewardship event

On 24 May 2018 the team organised a dedicated event, "Engaging researchers with research data – Data Stewardship in practice", to showcase the work of Data Stewards at TU Delft and to exchange views and practices on Data Stewardship with other universities. The event was attended by over 120 people (35% of the participants came from countries other than the Netherlands). All participants judged the event as "good" or "excellent", and responses to the open questions were overwhelmingly positive.

All the photos (taken by Jan van der Heul from the RDS team, our Chief Photographer), videos and presentations from the event are publicly available. In addition, three participants wrote blog posts with their reflections and take-home messages (Marjan Grootveld, Danny Kingsley and Martin Donnelly).

Projects

Data Stewards have also been involved in many diverse projects. For example, the Data Stewards from the AE and CEG faculties took part in developing domain data protocols, which aim to provide researchers with disciplinary standards for data management in their research domains. The Data Stewards from the 3mE and AS faculties are part of the Electronic Lab Notebooks working group, which, following up on the successful Electronic Lab Notebooks event in March 2018, is now setting up a pilot to test Electronic Lab Notebooks at TU Delft in 2019.

Data Stewards from the faculties of TPM, 3mE, AS and CEG have been providing support for researchers working with software, in order to improve code management practices and make software more reproducible. Several workshops on software sustainability were organised, which resulted in a dedicated research paper that was accepted for presentation at the IEEE eScience 2018 conference and published in the conference proceedings. The preprint of this paper has already been downloaded 227 times.

These efforts eventually resulted in 4TU.Center for Research Data joining The Carpentries, a non-profit organisation that teaches foundational coding and data science skills to researchers worldwide, in December 2018. On 29 and 30 November, the first Software Carpentry workshop took place at TU Delft. Tickets sold out in a matter of days; around 30 researchers participated and another 45 were on the waiting list, showing the huge interest in and need for such training. Two more Carpentry workshops will take place at TU Delft in 2019. In addition, the Data Steward from the CEG faculty took the lead in organising walk-in coding consultations for researchers wishing to get tailored support on their code management practices; given their success and the positive feedback from researchers, these will continue to be organised on a regular basis. Moreover, a meeting with TU Delft researchers took place to discuss community-building efforts for good programming practices; a representative from the Carpentries and a researcher from the University of Amsterdam were invited to this meeting so that we could learn from their community-building experience.

Data Stewards have also been instrumental in driving forward the Open Science agenda. Dedicated Open Science roadshows (information sessions on research data management and on Open Access) have taken place at the AE, TPM, IDE and CEG faculties. In addition, the TPM faculty organised a dedicated workshop on Open Science for its PhD students. The presentation "Open Science in a nutshell: what's in it for me?", which was uploaded to Zenodo, has been downloaded 324 times and viewed 1,815 times.

In the current, changing funding landscape, where researchers are expected to publish their papers and data openly, it is no longer feasible to evaluate researchers for funding and promotion on the basis of high-impact journal publications alone. This is why the TPM Faculty was also actively involved in discussions about academic rewards and how to make Open Science count in academic careers. Prof. Bartel Van De Walle was the keynote speaker at the event on Open Science skills, which was co-organised by the Data Stewards, 4TU.Centre for Research Data and the EOSCPilot. Two separate blog posts highlighted the key aspects of the event (one about the event as a whole and another about the interactive workshop).

Following the principle that good data management should start as early as possible, the Data Steward from the AE Faculty piloted the use of Dataverse for keeping the research data of master's students. Valuable, curated datasets can subsequently be published easily with 4TU.Center for Research Data.

Recognising the need for disciplinary support and for community building, Data Stewards from the ABE and IDE faculties identified the potential for a Digital Humanities community at TU Delft and are currently in discussion with researchers across TU Delft to scope their interests and needs. A bottom-up approach is being taken to encourage researchers to take the lead in forming their own communities and to exchange research ideas, resources and challenges. The first community-driven meeting will take place in early January 2019 at the ABE faculty.

The GDPR came into effect in Europe on 25 May 2018. In August, two events dedicated to the GDPR and its implications for research data were co-organised by the Data Stewards and Research Data Netherlands. An important aspect of these two events was that representatives from multiple institutions and countries were present to talk about their individual approaches and considerations.

Policy Development

On 26 June 2018, the TU Delft Research Data Framework Policy was approved by TU Delft's Executive Board. The Framework Policy is an overarching policy on research data management for TU Delft as a whole and defines the roles and responsibilities at the University level. In addition, the Framework provides templates for faculty-specific data management policies. It is important to develop the faculty policies according to the discipline-specific RDM needs of researchers, so that researchers can use the policy as a roadmap for good RDM practice.

Currently, the deans and the faculty management teams, together with the Data Stewards, are busy developing faculty-specific data management policies, which will define faculty-level responsibilities. All interested researchers and research supporters will be invited to give feedback and thereby contribute to the development of the faculty policies. In the AS and 3mE faculties, which have around 1,000 researchers each, a single meeting would not be feasible; the Data Stewards of these faculties will therefore join the meetings of every individual department to introduce the policy and ask for feedback. The Data Champions are particularly encouraged to get involved in the development of the policy in their faculties, in order to fine-tune the policy to their disciplinary needs.

Future Prospects

As can be seen in this report, 2018 has been a very fruitful year for the TU Delft Data Stewardship programme, and with a full team of Data Stewards from the beginning of 2019 we expect 2019 to be even more productive. The faculty policies are expected to be rolled out and published in 2019. As one of the requirements of the policy is that all PhD candidates starting from 2019 attend data management training, the Data Stewards are currently developing dedicated training suited to the disciplinary needs of PhD candidates. For this, the Data Stewards are in close contact with the central and faculty graduate schools, PhD councils and colleagues from TU Delft Library.

We already have three events planned for 2019: a seminar titled "Limits of Reproducibility: Strategies for Transparent Qualitative Research", followed by a hands-on workshop on "Managing Qualitative Data for Sharing and Transparency", on 28 January; the kick-off of the Open Science seminars on 27 February; and a seminar on publishing reproducible research on 16 May.

Additionally, there will be a one-day event for all of TU Delft's Data Champions, a workshop on working with software and High Performance Computing (HPC), a conference on collaboration with industry and Open Science, and two more Software Carpentry workshops.

In addition, a dedicated blog post about our plans for 2019 will be published soon, so watch this space!