Presentation by Shalini Kurapati and Michiel de Jong for PhD students at TPM faculty at TU Delft (presented on 7 September 2018): https://doi.org/10.5281/zenodo.1409027
Authors: Wilma van Wezenbeek, Alastair Dunning, Marta Teperek
Date: 3 August 2018
- Report: Prompting EOSC in Practice (Rules of Participation are on page 28): https://www.eudat.eu/sites/default/files/prompting_an_eosc_in_practice_eosc_hleg_interim_report.pdf
- EOSC Open Consultation website: https://eoscpilot.eu/open-consultation
Structure of our comments
- Overarching comments
- Specific comments on the Report Prompting EOSC in Practice
- Comments on specific Rules of Participation
- Overall, the report is a useful outline of the vision for the European Open Science Cloud (EOSC), with concrete proposals for the rules of participation.
- Recommendations for working with commercial partners need to be carefully thought through. In particular, mechanisms need to be in place to ensure that commercial partners participating in the EOSC do so under the same terms as non-commercial partners.
- Care needs to be taken when deciding on recommendations for Member States and for research communities. The latter are always international in nature.
- It is not clear why additional intermediaries are required for managing financial contributions to the EOSC partners. Couldn’t transactions be arranged without additional intermediaries, which increase the costs and complexity of the ecosystem?
- In some places the recommendations are unclear and lack structure and alignment. It feels as if the founding vision for the EOSC is not clear enough to create harmonised rules for participation.
- The chapters on the business model and financing feel “far away” from today’s practice.
Specific comments on the Report Prompting EOSC in Practice:
Page 8 and 9 (Executive Summary)
“To help drive forward and implement the EOSC, the main thread of the report is to understand how the EOSC can effectively interlink People, Data, Services and Training, Publications, Projects and Organisations.”
- Should “labs and instruments” and “places” be added there as well?
“The EOSC should implement “whatever works” and do “whatever it takes” to increase the availability and volume of quality & user-friendly scientific information on-line.”
- Why should the EOSC focus on “volume”? This is an odd phrase to use.
“Define an EOSC Quality of Service (QoS) standards, separate for all elements of the ecosystem (data, data access services, software, etc.), to develop a trustable ecosystem.”
- Publications should also be part of the EOSC ecosystem
“Introduce, as part of EOSC’s mission, that a state of the art analysis is carried out on a national level within the Member States for assessing statistics and key assets around the composition and relevant clustering of the community of users, with the respective eInfrastructures & research infrastructures & scientific communities.”
- Why do this on a national (Member State) level? That is not the way these communities are constructed, or how science works.
“The universal entry point to the EOSC should provide access to a marketplace of efficient and effective services, with lightweight integration (authentication and authorization infrastructure, order management, etc…) where service providers can find service users and vice versa. Nothing is wrong with a number of multiple entry points which should be seen as a plus rather than a negative fragmentation.”
- This recommendation is unclear. We want a universal entry point, but promote decentralised ones?
“Introduce a regular assessment of EOSC against other alternatives, including commercial providers. This could be made to either enhance an EOSC Service, or to support new Services;”
- Alternatives to EOSC? This recommendation is unclear.
“Build a workforce able to execute the vision of the EOSC by ensuring data stewards, data and infrastructure technologists and scientific data experts who are trained and supported adequately.”
- Here, and also in the six action lines on page 14, we notice an expansion of what the EOSC should be or become and of what it envisions. The question is whether that expansion will actually help pace and clarity. Is the EOSC not slowly taking over all the topics laid down in the OSPP?
“All activities mentioned above have a stronger focus on research data as opposed to services for research data management.” (page 17)
- Why would “research data” and “research data management” be presented as opposing activities?
“Flexible ways to access and share data and direct access to fast networks to do so are at the top of the agenda for researchers.” (3.2, page 19)
- This is not true at all. What about the inability to find datasets because of a lack of interoperability and integrated resources? Or the lack of recognition for good data management? Or not being rewarded for doing thorough and reproducible research?
3.4. Governance (page 21)
“Cooperation is needed between end user, service providers, and funding agencies / policy makers.”
- And what about the organisations that represent the end users?
- It is also unclear how the depicted layers and other existing temporary or structural governing bodies (e.g. OSPP, Science Europe, ALLEA, EUA) will work together.
4.2 Business model (page 23)
“The EOSC Business model is a critical non-technical element that will determine the success of the EOSC vision.”
- Scoping principles around the business model requirements are also needed to outline the governance structure (of the infrastructure or service itself), community involvement, sustainability (see the paper by Bilder and Neylon), ownership and openness.
“The currents model for provisioning access to Research Infrastructures is based on the guidelines contained in the Charter for Access, where three main models are described” and “a model based on the Wide Access mode modulated by a negotiated, agreeable Access restriction, is the pragmatic way to start moving with the EOSC. Private providers willing to provide resources within the EOSC framework will envision a Market-Driven approach to support users.”
- Also with reference to the guiding principles, along with the business model, it seems sensible to set Wide Access as the default and to decide jointly where exceptions are allowed. Simply stating that private providers will envision a Market-Driven approach seems to contradict Rule of Participation 5.1, which states that “Private sector users should be considered stakeholders in the EOSC as well as participants from the start, not added after (…). By participating, private sector may want to invest in the long-term development and sustainability of the EOSC, along with the public sector and not just serve to exploit public data for free.”
- Also, while the Excellence-Driven Access and Market-Driven Access models are well defined, the principles behind the Wide Access model need to be better articulated.
“To coordinate acquirement, the EOSC and member states would also certify one or more brokers to manage the acquisition, distribution and payment for EOSC vouchers. These brokers could be government agencies in member states, entities within member states, transnational governments or private firms” (p.25)
- Why would it be necessary to involve brokers in the process?
4.3 Funding Model and Payment Mechanisms (page 25)
“…similarly to how YouTube pays people who upload videos based on how many times they are viewed.”
- You need to register your account and be compliant with YouTube’s terms in order to get paid, so that is not the default situation. We would plead for a starting situation based on reciprocity, not one that immediately begins with payments.
- It is difficult to judge what would be the best fit. Guiding principles are needed here too, e.g. transparency, efficiency and simplicity. Is there a way to avoid the giant profit margins made by some players in the scientific publishing industry? What principle should be used to achieve this? What have we learned from the big deals? We want researchers (end users) to be cost-aware, without burdening them with workflow troubles and micropayments.
- In Direct Support: the disadvantage “Resources can have internal foci, reducing access from outside stakeholders” could easily be overcome by establishing clear funding rules demanding equal access rights for internal and external stakeholders.
- In Direct Support: the disadvantage “Burdensome for commercial entities, even where they could provide significant cost savings and be incentivized to innovate” is unclear to us – why burdensome, and why specifically for commercial entities?
Comments on specific Rules of Participation (from p. 28 of the report Prompting EOSC in Practice)
5.1 Federating the existing infrastructures
“Private sector users should be considered stakeholders in the EOSC as well as participants from the start, not added after (…). By participating, private sector may want to invest in the long-term development and sustainability of the EOSC, along with the public sector and not just serve to exploit public data for free.
Brokers would be obliged to behave in a disinterested fashion with all providers. Entities that establish brokers must require that the broker does not establish a monopoly, or fall under the control of a service provider that then uses their influence to exclude other service providers from the marketplace.”
- How is this going to be achieved in practice?
5.2 Eligibility criteria for actors
“Key rules for participants therefore will include”
- These rules for actors are also interlinked with the eligibility criteria for data and for service providers. Perhaps it would be valuable to map them.
- Identifiers and Metadata:
“While maintenance of this metadata is fundamentally the responsibility of the submitter of data or other digital objects…”
- Why would maintenance of metadata be the responsibility of the submitter of data and not of the data service provider/repository?
5.3 Participation according to the business model
“The development of novel capabilities, long-term storage/maintenance of data resources and fixed cost capabilities are likely to be provided using direct payments to organisations setting up nodes in the EOSC. By contrast, numerous research activities by individual investigators may be supported via EOSC vouchers. Nodes in the EOSC will have to be able to engage with the business model. This will probably imply a business arrangement with the brokers set up by funding agencies in order to accept these vouchers as payment.”
- Why need brokers? Couldn’t transactions be arranged without additional intermediaries, which increase the costs and complexity of the ecosystem?
“As the submitters control access, they retain liability for data leakage and to ensure that relevant individuals accessing information meet the necessary requirements.”
- Why would submitters, and not the service providers, be responsible for access control and be liable for data leakage?
“As regards to data quality and warranties as to fitness for purpose, the EOSC MVE would need to operate under the principle of caveat emptor. That is, while submitters may be liable for outright fraudulent data, the nature of scientific research data determines that EOSC data should probably be provided with no warranties for any particular purpose, although Section 5.5 section below, on assessing data quality, should be also taken into consideration.”
- What does “the nature of scientific research data determines that EOSC data should probably be provided with no warranties for any particular purpose” mean? Is it not in contradiction with the statements which follow straight after: “Data should be: »» processed lawfully, fairly and in a transparent manner in relation to the data subject (principle of ‘lawfulness, fairness and transparency’); »» collected for specified, explicit and legitimate purposes;”
- The GDPR is about data “processing”, not about data “collection” only. Data re-use is also a form of data processing. The statements above seem contradictory to “no warranties for any particular purpose”.
5.5 Data quality
- Suggestions are made that search results for datasets could be ranked based on reviews, views etc., and a comparison is made to TripAdvisor. We find this rather worrying: 1. Isn’t there a risk that this would lead to manipulation of data/scores? 2. Wouldn’t it lead to the self-perpetuation of certain objects/datasets (similarly to what happened with journal impact factors)? 3. It could also be very detrimental to certain disciplines.
We talked with Dr. Riccardo Riva, an assistant professor at the TU Delft Faculty of Civil Engineering and Geosciences who has published several datasets via the 4TU.Centre for Research Data. We spoke about his recent paper in the open access journal The Cryosphere on the surprising effects of melting glaciers and ice sheets on the solid Earth through the last century and how this affects reconstructions of past sea level from sparse observations.
The data underlying Riva’s paper were made publicly available through the 4TU.Centre for Research Data. Riva believes that sharing data “helps progress in science” and that “if you get public money to do research, then the results should be public”.
“When data are open, then anybody can use it. There will be some competition, but that’s only good. Competition leads to new ideas, which in turn lead to even more ideas and to progress in science.”
The 4TU.Centre for Research Data, hosted by the TU Delft Library, offers researchers a reliable long-term archive for technical and scientific research data. It creates opportunities for linking publications to underlying data thereby promoting improved findability and citability for research data. Over 90% of the data stored in the archive are environmental research data coded in netCDF – a data format and data model that, although generic, is mostly used in climate, ocean and atmospheric sciences. Therefore, 4TU.ResearchData has a special interest in this area and offers specific services and tools to enhance the access to and the use of netCDF datasets. TU Delft Library also offers Research Data Management Support during all stages of the research lifecycle.
On 26 June 2018, the new TU Delft Research Data Framework Policy was approved by TU Delft’s Executive Board. The Framework Policy is an overarching policy on research data management for TU Delft as a whole and it defines the roles and responsibilities at the University level. In addition, the Framework provides templates for faculty-specific data management policies.
From now on, the deans and the faculty management teams, together with the Data Stewards, will lead the development of faculty-specific policies on data management which will define faculty-level responsibilities.
If you are working at TU Delft and if you would like to be involved in the development of faculty-specific policies, please do get in touch with the relevant Data Steward.
The full text of the policy (pdf) is available below.
Written by: Rinze Benedictus, staff advisor, UMC Utrecht & PhD candidate at CWTS, Leiden University
On 31 May 2018 Marta Teperek spoke with Rinze Benedictus about changing the academic reward system at the Utrecht Medical Center. The blog post below was written by Rinze Benedictus to summarise the main points of the discussion for the readers of the Open Working blog.
University Medical Centers: places where the society enters the building
In the early 2000s in the Netherlands, medical faculties merged with university hospitals to become university medical centers (UMCs), with a triple task: research, healthcare and teaching. The aim was to better integrate biomedical research and healthcare and improve healthcare through research. This coincided with an international rise of an indicator-based view on scientific quality, as expressed by rankings, and bibliometric indicators like the H-index and other citation measures.
On one hand, this meant the creation of organisations where prestige was built on academic values that were increasingly informed by an indicator-based view on scientific quality. UMCs became apt producers of scientific knowledge that ‘counted’ bibliometrically. Papers published by UMCs accounted for around 40 percent of all Dutch scientific output and they were internationally cited well above average. This led to a remarkable publishing rate in certain medical subfields; in cardiology, for example, the highest-producing Dutch professor authored more than 100 papers per year.
On the other hand, in terms of staff and funding, UMCs are organisations where delivering healthcare is the primary activity. They are, simply put, large hospitals where many, many patients are treated. So, from an academic perspective, “society entered the building”. This created a need to look at biomedical research conducted at UMCs from the patients’ perspective.
At the same time, the premise that producing “high quality” biomedical knowledge would more or less automatically benefit patients increasingly came under scrutiny. There was believed to be a mismatch between the mission of UMCs and the incentive and reward system for researchers.
On the road to changing academic rewards at UMC Utrecht
At the UMC Utrecht over the course of many years steps have been taken to address this issue. In 2010 six strategic multidisciplinary research programs were formed that focused on a limited number of disease targets. During the first evaluation of these research programs, societal stakeholders were involved. In addition, support for innovation and valorisation was increased.
In 2015 the UMC Utrecht decided that societal impact of the research programs should be one of the overarching goals. To bring incentives and rewards in line with this goal, the UMC Utrecht used the nation-wide Standard Evaluation Protocol (SEP) in academic evaluation in order to further emphasize the societal relevance of research (on a group level). Next, changes were also introduced on the individual level: portfolios were introduced for aspiring professors and associate professors. In these documents, researchers described themselves in five different aspects. Portfolios replaced traditional CVs that were often centred around publications.
The debate about incentives and rewards was significantly shaped by dean prof. dr. Frank Miedema and three other academics, who started the Science in Transition initiative in 2013. This fuelled the debates about academic evaluation systems across the Netherlands, and it triggered a range of meetings and discussions at the UMC Utrecht. All these events created an atmosphere where alternatives to existing incentives and rewards could be discussed.
The debate was also boosted by the reproducibility crisis in academia and the discussion about “research waste”. That made it very difficult to ignore the issues or to continue claiming that publications are always up to high standards because they are peer reviewed. Specifically, in the field of health research, the report from the Health Council and the recent minister’s response were a wake-up call. The Health Council urged UMCs not just to try to be “excellent” but to pursue “research questions which are relevant to practice”.
Of course, the new approach is not without critique. Researchers are concerned about the “transportability” of their CVs to other institutes or other countries: will they be recognized elsewhere for their achievements outside of publications? More specifically, researchers engaged in basic research feel they are under pressure to demonstrate societal relevance.
Change is coming
The debate has gained momentum and is increasingly leading to actual change at universities. At Utrecht University the portfolio is now used during all hiring/promotion procedures for professors. In addition, the faculty of Geosciences uses the “impact pathways” approach to evaluate its research according to the SEP. The narrative-based Impact Case Studies from the UK Research Excellence Framework were an obvious inspiration.
Signs of change are also visible at the Nijmegen Donders Institute for Brain, Cognition and Behavior (part of Radboud UMC) that introduced the programme of Sustainable science. And the Free University Amsterdam published a manifesto for “gross academic value”, which has now been linked to Open Science (Dutch).
From a science policy perspective, the changes at UMC Utrecht resonate very well with the current push for Open Science in Europe and in the Netherlands. The openness of the research agenda, combined with open data and open access, implies a new way of doing research that requires fitting incentives and rewards.
- Open Science podcasts at the University of Utrecht: https://openscience-utrecht.com/oscu-podcast/
- “Fewer numbers, better science” – article about UMC Utrecht’s change in academic rewards published in Nature: https://www.nature.com/news/fewer-numbers-better-science-1.20858
- “Do our measures of academic success hurt science?” – article in Inside Higher Ed: https://www.insidehighered.com/blogs/rethinking-research/do-our-measures-academic-success-hurt-science
Late last month, I took a day trip to the Netherlands to attend an event at TU Delft entitled “Towards cultural change in data management – data stewardship in practice”. My Software Sustainability Institute Fellowship application “pitch” last year had been based around building bridges and sharing strategies and lessons between advocacy approaches for data and software management, and encouraging more holistic approaches to managing (and simply thinking about) research outputs in general. When I signed up for the event I expected it to focus exclusively on research data, but upon arrival at the venue (after a distressingly early start, and a power-walk from the train station along the canal) I was pleasantly surprised to find that one of the post-lunch breakout sessions was on the topic of software reproducibility, so I quickly signed up for that one.
I made it in to the main auditorium just in time to hear TU Delft’s Head of Research Data Services, Alastair Dunning, welcome us to the event. Alastair is a well-known face in the UK, hailing originally from Scotland and having worked at Jisc prior to his move across the North Sea. He noted the difference between managed and Open research data, a distinction that translates to research software too, and noted the risk of geographic imbalance between countries which are able to leverage openness to their advantage while simultaneously coping with the costs involved – we should not assume that our northern European privilege is mirrored all around the globe.
Danny Kingsley during her keynote presentation
The first keynote came from Danny Kingsley, Deputy Director of Scholarly Communication and Research Services at the University of Cambridge, whom I also know from a Research Data Management Forum event I organised last year in London. Danny’s theme was the role of research data management in demonstrating academic integrity, quality and credibility in an echo-chamber/social media world where deep, scholarly expertise itself is becoming (largely baselessly) distrusted. Obviously as more and more research depends upon software driven processing, what’s good for data is just as important for code when it comes to being able to reproduce or replicate research conclusions; an area currently in crisis, according to at least one high profile survey. One of Danny’s proposed solutions to this problem is to distribute and reward dissemination across the whole research lifecycle, not only attaching credit and recognition/respect to traditional publications, but also to datasets, code and other types of outputs.
Questions from the audience
After a much-appreciated coffee break, Marta Teperek introduced TU Delft’s Vision for data stewardship, which, again, has repercussions and relevance beyond just data. The broad theme of “Openness”, for example, is one of the four major principles in the current TU Delft strategic plan, indicating the degree of institutional support it has as an underpinning philosophy. Marta was keen to emphasise that the data stewards which Delft has recently hired are intended to be consultants, not police! Their aim is to shift scholarly culture, not to check or enforce compliance, and the effectiveness of their approach is being measured by regular surveys. It will be interesting to see how they have got on in a year or two’s time: already they are looking to expand from one data steward per faculty to one per department.
There followed a number of case studies from the Delft data stewards themselves. My main takeaways from these were the importance of mixing top-down and bottom-up approaches (culture change has to be driven from the grassroots, but via initiatives funded by the budget holders at the top), and the importance of driving up engagement and making people care about these issues.
Data Stewards answering questions from the audience
After lunch we heard from a couple of other European universities. From Martine Pronk, we learned that Utrecht University distributes its research support across multiple units and services, including the library and the academic departments themselves, in order to address institutional, departmental, and operational needs and priorities. In common with the majority of UK universities, Utrecht’s library is the main driving and coordinating force, with specific responsibility for research data management being part of the Research IT programme. From Stockholm University’s Joakim Philipson we heard about the Swedish context, which again seemed similar to the UK’s development path and indeed my own home institution’s. Sweden now has a national data services consortium (the SND), analogous to the DCC in the UK, and Stockholm, like Edinburgh, was the first university in its country to have a dedicated RDM policy.
We then moved into our breakout groups, in my case the one titled “Software reproducibility – how to put it into practice?”, which had a strange gender distribution with the coordinators all female, but the other participants all male. One of the coordinators noted that this reminded her of being an Engineering undergraduate again. We began by exploring our own roles and levels of experience/understanding of research software. The group comprised a mixture of researchers, software engineers, data stewards and ‘other’ (I fell into this last category), and in terms of hands-on experience with research software roughly two-thirds of participants were actively developing software, and another third used it. Participants came from a broad range of research backgrounds, as well as a smaller number of research support people such as myself. We then voted on how serious we felt the aforementioned reproducibility crisis actually was, with a two-thirds/one-third split between “crisis” and “what-crisis?” We explored the types of issues that come to mind when we think about software preservation, with the most popular responses being terms such as “open source”, “GitHub” and “workflows”. We then moved on to the main business of the group, which was to consider a recent article by Hut, van de Giesen and Drost. In a nutshell, this says that archiving code and data is not sufficient to enable reproducibility, therefore collaboration with dedicated Research Software Engineers (RSEs) should be encouraged and facilitated. We broke into smaller groups to discuss this from our various standpoints, and presented back in the room. The various notes and pitches are more detailed than this blog post requires, but those interested can check out the collaboratively-authored Google Doc to see what we came up with. The breakout session will also be written up as a blog post and an IEEE proposal, so keep an eye out for that.
After returning to the main auditorium for reports from each of the groups, including an interesting-looking one from my friend and colleague Marjan Grootveld on “Why Is This A Good Data Management Plan?”, the afternoon concluded with two more keynote presentations. First up, Kim Huijpen from VSNU (the Association of Universities in the Netherlands) spoke about “Giving scientists more of the recognition they deserve”, followed by Ingeborg Verheul of LCRDM (the Dutch national coordination point for research data management), whose presentation was titled “Data Stewardship? Meet your peers!” Both of these national viewpoints were very interesting from my current perspective as a member of a nationally-oriented organisation. From my coming perspective as manager of an institutional support service – I’m in the process of changing roles at the moment – Kim’s emphasis on Team Science struck a chord, and relates to what we’re always saying about research data: it’s a hybrid activity, and takes a village to raise a child, etc. Ingeborg spoke about the dynamics involved between institutional and national level initiatives, and emphasised the importance of feeling like part of a community network, with resources and support which can be drawn upon as needed.
Closing the event, TU Delft Library Director Wilma van Wezenbeek underlined the necessity of good data management in enabling reproducible research, just as the breakout group emphasised the necessity of software preservation, and in effect confirming a view of mine that has been developing recently: that boundaries between managing data and managing software (or other types of research output) are often artificially created, and not always helpful. We need to enable and support more holistic approaches to this, acting in sympathy and harmony with actual research practices. (We also need to put our money where our mouth is, and fund it!)
After all that there was just enough time for a quick beer in downtown Delft before catching the train and plane back to Edinburgh. Many thanks to TU Delft for hosting a most enjoyable and interesting event, and to the Software Sustainability Institute whose support covered the costs of my attendance.
Several resources from the event are now available:
- Presentations and other materials (via Zenodo)
- Recorded presentations (via TU Delft website)
- Recorded presentations (via YouTube)
- Photos (via Flickr)
- Tweets (via Twitter)
- Blog posts:
Authors (in alphabetical order; underlined are the main authors of the blog post): Charlotte Buus Jensen, Valentino Cavalli, Maria Cruz, Raman Ganguly, Madeleine Huber, Mojca Kotar, Iryna Kuchma, Peter Löwe, Inge Rutsaert, Melanie Stummvoll, Gintare Tautkeviciene, Marta Teperek, Hannelore Vanhaverbeke
On 1 December 2017 Maria Cruz and Marta Teperek facilitated a workshop titled Evaluation of Research Careers fully acknowledging Open Science Practices. This was part of a larger conference – Digital Infrastructures for Research 2017 in Brussels, Belgium. The workshop was attended by about 15 people from various backgrounds: library professionals, repository managers, research infrastructure providers, members of international networks for research organisations and others. Below is the summary of what happened at the workshop, key discussions and suggested next steps.
Rationale for the workshop
The workshop was inspired by a report published by the European Commission’s Working Group on Rewards under the Open Science Policy Platform “Evaluation of Research Careers fully acknowledging Open Science Practices”. Noting that “exclusive use of bibliometric parameters as proxies for excellence in assessment (…) does not facilitate Open Science”, the report concludes that “a more comprehensive recognition and reward system incorporating Open Science must become part of the recruitment criteria, career progression and grant assessment procedures…” However, in order to make this a reality, multiple stakeholders need to be involved and make appropriate steps to recognise and implement open science practices. The workshop aimed at developing roadmaps for some of these stakeholders and at identifying ways of effectively engaging with them, and discussing their possible goals and actions.
What happened on the day
The initial plan was to look into four different stakeholder groups: research institutions, funding bodies and governments, principal investigators, and publishers. However, given that only about 15 people attended the workshop, and to ensure group work and interaction between participants, it was decided to focus solely on the first two stakeholder groups: research institutions, and funding bodies and governments. These stakeholders were also identified in the original EC report.
The participants split into two teams, each trying to create a roadmap for a different stakeholder group using collaborative Google documents. To start with, each team tried to address the following four questions for its stakeholder group:
- What methods could be used to effectively engage with this stakeholder group and to ensure that they are willing to implement Open Science Practices in their research evaluation?
- What should be the goals for this stakeholder to fully implement Open Science Practices in research evaluation? What are the key milestones?
- What will be the main barriers to implementation of these goals and how to overcome them?
- What metrics could be used to assess this stakeholder's progress towards implementing Open Science Practices in research evaluation?
Subsequently, the groups swapped stakeholders, and reviewed and commented on the work of the other group, enriching the roadmaps and adding a broader perspective. The workshop concluded with a reporting session which brought the two groups together and allowed the attendees to engage in discussion.
Key observations about successfully engaging with research institutions
The participants identified internal and external drivers important for engaging with research institutions and encouraging them to change their academic rewards systems to ones based on open science practices. Not surprisingly, requirements for open science from funding bodies and governments were at the very top of the list of external drivers. If funders start using commitment to open science practices as a funding criterion, institutions will have no choice but to reward researchers for open science in order to continue securing funding.
One of the most appealing internal drivers discussed was lobbying within institutions by prominent researchers who are themselves committed to open science: this could not only help institutions roll out policy changes, but also demonstrate to younger researchers that commitment to open science might be valuable for their careers.
Key observations about successfully engaging with funding bodies and governments
Interestingly, external drivers were also seen as important factors for engaging with funding bodies and governments. Joint statements from several academic institutions were mentioned as a tangible way to establish effective collaborations with funding bodies; there therefore seems to be a need for synergy between institutions and funding bodies/governments. In addition, it was stressed that better networks between international funding agencies and governments might also lead to cross-fertilisation of ideas and the exchange of good practice. For example, the European Commission could advise Member States to develop national policies on open science.
The lack of credible metrics to measure commitment to open science practices was identified as one of the main barriers: it might discourage funders and governments from changing academic rewards systems.
Can quality be measured with quantitative metrics?
The initial discussion about the lack of credible evaluation metrics as a potential barrier preventing funding bodies and governments from changing their academic rewards systems led to a longer debate about the usefulness of metrics in open science in general. One of the participants suggested that a new metric, analogous to the journal impact factor but tailored to research data, could potentially offer a solution. However, others felt that it might simply be inappropriate to measure qualitative outcomes with quantitative metrics, and that such an approach risks replicating all the flaws of metrics based on the journal impact factor. It was proposed that high-quality peer review of selected outputs should instead be emphasised, promoted, and rewarded.
Next steps
The short-term aim is to share the outcomes of this workshop with the authors of the European Commission's Working Group on Rewards report "Evaluation of Research Careers fully acknowledging Open Science Practices".
In addition, roadmaps for the two remaining stakeholder groups (publishers and principal investigators) still need to be drafted. Moreover, as pointed out by workshop participants, even if it proves impossible (or undesirable) to create metrics for commitment to open science practices, it would still be valuable to develop frameworks for the different stakeholders, providing them with broad guidelines on what kinds of achievements could be rewarded. The same frameworks could also be used by researchers as a source of inspiration and motivation for open science.
Finally, one of the key drivers for change identified during the workshop was pilot funding schemes, offered by funding bodies, to which only researchers able to demonstrate a commitment to openness could apply. Such funding schemes would not only allow the community to learn about suitable ways of assessing open science practices, but would also provide researchers practising open science with immediate benefits and much-needed recognition.
- Slides in support of the workshop
- Roadmaps prepared by the workshop participants for the two different stakeholder groups
- European Commission’s Working Group on Rewards under the Open Science Policy Platform: “Evaluation of Research Careers fully acknowledging Open Science Practices”