Authors: Yasemin Turkyilmaz – van der Velden, Santosh Ilamparuthi, Marta Teperek
On the 8th of June, we had an online meeting with 32 participants and a very active discussion on “How to champion our OS community?”. The why, what and how of this meeting can be found below.
Why did this meeting happen?
The community started as Data Champions but has since evolved. Data Champions have brought in other topics such as reproducible computational workflows, open-source software, open access to publications, citizen science, open hardware etc. So, does the “data” in “data champions” properly reflect what the community has become?
The community is also very inclusive, and people who join sometimes do so because they want to learn from others. Is the name “champion” inclusive enough?
In parallel, there was the emergence of Open Science Communities in the Netherlands and most Dutch universities have one… Would we also like to have an Open Science Community, or is our Data Champions community the Open Science Community at Delft?…
What happened during this meeting?
After a brief introduction, we heard about the Open Science Communities in the Netherlands from Loek Brinkman, co-founder of the Open Science Community Utrecht, the first Open Science Community in the Netherlands. Then Marta Teperek explained the pros and cons of the possible ways forward, which are:
Stay as “Data Champions”
Rebrand to ‘Open Science Community Delft’
Join an umbrella ‘Open Science Community Delft’*
* – if it comes to exist in the future
Then we split into smaller groups to discuss the pros and cons of each option. This was followed by each group reporting the outcomes of the group discussions.
What are the outcomes of this meeting?
The reporting of group discussions was followed by voting. 29 people participated and here are the results:
Stay “Data Champions”: 0 votes
Rebrand to ‘Open Science Community Delft’: 16 votes
Join an umbrella ‘Open Science Community Delft’: 13 votes
What are the next steps?
The outcomes of the discussions and the voting results suggested that the participants agree that the TU Delft Data Champions community should be rebranded as Open Science Community Delft.
There was some confusion about the option “Join an umbrella ‘Open Science Community Delft’”: there is not yet an existing ‘Open Science Community Delft’, and it was unclear how this option would differ from “Rebrand to ‘Open Science Community Delft’”, since members of the Open Science Community Delft can in any case start up member initiatives focused on a specific practice or discipline. We already have an example of this within the TU Delft Data Champions community:
TPM Data Champion Anneke Zuiderwijk has initiated and is regularly organizing Open Data Meetings.
Therefore Open Science Community Delft can be an umbrella community with member initiatives focused on a specific practice or discipline. Anyone is welcome to start such a subgroup and we are happy to support those interested in doing this.
All these outcomes, together with the meeting notes and recording were shared with the community. The community was given the opportunity to share their doubts, questions or feedback until 22 June. As there were no objections, on 23 June the final decision of rebranding the Data Champions community to Open Science Community Delft was shared with the community.
There will be a branding effort involved in this, which needs to be discussed. The community will be kept abreast of the next steps.
The recent contract signed between the Dutch research institutions and the publisher Elsevier mentions the possibility of an Open Knowledge Base (OKB), but the details are vague. This blog post looks more closely at definitions of an OKB within the context of scholarly communications, and at elements that need to be taken into account in building one.
Authors: Alastair Dunning, Maurice Vanderfeesten, Sarah de Rijcke, Magchiel Bijsterbosch, Darco Jansen (all members of above taskforce)
Definition of an Open Knowledge Base
An Open Knowledge Base is a relatively new term, and liable to multiple interpretations. For clarification, we have listed some of the common features of an Open Knowledge Base (OKB):
it hosts collections of metadata (descriptive data) as opposed to large collections of data (spreadsheets, images etc)
the metadata is structured according to triples of subject, predicate and object (e.g. The Milkmaid (subject) is painted by (predicate) Vermeer (object))
each point of the triple is usually related to an identifier elsewhere, for example Vermeer in the OKB could be linked with reference to Vermeer in the Getty Art and Architecture thesaurus
The highly structured nature of the metadata makes it easier for other computers to incorporate that data; OKBs have an important role to play for search engines such as Google as well as a basis for far-reaching analysis
All the data (whether source or derived) is open for others to access and reuse, whether via an API, SPARQL endpoint, a data dump, or a simple interface, typically via a CC0 licence
The data is described according to existing standards, identifiers, ontologies and thesauri
the rules for who can upload and edit the data will vary between OKBs. All OKBs need to deal with a tension between data extent, richness and quality
The technical infrastructure is usually hosted in one place – however, the OKB will link to other OKBs to make a larger network of open metadata. In essence, this creates a federated infrastructure
In some, but not all, cases, the OKB is not an end in itself but supplies the data that other services can build upon; thus there is a deliberate split between the underlying data and the services and tools that use that data
An OKB shares some aspects with a Knowledge Base of Metadata on Scholarly Communication, but is broader both in terms of content and in its commitment to openness
The best current example of an Open Knowledge Base is Wikidata. An example of a service built on top of Wikidata is Histropedia. Library communities around the globe also contribute journal titles to the Global Open Knowledgebase (GOKB).
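As a toy illustration of the triple structure described above, statements can be stored and queried by pattern. This is only a sketch; the URIs are invented placeholders, not real Wikidata or Getty identifiers:

```python
# Minimal sketch of an OKB's triple structure; the URIs below are
# hypothetical placeholders, not identifiers from any real OKB.
MILKMAID = "https://example.org/artwork/milkmaid"
VERMEER = "https://example.org/person/vermeer"
PAINTED_BY = "https://example.org/relation/painted-by"

# Each statement is a (subject, predicate, object) triple.
triples = [
    (MILKMAID, PAINTED_BY, VERMEER),
]

def objects(subject, predicate, store):
    """Return every object matching a (subject, predicate) pattern."""
    return [o for s, p, o in store if s == subject and p == predicate]

# "Who painted The Milkmaid?"
print(objects(MILKMAID, PAINTED_BY, triples))
```

Because every element of a triple is an identifier, the answer to the query is itself a link that can be followed into other datasets, which is what makes the federated network of OKBs possible.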
Open Knowledge Bases and Scholarly Communication
Traditionally, metadata related to scholarly communications has been managed in discrete, unconnected, closed, commercial systems. Such collections of data have been closely tied to the interface to query the data. This restricts the power of the data – whoever creates the interface determines what types of questions can be asked.
An Open Knowledge Base counters this. Firstly, it separates the interface from the data. Secondly, it opens up and connects the underlying metadata to other sources of metadata. Such an approach allows much greater freedom – users are no longer restricted by the specific manner in which the interface was designed nor restricted to querying one set of metadata. Such openness makes the OKB flexible about the type of data it incorporates and when – other data providers with different datasets can connect or incorporate their data at a date that suits them. The openness also allows third parties to build specific interfaces and different services on top of the OKB.
For the field of scholarly communication, an ambitious federated metadata infrastructure would connect all sorts of entities, each with clear identifiers. Researchers, articles, books, datasets, research projects, research grants, organisations, organisational units, citations etc could all form part of a national OKB that connects to other OKBs. It would also help create enriched data, which could then be fed back into the OKB.
Such a richness of metadata would be a springboard for an array of services and tools to provide new analyses and insights on the evolution of scholarly communication in the Netherlands.
The best current example of an Open Knowledge Base for scholarly communication is the one developed by OpenAIRE.
Issues in constructing an Open Knowledge Base for the Netherlands (OKB-NL)
A well constructed open knowledge base can play a significant role in innovation and efficiency in the scholarly communications ecosystem. Given the breadth of data it can contain, it could be the engine for sophisticated research analysis tools. But it requires significant long-term engagement from multiple stakeholders, who will be both providing and consuming data. It is imperative that such stakeholders work in a collaborative fashion, according to an agreed set of principles.
Whatever principles are used to underlie an OKB, there also needs to be serious thought given to practical concerns. How would an OKB be created and sustained? An OKB is an ambitious project; if it is to succeed it requires strong foundations. The following issues would all need to be addressed:
Who would steer the direction of the OKB? How would any board reflect the multiple research institutions contributing to the OKB? To make an OKB effective, it would require the ongoing participation of every research institution in the Netherlands; how would the business model ensure that? And who would actually do the day-to-day management of the OKB? What should be the role of commercial organisations contributing to the OKB and its underlying principles? Should they have a stake in the governance of an OKB?
Who would pay the initial costs for establishing an OKB? How would the ongoing cost be paid? Via institutional membership? Via consortium costs? Via government subsidy? Via public-private partnerships? Would all institutions gain equal benefit from the OKB? Would they pay different rates?
What kind of technical architecture does the OKB require: centralised, with all the data in one place, or distributed, with data residing in multiple locations? If the latter, how can we ensure that the data is open and interoperable? Or some kind of clever hybrid? Given its role as the foundation of other services, how can it be guaranteed that the OKB has as close to 100% uptime as possible? And how can it be as responsive as possible, providing instantaneous responses to user demand?
Scope of Metadata Collection
The potential scope of an OKB is huge. Each content type has its own specific metadata schemes, and these schemes evolve over time. How are different metadata types incorporated over time? Article metadata first? Then datasets, code, funding grants, projects, organisations, authors, journals? What about different versions of metadata schemes: do all backlog records need to be converted?
Quality, Provenance and Trust
Would the metadata in the OKB be sufficient to underpin high-quality services? What schema would need to be created for the different sorts of metadata? What critical mass of metadata would be required to create engaging services? What kind of metadata alignment and enrichment would need to be undertaken? Would that be done centrally or by institutions and publishers? What costs would be associated with that? Would the costs be ongoing? Should provenance be attributed to the original suppliers of the metadata and of metadata enrichments?
Service development and Commercial engagement
What incentives would there be for commercial partners to a) provide metadata and b) build services on top of the OKB? Would the investment to develop such services simply lead to one or two big companies dominating the service offer? Would they compete with services not relying on the OKB? What would happen to enriched data created by commercial companies? Would it be returned to the OKB?
Would the resulting services be of use to all contributing members? Could the members develop their own services independent of commercial offerings?
Implementation timeline: Lean or Big Bang
When implementing the OKB, should we first carefully design the full stack of the infrastructure and solve all the questions within the grand information architecture? Or let it grow organically, and start with collecting the metadata in the formats that are already legally available according to the publishing contracts? Can we do both in parallel: start collecting, and start designing?
To give a hint on realising the OKB, we probably need to introduce two other concepts. One is the star rating system for data; the other is building the OKB in two different phases.
Linked Open Data Star Rating
This is a concept introduced by Sir Tim Berners-Lee (https://5stardata.info/): rather than an internet of web pages that can be read by humans, the goal is data on the web that can be read and interpreted by machines, directly and interoperably, using a unified agreed standard, the Resource Description Framework, or RDF. Putting your data on the web in RDF earns you five stars. The vision of the OKB is to have all the metadata available as 5-star linked open data. This, however, is not the current reality. The data provided by publishers and universities and put into the OKB is 3-star data at best: 1. made available on the web (e.g. in a data repository); 2. in a structured manner (e.g. as a table or nested structure); 3. in a non-proprietary format (e.g. CSV, JSON, XML).
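To make the step from 3-star to 5-star data concrete, here is a hedged sketch of lifting a CSV delivery towards linked data: plain values are replaced by resolvable identifiers (the fourth star) and linked to other people's data (the fifth star). The DOI and ORCID values are invented for illustration; the property URIs are taken from the Dublin Core terms vocabulary:

```python
import csv
import io

# 3-star input: structured, non-proprietary CSV, as a publisher might deliver it.
# The DOI and ORCID below are made-up examples, not real records.
csv_text = "doi,title,author_orcid\n10.1234/abc,Sample Article,0000-0001-2345-6789\n"

# Towards 5 stars: name things with URIs and express each fact as a
# subject-predicate-object triple that links to external identifiers.
triples = []
for row in csv.DictReader(io.StringIO(csv_text)):
    article = "https://doi.org/" + row["doi"]
    author = "https://orcid.org/" + row["author_orcid"]
    triples.append((article, "http://purl.org/dc/terms/title", row["title"]))
    triples.append((article, "http://purl.org/dc/terms/creator", author))

for t in triples:
    print(t)
```

The same information is present in both forms, but only the triple form lets a machine follow the DOI and ORCID links into other datasets without human interpretation.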
This brings us to the next concept.
IST and SOLL: OKB in different phases and at different speeds
We start right away: in Phase 1 we build an OKB with what we have right now, using mature technology and robust services, while in Phase 2 we start building the OKB we envision.
The following dives into the details and makes things more concrete, to make tangible what a Phase 1 OKB can actually be.
IST: Start small and lean – What can we do in the next couple of years?
To make an initial start that is feasible and to work on pilots, we need to work with the data and the data formats that existing systems can already deliver.
Below follows our train of thought on how the Phase 1 OKB could look, but we would love to hear yours in the comments below.
OKB; data repository for 3-star data
In this initial phase we appoint a data repository as the initial location for metadata providers to periodically deliver their metadata files under a CC0 licence, including information on the standard of the files delivered (how to interpret the syntax and semantics). This can be, for example, the 4TU.Datacentre or Dataverse.nl, where OKB deposits can be made into a separate collection/dataverse.
Services; working with 3-star data
The datasets are available to the service providers, who need to download the files and process them into their own data structures. Here, at the services level, the interlinks between the different information entities come into existence and can be used for the purposes of the service.
Metadata providers; delivering 3-star data
In our case we have different kinds of metadata providers. To name a few: third parties, universities, funders. The third parties can be publishers, indexes and altmetrics providers. Each of these can deliver different information entities in the scholarly workflow, and can deliver files in different formats in an open standard with a CC0 licence.
These contain information about organisational units, researchers, projects, publications, datasets, awarded grants, etc.
All information entities need to be delivered as individual files in a zipped package. That package must be logically aggregated and deposited, e.g. by year or month. Provenance metadata about the source providing the data and an open licence need to be added. The deposit also needs descriptive metadata, including pointers to the open standard of the data files, to adhere to the FAIR principles (https://www.go-fair.org/fair-principles/).
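As a sketch of what such a deposit package could look like in practice (the file names, source label and standard pointer are illustrative assumptions, not a prescribed OKB format):

```python
import io
import json
import zipfile

# Individual entity files, one per information entity type (made-up records).
entity_files = {
    "publications.json": [{"doi": "10.1234/abc", "title": "Sample Article"}],
    "researchers.json": [{"orcid": "0000-0001-2345-6789"}],
}

# Provenance and licence information travels with the package.
provenance = {
    "source": "Example University CRIS",                # who supplied the data
    "aggregation": "2020-06",                           # logical grouping, e.g. by month
    "license": "CC0-1.0",
    "standard": "https://example.org/okb-standard-v1",  # hypothetical pointer to the open standard used
}

# Zip everything into one deposit, ready for upload to the repository.
buffer = io.BytesIO()
with zipfile.ZipFile(buffer, "w") as package:
    for name, records in entity_files.items():
        package.writestr(name, json.dumps(records))
    package.writestr("provenance.json", json.dumps(provenance))

deposit = buffer.getvalue()
print(len(deposit), "bytes")
```

Keeping the provenance file inside the package means a service provider downloading the deposit always knows who supplied the data, under which licence, and how to interpret the files.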
Service providers can then download the data from the OKB and fill, for example, a search index with that information. This can then be used, for example, to enrich the metadata of the Dutch CRIS systems.
SOLL: the open knowledge base of the future
To stay true to the 5-star Linked Open Data mindset, this OKB is an interconnected, distributed network of data entities, where access and connectivity are maintained by each owner of the data nodes. Those node owners can be the publishers, funders and universities. They can independently make claims, or assertions, about the identifiers of the content types they maintain.
For example, publishers maintain identifiers for publications, universities for affiliations of researchers, ORCID for researchers, funders for funded projects, etc. This interconnectivity comes, firstly, from the fact that node owners can make claims about their content types in relation to the content types of other node owners. For example, a publisher can assert that a publication was made with funds from a particular funded project, independently from the funder itself.
This stays true to the early days of the internet, where everyone could make their own web page and link to others without bidirectional approval. Secondly, assertions are made using entities and relationships defined by the linked open data cloud. This ensures interoperability in a way that machines understand at a semantic level, so that they can use the concepts in their internal processes. For example, a data field called “name” can be interpreted by one machine as the first name of a person, while another machine interprets it as the name of an organisation or an instrument. Using the ontologies of the linked open data cloud pins the exact semantic meaning to the field.
To keep track of who made which assertion, provenance information is added. This way, services are able to weigh assertions from one node owner differently than those from another. (More about that in Nanopublications: www.nanopub.org)
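A minimal sketch of how a service could weigh the same assertion differently per node owner; the identifiers and trust weights below are invented for illustration, not part of the nanopublication model itself:

```python
# Each assertion records who made it, so services can weigh sources differently.
# All identifiers here are made-up examples.
assertions = [
    {"s": "pub:123", "p": "fundedBy", "o": "grant:9", "asserted_by": "funder:NWO"},
    {"s": "pub:123", "p": "fundedBy", "o": "grant:9", "asserted_by": "publisher:X"},
]

# A service might trust a funder more than a publisher for funding claims.
trust = {"funder:NWO": 1.0, "publisher:X": 0.5}

def score(statement, assertions, trust):
    """Sum the trust weights of every node owner asserting this statement."""
    s, p, o = statement
    return sum(trust.get(a["asserted_by"], 0.0)
               for a in assertions
               if (a["s"], a["p"], a["o"]) == (s, p, o))

print(score(("pub:123", "fundedBy", "grant:9"), assertions, trust))  # → 1.5
```

Because the provenance stays attached to each claim, different services can apply their own trust policies to the same underlying assertions without altering the data.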
Zooming out, we see the OKB, connected with the linked data cloud, as a “knowledge representation graph” that has numerous applications in answering the most complex research questions.
Authors: Esther Plomp, Lena Karvovskaya, Yasemin Turkyilmaz – van der Velden
From the 14th of April until the 7th of May, the Mozilla Foundation ran “Movement-building from home”, a series of online meetings. The topic of these meetings was activism, community building, and maintenance in the special circumstances around COVID-19. Below follows a summary with some of the key points from these meetings and some resources that were brought together by the participants.
Throughout these calls, it was inspiring to hear about the ways that people deal with the new situations caused by COVID-19. Everyone is experiencing similar challenges but shows the remarkable ability to adapt to these changes, and we felt connected through our compassion and understanding during these unusual times.
The sessions were hosted by Abigail Cabunoc Mayes and Chad Sansing from the Mozilla Foundation. There were four sessions per week to enable people to join at their preferred time. The calls were open to anyone interested in online community and movement building and in sharing experiences. The notes and recordings are available online:
Each session started with a check-in where participants wrote some information about themselves in a collaborative Google document, as well as their expectations of the call. After the check-in, the discussion topic of the week was introduced by Abby, as well as the functionalities of the tools used (Google Docs, Zoom). This was followed by the expectations that Abby and Chad had of the participants of the calls. To facilitate an inclusive and accommodating environment, we were referred to the Community Participation Guidelines. Issues could be reported to either Abby and/or Chad. Once a secure environment was established, the goals of the call were outlined based on the topic of the week. After this introduction, the participants got to contribute their experiences on the topic. Abby and Chad summarized the experiences and added their comments to the document. In the next part, Abby and Chad introduced the content they had prepared and answered questions. Every call included break-out rooms (2-3 people) where participants could have more intimate discussions related to the topic of the meeting. Finally, reflections and take-away points from these break-out discussions were summarised, and participants were directed to other resources and means to stay in touch with the community.
Week 1: Online Meetings
The first week focussed on our positive and negative experiences with online meetings. The participants listed some successes and challenges.
To host a successful online meeting, you should first choose an accessible platform that meets the needs of your community in terms of privacy and safety (see some examples of platforms here). It should be clear what participants require from the call, and you should follow up with anyone that could not attend the meeting. You should be explicit about the types of contributions you expect from participants, such as note-taking, facilitating the discussion or keeping time. It is good to allow for asynchronous contribution through a collaborative note-taking document to make space for questions as well as contributions from anyone that could not attend. You should document your meeting through e.g., a recording, captioning, or a summary. To facilitate more interaction, participants can be split up into smaller groups using break-out rooms. When your meeting has ended, it should be clear what the next actions are, and how participants can stay in touch with you and each other.
Week 2: Community Care
The second week of the series focussed on community care, which was defined as:
all of the ways in which you show attention to and care for your community members across different dimensions of accessibility, equity, and inclusions, from caring meeting times to compensation to hitting pause when things go wrong to take care of people first, etc.
Community care is basically any care provided by an individual to benefit other people in their life. The participants listed some successes and challenges.
Here are the take home messages from this call:
Ensure belonging by MIMI (make it more inviting), set up enough structure to provide a clear purpose, while maintaining enough flexibility to care for each other, and people’s safety and privacy.
Repeating foundational practices such as the Community Participation Guidelines while checking in with community members, and showing gratitude and recognition.
Flexibility and prioritization for adjusting to the new norms: which elements must you sustain, and what can be de-emphasized to reduce overwhelm?
Assessing needs, especially those around privacy and security and communicating risks involved with various platforms.
Being prepared for how to disagree: taking increased response time to overcome fear-driven defensiveness, and sharing key information and gathering responses ahead of time to limit surprises.
Careful and caring moderation. Generating new communication channels when necessary while avoiding duplication/overload.
Reframing professional development & training by asking what people need to do, by offering training on not only new online tools and risks involved but also on new life and work balance demands. Using collaboration and mentorship to show care and build capacity for continuity.
Opt-in social time to help members feel they belong to their community, using lightweight prompts such as Google Street View tours of hometowns, pet parades and virtual play dates.
Expect to make mistakes and rehearse taking responsibility and moving forward from them.
Ensuring sustainability by re-assessing roles, responsibilities, and contribution pathways, identifying what matters most to continue online, and scanning for funding opportunities.
Week 3: Personal Ecology
Personal Ecology is a term that is not well known outside of Mozilla’s community. It refers to self-care in a wide sense of the word: things one does to stay happy, healthy, and engaged with one’s work.
Personal ecology means “To maintain balance, pacing and efficiency to sustain our energy over a lifetime.”– Rockwood Leadership Institute, Art of Leadership
At the beginning of the meeting, some prompts were offered to the participants:
The big idea behind personal ecology is that taking care of oneself is among the responsibilities of an activist, leader or community manager. Self-care must be strategic: it requires intent, care, frequent self-assessment and support from others.
“You can’t sustain a movement if you don’t sustain yourself.” – Akaya Windwood
The crucial part of this call was a self-care assessment. The participants were invited to make a copy of the inventory prompts below. Ten minutes were devoted to ranking one’s response to each item from 1 (never) to 5 (always).
I have time to play in ways that refresh and renew me.
I am energized and ready to go at the start of my day.
I regularly get a good night’s sleep.
I effectively notice and manage stress as it arises.
I can execute my current workload with ease and consistency.
I have time to daydream and reflect.
During the day I take time to notice when I’m hungry, tired, need a break, or other physical needs.
I periodically renew my energy through the day, every day.
I eat food that satisfies me and sustains my energy throughout the day.
I often have ways to express my creativity.
I have time to enjoy my hobbies.
Those that love and care about me are happy with my life’s balance.
I spend enough time with family and friends.
I take time to participate in fun activities with others.
I feel connected to and aware of my body’s needs.
I take time to pause and reset now and again.
I am satisfied with my balance of solitude and engagement with others.
I make time for joy and connection.
I feel at peace.
At the end of my day I am content and ready to sleep.
After the ranking was done, the participants were invited to make lists for themselves of:
Things to continue.
Things to improve or increase.
Things to try or work towards.
The meeting was completed with everyone writing down one powerful next step they will take.
In a time of crisis, such as during COVID-19, a community manager should give hope and be empathetic, but also be realistic and transparent about the situation. Abby introduced a community management principle: the Mountain of Engagement. A sustainable community needs two things: 1) new members and 2) a way for existing members to grow within the community. These two things involve different levels of engagement (on the Mountain). First there is the ‘discovery’ level, where members first hear about the community. Then there is the ‘first contact’ level, where they first engage with the community. After first contact, new members can contribute to the community in the ‘participation’ phase. When this contribution continues, they reach the ‘sustained participation’ level. They may also use the community as a network (‘networked participation’) and eventually take on more responsibilities in the ‘leadership’ level. It is good practice to consider how you will engage your members through these various levels from the start of your project or community. Your members will have different requirements and needs, depending on which level they are at:
Discovery; where the promotion of your community is important, which can be done through having a public repository that has an open license so that it is clear for others what they can reuse.
First contact: your community needs to have a clear mission, and multiple communication channels to make it easy for people to get in touch. This includes offering some passive channels which allow them to just follow the community.
Participation: Personal invitations to contribute work best. In these invitations you should set clear expectations by having contributing guidelines and a code of conduct (to which members can contribute). It is also good practice to let your participants know how much time and effort is expected from them if they want to contribute. By allowing your members to contribute at their own terms you allow them to take ownership of their contributions.
Sustained Participation: It is important to recognise the contributions of your community members, as well as to allow their skills and interests to move the community forward in line with the community mission.
Networked Participation: Your community should be open to mentorship and training possibilities to allow members to grow. You can also think about professional development and offer certificates to members.
Leadership: Leadership should be inclusive, and involve value exchanges. It should be clear what is expected of community members when they take responsibilities. Leadership can take many forms and can come from anyone within the community.
It is also important to recognise that your community members can move up and down these levels of the Mountain of Engagement. Sometimes they will even need to depart and come back to your community at another time. To help move members forward, it is important to assign them time-bound and specific tasks in accordance with their capacities and to recognise their contributions. Not everyone in your community needs to contribute and engage at every opportunity.
In a time of crisis, it can also be important to focus on the things that really matter right now, rather than overburden your community members. Here it is important to ask your community members about their needs and preferences. You can, for example, consult them on their preferences for communication platforms in order to meet where they are. Patience and reflection are great goods in these situations, as they allow us to think more deeply about why we work in certain ways and what we can learn from working online. It is important to realise that anything we build up now can also be used when a time of crisis is over!
How should you move your event or conference online? By examining the three online events I attended this week (11-15 May 2020), we might find some insight into this question. These events were all related to open science and data, but the organisers used very different conference/event formats to move their events online:
This week the fifth edition of csv,conf,v5 took place. During the welcome session up to 900 people were tuning in! The conference took place in several sessions on Crowdcast. A Slack channel was available to facilitate interaction between participants, and everyone was encouraged to introduce themselves in a dedicated introduction channel. Social interaction was also stimulated during lunch through a Zoom meeting (using break-out rooms to facilitate smaller group discussions), as well as a llama showing (I am not making this up). This was the perfect set-up for a conference with over 800 participants attending each session! Recordings will be made available on YouTube.
Remote ReproHack – #reprohack
To get a better understanding of what a ReproHack is, I can recommend reading a blog post about a ReproHack that I attended in person last year, as well as this overview by the hosts. (A blog post about the Remote ReproHack will be posted next week!) The Remote ReproHack took place on Blackboard Collaborate in the web browser. This platform was chosen because it allows participants to move freely between break-out rooms. This is not possible with tools such as Zoom, where participants have to be moved to break-out rooms by the host. During the Remote ReproHack, the participants could choose a paper to reproduce from a list of 37 papers (whose authors volunteered to have their work reproduced). The ReproHackers could indicate in a hackmd.io file which paper they were working on. We could work on reproducing the papers in three sessions, each lasting about an hour. About half an hour of those sessions was spent on providing constructive feedback to the authors of the paper, using a previously set up feedback form. During the day, three short keynotes took place on reproducibility and on tools participants could use to make their research more reproducible. No recordings took place during this event.
So which format should you pick?
It is important to stress that there is no right or wrong format for your online event: these three conferences/events all succeeded in moving online in an engaging way. The idea is to pick the format that fits your community or that best gets your message across.
For example, if you would like to host an event that is similar to a conference, the format that csv,conf,v5 used is more applicable. This requires your participants to be available on only one or two days, but for longer stretches at a time. If your participants cannot commit to a full day online, the format of the Open Scholarship Week could be more interesting.
In terms of commitment, both the Open Scholarship Week and csv,conf,v5 made it possible to follow only certain parts or sessions and catch up later through the recordings. This was a bit more complicated for the Remote ReproHack: if you missed the explanation in the morning, it would be difficult (though not impossible) to attend the rest of the day.
All of these formats allow for active contributions from participants. The Open Science Framework session of the Open Scholarship Week and the Remote ReproHack had a more workshop-like format, which allowed participants to contribute during the session itself (beyond just asking questions). Csv,conf,v5 used the lunch break as a space for interaction, by hosting a lunch Zoom meeting. With the dedicated csv,conf,v5 Slack channel it was easy to reach out to other participants and engage with them. For example, a self-organised session on communities of practice took place, which resulted from the engagement in the Zoom lunch and continued afterwards with participants who noticed the message in the Slack channel. The Remote ReproHack made use of a hackmd.io document. Other online meetings, such as the Collaboration Workshop 2020, made use of a Google document where individuals could sign up and reach out to each other. These text documents also allow participants of the event to contribute their thoughts.
Most of these interactions could benefit from some stimulation in the form of a facilitator or organiser who takes charge. Here it is very important to have an inclusive environment and point people towards your code of conduct (see those of the Remote ReproHack and csv,conf,v5 for excellent examples), to ensure that everyone feels safe to contribute. Sometimes interaction can be facilitated simply by stating that it is possible to introduce yourself in the chat, or by giving an example of how to do it, as was done for the closing session of the Open Scholarship Week.
It is also important to allow your participants to catch a break and recharge. The Open Scholarship Week sessions were spread over several days, which allowed for plenty of time to recharge between the sessions. Csv,conf,v5 had a number of scheduled breaks of ~15 minutes between the sessions, which was praised by participants in the chat. During the Remote ReproHack there were scheduled breaks, and during the interactive sessions you could also choose to pause whenever you felt like it.
Hosting events online does not make them boring or static. By facilitating interaction with the participants and having a llama show in the break, your event will be just as memorable as an in-person one!
Edited on the 16th of May to add the “Breaks” session, as inspired by a conversation on Twitter with Dr. Elaine Toomey from the Open Scholarship Week.
Over the past months, we have seen many initiatives in response to the global COVID-19 pandemic. The TU Delft Data Stewards team has also been asked by different teams and organizations for expertise in providing recommendations on data management. There has been an evident increase in demand for data management guidance during the crisis, with respect to processing sensitive data, using it effectively and efficiently, and sharing it in a multi-stakeholder environment.
What did we do?
We (data stewards) strongly feel that good data management is not only crucial for timely response to the COVID-19 crisis, but also a general necessary practice for responsible and trustworthy organizations and policy decisions.
Therefore we created this Data Management Concept Note to highlight the rationale, the main points to be considered for effective data management, and suggested sensible approaches for organizational implementations.
Who should read it?
Any individual, group or organization whose work involves operations with large or small volumes of data, software or other types of information should read this concept note. It introduces clear and instructive steps to check key data management issues in every stage of data handling, such as collecting, processing, storing, analysing or disseminating. This general guide can be used to assess current awareness and available practices, and serves as the starting point for further developing an institution-specific data management strategy, policy and workflow.
I have recently been appointed Head of Research Services at TU Delft Library. One of my first tasks is to review how effectively our services, advice and support is communicated to the research community.
And of course, the Library is not the only group running services for researchers. Our colleagues in ICT, Legal Services and the Valorisation Centre all help staff during different aspects of the research life cycle.
This creates a profusion of services. The services are good. But the way they are communicated is awful.
The rationale, help and background for all of these services are usually dumped on the university website. The ideal website is a sleek and concise piece of ingenious design, providing answers in seconds. Most university websites, however, are a sprawling mass of text and images, out of date or 404 pages, with conflicting or unclear information.
Going to the university website is like that moment you come back from a long holiday and find a mass of letters, brochures, business cards and magazines stuffed through your letterbox. Where to start?
I’ve not run any focus groups, but I suspect that researchers would find it difficult to find information about these university services. Most will resort to Google, or knocking on their neighbour’s door. These can be useful solutions, but they don’t necessarily point back to the library services. The library is leaking customers.
Before we try and find solutions to this, it’s worth looking at some of the particular problems we face in dumping all our service information on the university website. If we start to identify these, then we can start to create something better for researchers.
1) Siloed content – Services are not presented in an integrated way. For instance, one service for data management might be run by library, and a related one by ICT. But they do not refer to one another at all.
2) Guidance is text heavy – do visitors really like wading through long scrolls of text? Some might, but they are few. But others need quick, immediate guidance via text, image, video or walkthrough. (Additionally, how can we make the services themselves more intuitive so less guidance is needed?)
3) Excess of articles. Researchers are pressed for time. They want to concentrate on their research. So having multiple pages describing features will drive researchers to distraction.
4) For editors and administrators, it’s not easy either. Publishing information via a website Content Management System can be a distressing usability experience. Often the website editor is not the expert on the actual service, leading to further difficulties in getting the right content online.
5) University websites are organisation-focussed, not service-focussed; they are organised in a way that reflects how a department is line managed. (Good businesses never do this.) But researchers don’t really care which service or department runs a tool they need – they just want to get access to the tool.
6) Lack of community ownership. This is a more intangible problem. Researchers often avoid library or other websites run by the support teams, because such websites don’t quite speak the researchers’ language. They don’t build up a sense of a user community. Truly great university services and related guidance would give researchers a stake in how these services are run and described.
So, there we have it. Some of the key problems in advertising library and other services. I will follow this up with a second blog post looking at some of the solutions.
Authors and contributors: Santosh Ilamparuthi, Marta Teperek, Anke Versteeg and Yasemin Turkyilmaz – van der Velden
The worldwide spread of COVID-19 has led to a shortage of vital healthcare equipment, especially personal protective equipment (PPE) and ventilators, and with it a dramatic increase in demand for possible solutions. People around the world, including researchers, have started working on a variety of them, from simple ventilators to homemade face masks and testing equipment.
More does not always mean better
While this outpouring of ideas and projects is welcome, healthcare professionals and others who wish to make use of these resources do not have any guidance as to where to turn. Searching online for these projects is not easy, and it might be difficult to find the best resource. Searching in Google for “open ventilator design” returns about 70,200,000 results. Even when promising solutions are found, the quality and feasibility of a solution is often not easily verifiable. Malicious actors and profiteering abound as well. Additionally, the documentation detailing what is needed to reproduce the solutions is mostly unavailable.
The need for FAIR
In short, we want to help make the projects and proposed solutions as FAIR (Findable, Accessible, Interoperable and Reusable) as possible. The solution for this comes from José Carlos Urra Llanusa, a former TU Delft student and current Data Champion. Along with collaborators and members of the Delft Open Hardware team, he has developed the CombatCovid App, which enables the standardisation of documentation for hardware projects related to COVID-19. This open-source project, hosted on GitHub, has developers from around the world working on it. It also provides assistance to those who are unfamiliar with the best practices in documenting open-source hardware projects. José has also recently received funding from the EOSC Secretariat Steering Group for hardocs, a package to make hardware documentation intuitive and user-friendly.
The app also uses sophisticated search to find hardware designs within the documented database. It is able to handle misspellings and multiple keywords. The fast search functionality, along with documentation in multiple languages, brings hardware documentation for COVID-19-relevant devices to a broad audience. The documentation format is already available in English and Spanish; versions in Dutch, Russian and Portuguese are currently being prepared.
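To give a feel for how misspelling-tolerant, multi-keyword search can work, here is a minimal sketch in Python using the standard library's difflib. This is only an illustration of the general technique, not the CombatCovid App's actual implementation; the design titles and the `fuzzy_search` function are hypothetical.

```python
from difflib import get_close_matches

# Tiny illustrative index of hardware designs (hypothetical titles,
# not the actual CombatCovid database).
designs = {
    "mit emergency ventilator": "MIT E-Vent",
    "laser-cut face shield": "Delft Face Shield",
    "3d-printed mask adapter": "Mask Adapter",
}

def fuzzy_search(query, index):
    """Return design names whose title words approximately match
    every keyword in the query, tolerating misspellings."""
    hits = []
    for title, name in index.items():
        title_words = title.split()
        # A design matches if each query word is close (similarity
        # ratio >= 0.7) to some word in its title.
        if all(get_close_matches(word, title_words, n=1, cutoff=0.7)
               for word in query.lower().split()):
            hits.append(name)
    return hits

print(fuzzy_search("ventilatr", designs))    # misspelled keyword → ['MIT E-Vent']
print(fuzzy_search("face sheild", designs))  # two keywords, one misspelled → ['Delft Face Shield']
```

Real search engines typically combine this kind of edit-distance matching with an inverted index and ranking, but the core idea, comparing each query keyword against the indexed vocabulary with a similarity threshold, is the same.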
Documentation is vital for re-usability
A single location to access all relevant projects along with documentation in a single format enables users to efficiently use the resources. Several projects like the MIT ventilator and various face shields have already been documented by volunteers from Delft Open Hardware (see examples here and here). TU Delft students participating in workshops on data documentation and using GitHub also contributed to the documentation of some projects.