Can I really read your emotions if I look deep into your eyes?

Written and illustrated by Connie Clare

TU Delft Data Champion, Joost de Winter, explains the importance of replication research in the scientific field of ‘Cognitive Robotics’.    

The recent ‘Future Forward’ seminar, hosted by TU Delft Library, emphasised the importance of replication studies in scientific research. Joost de Winter, Associate Professor and Data Champion in the Faculty of Mechanical, Maritime and Materials Engineering (3mE), explained how replicating ‘cornerstone’ experiments can improve scientific validity in the field of ‘Cognitive Human-Robot Interaction’. With the era of automated driving just around the corner, de Winter’s projects support the development of automated vehicles that can adapt to the driver. His studies use virtual driving simulation to assess the effects of time pressure on a driver’s emotional state by measuring physiological parameters such as eye movement and pupil diameter (Rendón Vélez et al., 2016).

Can pupil diameter serve as an accurate index of driver workload?

It is well known that the pupils of the eye dilate or contract in response to luminance intensity, as characterised by the ‘light reflex’. However, further evidence shows that pupils can in fact respond to a variety of stimuli, including cognition, valence (i.e. positive and negative visual experience), interest value, and communication. Such wide-ranging associations of pupil dilation are attributed to the pioneering experiments of the psychologist Eckhard Hess. The highly cited work of Hess and Polt, published in Science in 1960, revealed gender-specific differences in pupil diameter in response to emotional visual stimuli in six participants. In females, pupil dilation was elicited by viewing images of a baby or a partially naked male; in males, pupil dilation was elicited by viewing images of a partially naked female. Subsequent conceptual replication studies have reported similar findings but have been criticised for small sample sizes and poor control of luminance, which could confound the pupillary response. Ongoing work by de Winter and colleagues (Dr. Dimitra Dodou, Ir. Lars Kooijman and Dr. ir. Bastiaan Petermeijer) aims to replicate five classic pupillometry experiments, collecting new data to reconcile and validate the original findings reported almost 60 years ago.

The eye-opening realities of published research

The reproducibility crisis is an ongoing problem, particularly in sociology and psychology, where many studies are difficult, if not impossible, to replicate or reproduce. As illustrated in Auguste Comte’s 19th-century representation of the ‘Hierarchy of Sciences’ (Figure 1), human behaviour is a highly context-dependent and complex phenomenon, underpinned by a host of biochemical reactions and physical relations. Studying it therefore requires robust statistical methods to ensure that reliable, reproducible research claims are published.

Figure 1. Comte’s ‘Hierarchy of Sciences’ (1854) illustrates the superior complexity of social and behavioural sciences

The probability that a research claim is true depends on statistical power (i.e. sample size and effect size), bias, and the number of other studies addressing the same question. de Winter shared ‘empirical evidence of inflated effects’ to demonstrate how replication studies can uncover methodological errors that may lead to false research claims. Common errors typically arise from scientists:

  • Conducting exploratory (i.e. discovery-orientated) rather than confirmatory (i.e. hypothesis-driven) research, in which results are preliminary and hypotheses are not clearly defined.
  • Introducing publication bias by selectively reporting:
    • positive over negative data
    • a selected subset of a larger dataset
    • non-significant trends in the data with a positive ‘spin’.
  • Incorrectly or subjectively interpreting results (e.g. the ‘Pareidolia’ phenomenon where humans perceive patterns that do not exist).

Five take-home messages for your journey towards more reproducible research

To conclude, de Winter offered his valuable advice on how researchers can plan ahead for scientific validity and the generation of better, more replicable data.

1. Do your homework.

It’s important to complete a literature review and acquire prior knowledge of the subject. This will help you arrive at an appropriate hypothesis. How probable is it that your result will be true or false? The ‘Bayesian approach’ uses statistical inference to update the probability of a hypothesis as more evidence to support a finding becomes available.
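This updating step can be sketched with Bayes’ theorem. The function name and the prior, power and alpha values below are purely illustrative (they are not from the seminar): the posterior probability that a hypothesis is true, given a significant result, combines the prior with the study’s power and its false-positive rate.

```python
# A minimal sketch of Bayesian updating with illustrative numbers.

def update_belief(prior, power=0.8, alpha=0.05):
    """Posterior probability the hypothesis is true, given a significant result.

    P(H | sig) = P(sig | H) * P(H) / [P(sig | H) * P(H) + P(sig | not H) * P(not H)],
    where P(sig | H) is the study's power and P(sig | not H) is the alpha level.
    """
    numerator = power * prior
    return numerator / (numerator + alpha * (1 - prior))

belief = 0.10                      # sceptical prior: 10% chance the effect is real
for study in range(3):             # three independent significant results
    belief = update_belief(belief)
    print(f"After study {study + 1}: P(effect is real) = {belief:.2f}")
```

Each successful replication raises the posterior, which is exactly why replication studies carry so much evidential weight.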

2. Improve your understanding of the ‘R value’.
As described by Ioannidis (2005), the R value is the ratio of the number of ‘true relationships’ to ‘no relationships’ among those tested in a field — the pre-study odds that a tested relationship is real.
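Ioannidis combines R with power and the significance level to give the positive predictive value (PPV): the chance that a claimed finding is actually true. The sketch below, with illustrative R values, shows why claims from exploratory fields (low pre-study odds) are so often false even at conventional alpha and power.

```python
# A sketch of Ioannidis' (2005) positive predictive value, PPV.

def ppv(R, alpha=0.05, power=0.8):
    """PPV = (1 - beta) * R / (R - beta * R + alpha), with beta = 1 - power."""
    beta = 1 - power
    return power * R / (R - beta * R + alpha)

# Confirmatory field: 1 true relationship per 2 tested (R = 0.5)
print(f"R = 0.5  -> PPV = {ppv(0.5):.2f}")
# Exploratory field: 1 true relationship per 100 tested (R = 0.01)
print(f"R = 0.01 -> PPV = {ppv(0.01):.2f}")
```

With R = 0.01 the PPV falls below 0.15, meaning most published ‘discoveries’ in such a field would be false.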

3. Design your methodology appropriately.
This means ensuring sufficient statistical power: use samples large enough to reliably detect the effect size you expect when testing your hypothesis. Avoid overly flexible ‘fancy’ statistical tests, as their flexibility can bias the experimental outcome.
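As a rough illustration of this planning step (not a method from the seminar), the normal approximation below estimates how many participants per group a two-sample comparison needs to detect a standardised effect size d at a given power and alpha, using only the Python standard library:

```python
# A rough sketch of a priori sample-size planning (normal approximation).

from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.8):
    """n per group ~ 2 * ((z_{1-alpha/2} + z_{power}) / d)^2."""
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) / d) ** 2)

# A small effect (d = 0.2) needs far more participants than a large one (d = 0.8),
# which is why tiny samples (like Hess and Polt's six participants) are so
# easily underpowered.
print(n_per_group(0.8))   # large effect
print(n_per_group(0.2))   # small effect
```

The approximation slightly understates the n a full t-test calculation would give, but it makes the trade-off between effect size and sample size concrete.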

4. Don’t chase significance.  
Despite the pressure to tell a good ‘P < 0.05’ story, it’s important to stay true to your results. In the words of Carl Sagan, ‘extraordinary claims require extraordinary evidence’, and in any case a significant P value does not by itself prove that the effect you claim is real.

5. Be an advocate for open science.  
It is recommended to pre-register your project with the Open Science Framework. Registering your study encourages confirmatory, hypothesis-driven research, as well as open and transparent working, since all registrations are eventually made publicly available. Moreover, pre-registration saves time and labour in the long term by making your publications much easier to write upon project completion. Depositing, storing and sharing your research output in a trusted data archive, such as 4TU.ResearchData, is another useful practice.

Funding for replication research

Carlien Hillebrink, policy advisor in the Social Sciences and Humanities Department at the Netherlands Organisation for Scientific Research (NWO), attended the ‘Future Forward’ seminar to discuss the NWO pilot programme that supports researchers conducting replication research. Hillebrink acknowledged that replication studies have been notoriously difficult to fund, since innovation is the typical assessment criterion; yet replication is an essential part of the scientific method, and ‘cornerstone’ experiments that lay the foundations for future research should be funded as a regular part of the funding instrument. Hence, NWO has fuelled the agenda as the first organisation to allocate funding specifically for reproduction and replication research. The third call for NWO replication study applications has recently closed (6th June), but watch this space for more information on how to apply for future funding.

