These support pages relate to eye tracking in Task Builder 1 and may be outdated. Check out our guidance on eye tracking in Task Builder 2!
Navigate through the menu to the left for information on how to understand your eye tracking data, prepare it for analysis, and find answers to the most commonly asked questions.
Detailed instructions on how to set up your Eye Tracking Zone are available on our Task Builder Zones guide page.
When you download your data you will, as standard on Gorilla, receive one data file containing all of your task metrics for all of your participants. This includes summarised eye-tracking data: the absolute and relative time participants spent looking at each quadrant and each half of the screen. Screen quadrants are represented by the letters A, B, C and D (where A = top-left, B = top-right, C = bottom-left and D = bottom-right). For many experiments, this will be the only eye tracking data you need.
If you would like to download the full coordinate data for the eye tracking zone, you will need to select this manually in the configuration settings, under Advanced Data Collection Settings. You will then receive the full eye-tracking data in separate files, one per participant, all contained inside a zip file. You can also access these files via a unique URL for each participant, contained in your main data file - when you preview a task, this is the only way to obtain your eye-tracking data. Eye-tracking files contain a lot of raw data, so the guidance below is provided to help you understand it:
A more detailed explanation of getting and processing the data can be found below.
Additionally, on our Data Analysis page we offer a detailed walkthrough of analysing your eye tracking data with R.
How you obtain the full eye-tracking files depends on whether your Experiment is in active data collection (i.e. recruiting) or whether you are testing a constituent Task in Preview.
Data in Preview: the files are only available via the unique URL links in your main (summary) data file; open that file and follow the link for each trial.
Data in Experiment: the files are bundled with your data download as a zip file, with one file per participant; the URL links in the main data file are also still available.
Once you have your data, this section describes the relevant column variables you need in order to understand it.
The metrics output by Gorilla's Eye Tracking Zone are in long form, with each row representing WebGazer's prediction of where the participant is looking on the screen. These rows are denoted by 'prediction' in the 'type' column.
Prediction rows
For predictions, the key variables for each sample/row are the timestamp ('time_stamp'), the normalised predicted gaze coordinates ('x_pred_normalised' and 'y_pred_normalised') and the 'convergence' value, which can be used to judge prediction quality (see the example R script below).
Collection Screens and Zones
Metric files are also broken up into 'collection screens'. These represent the screens shown in Gorilla and act as the timepoints for data collection (different trials, for example).
In the 'type' column, the beginning and end of these timepoints are denoted by 'new collection screen' and 'End of Collection Screen'. Within screens you can also set up content zones, whose coordinates are recorded in the metrics before the tracking samples are collected; these can represent the locations of items created in the experiment builder. Each zone has an origin point plus a width and height, which you can use to calculate the occupancy of fixations in these zones (see the sketch in the 'Pointers for analysing data' section below).
There is also a column called 'screen_index', which gives a numerical index for the screen shown on each row; this can be used to filter predictions.
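For illustration, here is a minimal R sketch of pulling out the prediction rows and grouping them by collection screen, using the column names described above; the file name is a placeholder for one of your per-participant eye-tracking files.
# Read one per-participant eye-tracking file (placeholder file name)
data <- read.csv('participant-eyetracking.csv')
# Keep only the gaze prediction rows
preds <- data[grepl("prediction", data$type), ]
# Split the predictions into one data frame per collection screen (e.g. per trial)
preds_by_screen <- split(preds, preds$screen_index)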
Calibration/Validation files
There is a separate file for each validation and calibration, for each participant. The format of these files differs somewhat from the eyetracking collection files.
Predictions are not included here, as they cannot be made until the eye tracker has been trained/calibrated.
Rows containing 'validation' in the 'type' column have the calibration point coordinates in real and normalised format (columns: point_x, point_y, point_x_normalised, point_y_normalised). There is a row for each sample taken at each point.
Rows containing ‘accuracy’ in the ‘type’ column have the validation information.
The relevant columns are those containing the centroid information for each point.
Pointers for analysing data
To use these toolboxes you need to gather the actual data from your experiment; these files are stored as URL links in the main experiment metric spreadsheets. You will need to download them and then export them to CSV, making sure the timestamps are printed out in full.
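As a rough sketch, you could also collect these files programmatically in R. Note that 'upload_url' below is a hypothetical column name and the file extension is an assumption - substitute the details from your own metrics spreadsheet.
# Read the main experiment metrics spreadsheet (here already exported to CSV)
metrics <- read.csv('experiment-metrics.csv')
# 'upload_url' is a hypothetical column name -- replace it with whichever column
# holds the eye-tracking file links in your own spreadsheet
urls <- unique(metrics$upload_url)
urls <- urls[!is.na(urls) & urls != ""]
# Download each linked eye-tracking file; adjust the extension to match the linked
# files, and remember they may still need exporting to CSV with full timestamps
for (i in seq_along(urls)) {
  download.file(urls[i], destfile = paste0('eyetracking_', i, '.xlsx'), mode = 'wb')
}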
You can use a combination of the ‘screen_index’ and ‘type’ columns to filter data into a format usable with most eyetracking analysis toolboxes.
Using your preferred data processing tool (R, Python, Matlab etc.), filter the data to keep only the rows containing 'prediction', and then use 'screen_index' to separate each trial or timepoint of data capture.
The data produced by WebGazer and the Gorilla experiment builder work best for Area of Interest (AOI) type analyses, where we pool the samples falling into different areas of the screen and use this as an index of attention.
Due to the predictive nature of the models used for webcam eye tracking, the estimates can jump around quite a bit – this makes standard fixation and saccade detection a challenge in many datasets.
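As a sketch of this kind of AOI analysis, using the column names shown elsewhere on this page (the zone name 'Zone2' is a placeholder), you could count the proportion of prediction samples whose normalised coordinates fall inside a zone's bounding box:
# 'data' is one per-participant eye-tracking file, read in with read.csv() as above
preds <- data[grepl("prediction", data$type), ]
# Coordinates of the area of interest, taken from its zone row; take the first
# matching row in case the zone appears on several screens
zone <- data[grepl("Zone2", data$zone_name), ][1, ]
# Flag samples whose normalised prediction falls inside the zone's bounding box
inside <- preds$x_pred_normalised >= zone$zone_x_normalised &
  preds$x_pred_normalised <= zone$zone_x_normalised + zone$zone_width_normalised &
  preds$y_pred_normalised >= zone$zone_y_normalised &
  preds$y_pred_normalised <= zone$zone_y_normalised + zone$zone_height_normalised
# Proportion of samples on the zone, per collection screen (a simple index of attention)
occupancy <- tapply(inside, preds$screen_index, mean, na.rm = TRUE)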
Toolboxes for data analysis
This guide contains code for using R to analyse your eye-tracking data with the saccades package from GitHub. For information about getting and processing your eye tracking data, please consult the eye tracking metrics section above.
To use the script below, copy and paste everything in the box into the top left-hand section of your new RStudio script, then follow the instructions written in the comments of the script itself. Comments are marked with hashtags (#).
library("devtools")
install_github("tmalsburg/saccades/saccades", dependencies=TRUE)
install.packages('tidyverse')
install.packages('jpeg')
library('saccades')
library('tidyverse')
library('ggplot2')
library('jpeg')
#Load in file -- this is a single trial of freeviewing
data <- read.csv('Documents/puppy-1-2.csv')
#Drop rows that are not predictions
preds <- data[grepl("prediction", data$type),]
#Make dataframe with just time, x,y and trial columns
preds_minimal <- preds %>%
select(time_stamp, x_pred_normalised, y_pred_normalised, screen_index)
preds_minimal <- preds_minimal %>%
rename(time = time_stamp, x = x_pred_normalised, y = y_pred_normalised, trial = screen_index)
#visualise trials -- note how noisy the predictions are
#it is difficult to tell what is going on though without seeing the images
ggplot(preds_minimal, aes(x, y)) +
geom_point(size=0.2) +
coord_fixed() +
facet_wrap(~trial)
# let's align it with the stimulus we had placed
img <- readJPEG('Documents/puppy.jpg') # the image
# but we need to align it with our eye coordinate space; fortunately we have that in our 'zone' rows
zone <- data[grepl("Zone2", data$zone_name),] # Zone2 was our image zone
# we extract coordinate info
orig_x <- zone$zone_x_normalised
orig_y <- zone$zone_y_normalised
width <- zone$zone_width_normalised
height <- zone$zone_height_normalised
# now we add this image using ggplot2 annotation raster with coordinates calculated for the image
m <- ggplot(preds_minimal, aes(x, y)) +
annotation_raster(img, xmin=orig_x, xmax=orig_x+width, ymin=orig_y, ymax=orig_y+height) +
geom_point()
# If you look at the image it makes a bit more sense now
# put on some density plots for aid
m + geom_density_2d(data=preds_minimal)
# But this is not all we can do -- let's try extracting some fixation data!
#Detect fixations
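# Note: detect.fixations() from the saccades package expects a data frame with
# columns named time, x, y and trial -- which is why we renamed the columns above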
fixations <- subset(detect.fixations(preds_minimal), event=="fixation")
#Visualise diagnostics for fixations -- again note the noise
diagnostic.plot(preds_minimal, fixations)
# plot the fixations onto our ggplot, with lines between them
m + geom_point(data=fixations, colour="red") + geom_path(data=fixations, colour="red")
# As you can see, this is pretty rough and ready, but it hopefully gives you an idea of how you can visualise eye tracking data
# You could filter the data using a convergence threshold, or use this value to throw out trials
preds <- preds[preds$convergence <= 10, ]
# Note: re-create preds_minimal from the filtered preds (repeat the select/rename steps above) before re-plotting
# After re-running the plotting functions, look at the difference -- you should be able to see more fixations on the image
# But if the data are generally bad for a given participant, you may need to exclude them
# Unfortunately this is an unavoidable issue with online eye tracking data at this time
# The best way to increase data quality is to give clear instructions on how to set up the camera, and to repeat validation and calibration frequently
Pressing CTRL+ENTER will run the line of code you are currently on and move you onto the next line. Making your way through the code using CTRL+ENTER will allow you to see how the dataset gradually takes shape after every line of code. CTRL+ENTER will also run any highlighted code, so if you want to run the whole script together, highlight it all and press CTRL+ENTER. You can also press CTRL+ALT+R to run the entire script without highlighting anything.
We will be adding more guides for data transformation using R soon. For more information about Gorilla please consult our support page which contains guides on metrics.
Where can I access the raw eye tracking data?
The raw data is only available if you enable this feature in the zone's advanced options - see Metrics above.
I have enabled advanced options and still can't find the raw data
If you are piloting the task using preview, you will have to go into the summary metric file and find the links for raw data of each trial. If you are collecting data in a study, this data will be bundled with your download.
Can I test children?
You can; however, there are two main issues: 1) the calibration stage requires the participant to look at a series of coloured dots, which would be a challenge with young children, and 2) getting children to keep their head still will be more difficult. If the child is old enough to follow the calibration it should work, but you will want to check your data carefully and you may want to limit the time you use eye tracking for.
The example R script produces an error with my data
We provide the R script as an example of how you might investigate raw data from the eye tracking zone; we are not able to provide support for running your analysis beyond Gorilla's platform. The example is intentionally minimal and focuses on one trial, so it is not sufficient for a whole study. We suggest you look at the different packages described above and follow some tutorials on them before running the analysis yourself.
The calibrate button is greyed out in my task
The zone only allows you to calibrate the tracker once it has detected a face in the webcam. You may need to move hair off the eyes, come closer to the camera, or move around.
Can I detect fixations, saccades or blinks?
Yes and No - but mostly No. The nature of webgazer.js means that predictions will be a function of how well the eyes are detected and how good the calibration is. Inaccuracies in these can come from any number of sources (e.g. lighting, webcam, screen size, participant behaviour).
The poorer the predictions, the more random noise they include, and this stochasticity prevents standard approaches to detecting fixations, blinks and saccades. One option is to use spatio-temporal smoothing (a minimal sketch is given at the end of this answer) -- but you need to know how to implement this yourself.
In our experience, fewer than 30% of your participants will give good enough data to detect these things.
You will get the best results by using a heatmap or a percentage-occupancy-of-a-region type of analysis. If you are interested in knowing more, have a look at this Twitter thread.
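For illustration only, here is a minimal sketch of one possible spatio-temporal smooth - a rolling median over the normalised coordinates within each trial - applied to the preds_minimal data frame built in the R guide above. The window size is arbitrary and this is a starting point rather than a recommended pipeline.
# A simple spatio-temporal smooth: rolling median of x and y within each trial
# (a 5-sample window is arbitrary -- tune it to your sampling rate and stimuli)
library(dplyr)
preds_smoothed <- preds_minimal %>%
  group_by(trial) %>%
  arrange(time, .by_group = TRUE) %>%
  mutate(x = stats::runmed(x, k = 5, endrule = "median"),
         y = stats::runmed(y, k = 5, endrule = "median")) %>%
  ungroup()
# The smoothed data frame can then be passed to detect.fixations() as before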
What do the normalised predicted coordinates mean?
We've created a mock-up image which should make this clearer (note: image not to scale). To work out the normalised X, we need to take into account the white space on the side of the 4:3 area in which Gorilla studies are presented.
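As a rough illustration of the idea (a sketch based on that description, not an official formula): if the 4:3 task area is centred horizontally on the screen, you can recover the normalised X from a raw horizontal pixel coordinate by subtracting the white-space margin and dividing by the width of the 4:3 area. The variable names below are assumptions.
# Sketch: convert a raw horizontal screen coordinate to a normalised X value,
# assuming the 4:3 task area is centred horizontally on the screen
normalise_x <- function(x_px, screen_width, screen_height) {
  frame_width <- screen_height * 4 / 3        # width of the 4:3 task area
  margin <- (screen_width - frame_width) / 2  # white space on each side
  (x_px - margin) / frame_width               # 0 to 1 inside the task area
}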
Can you provide support with data analysis?
We’ve created some materials to help you analyse your eye tracking data, which you can find on this eye tracking support page. If you have a specific question about your data, you can get in touch with our support desk, but unfortunately we’re not able to provide extensive support for eye tracking data analysis. If you want to analyse the full coordinate eye tracking data, you should ensure you have the resources to conduct your analysis before you run your full experiment.
Are there any studies published using the eye tracking zone?
We are so far aware of three published studies using the Eye Tracking zone - please let us know if you have published or are writing up a manuscript!
Lira Calabrich, S., Oppenheim, G., & Jones, M. (2021). Episodic memory cues in the acquisition of novel visual-phonological associations: a webcam-based eyetracking study. In Proceedings of the Annual Meeting of the Cognitive Science Society (Vol. 43, pp. 2719-2725). https://escholarship.org/uc/item/76b3c54t
Greenaway, A. M., Nasuto, S., Ho, A., & Hwang, F. (2021). Is home-based webcam eye-tracking with older adults living with and without Alzheimer's disease feasible? Presented at ASSETS '21: The 23rd International ACM SIGACCESS Conference on Computers and Accessibility. https://doi.org/10.1145/3441852.3476565
Prystauka, Y., Altmann, G. T. M., & Rothman, J. (2023). Online eye tracking and real-time sentence processing: On opportunities and efficacy for capturing psycholinguistic effects of different magnitudes and diversity. Behavior Research Methods. https://doi.org/10.3758/s13428-023-02176-4