
HOW TO: Experiment Tree

  • Overview
  • What is the Experiment Tree?
  • How do I Start a New Experiment?
  • How do I Use the Experiment Tree?
  • Design Tab
  • How do I Use the Design Tab?
  • How do I Build an Experiment?
  • What are Nodes?
  • How do I Add Tasks and Questionnaires to an Experiment?
  • Updating Your Nodes
  • What are Core Nodes?
  • What are Control Nodes?
  • Recruitment Tab
  • How do I Recruit Participants?
  • Can I Set a Time Limit?
  • Can I Restrict who Enters my Experiment?
  • Can I Make Changes after Recruitment?
  • Participants Tab
  • Participants
  • Data Tab
  • Data: Consort Data
  • Data: Downloading my Metrics
  • Advanced Techniques
  • Embedded Data
  • Randomisation
  • Longitudinal Studies
  • Launching your Experiment
  • Experiments: From Creation to Launch
  • Troubleshooting
  • Troubleshooting
  • Bot checks

Welcome to the Experiment 'How To' Guide

Here you can learn about the basic features of building Experiments in Gorilla by exploring the list of questions on the left.

Not sure where to start? Try one of these quick-start shortcuts:

Looking for more information on a specific Experiment Tree Node? Check out the Experiment Tree Node Tooling Reference Guide.

For examples of incorporating Questionnaires, Tasks, and Experiment Tree Nodes into complex real-life experiments, check out Gorilla Academy!

If you can't find an answer to your question here please get in touch with us via our contact form. We are always happy to help you, simply tell us a little about what you are trying to achieve and where you are getting stuck.

What is the Experiment Tree?

In Gorilla you create Experiments using the Experiment Tree.

Gorilla uses a graphical drag-and-drop interface to represent your Experiments, which take the form of a tree or flowchart.

You create Experiments by combining your Questionnaire and Task components as 'Nodes', which you link together to form your experiment tree.

A simple experiment may consist of a consent Questionnaire, a demographics Questionnaire and a test Task.

For more advanced experiments, there are also powerful Control Nodes, such as the Randomiser Node, Branch Node, and Order Node, that support complex experimental designs, all without touching a single line of code!


How do I Start a New Experiment?

A new Gorilla Experiment can be created within a Project by pressing the 'Create' button and selecting 'Experiment' from the dropdown menu.

In the create menu that appears, enter a name for your new Experiment and then press 'OK'.

You will then be redirected to the Design Tab for your newly created Experiment.

You can learn more about the Experiment Builder interface here.

Create Dropdown Menu: Selecting Experiment from the Create Menu dropdown. The selection is highlighted by the blue box in the image above.

Create Menu: With the 'Create New' option selected, enter a name for your new Experiment and press OK.
Pro Tip

When choosing a name for your Experiment, try to make it something unique and memorable - a name you would easily associate with the Experiment contents.

You will use this name to identify your Experiment in your project. It is also the name people will see if you collaborate on or share your Experiment with someone, so it's important that they can recognise it easily too!

Add descriptions to your Experiment via the 'description' option in the Settings menu. This description will then be visible from the project overview screen. You can use this feature to add a short reminder of what your Experiment is about or leave a progress message to yourself or collaborators.

How do I Use the Experiment Tree?

The Experiment Tree interface is divided into four major sections, each found in a separate Tab:

  • Design: This is where you build your experiment. Find out more here.
  • Recruitment: This is where you set your recruitment options. Find out more here.
  • Participants: This is where you view the status of participants you have recruited. Find out more here.
  • Data: This is where you download your experimental data collected from your participants. Find out more here.

Each major section represents a stage in your overall experimental design. Usually you will progress through these sections one after another, from left to right.

When you first enter an Experiment you will be presented with the Design Tab as is shown in the image below. The Design Tab is where you create your experimental design.

From this page you can navigate to any of the tabs for your experiment.

Image below shows the Design Tab of the Experiment Tree, with an example of a simple experiment:


To learn how to use the Experiment Tree to design and build a simple Experiment, like the one in the example above, click here.

How do I Use the Design Tab?

In Gorilla you build your Experiments in the Design Tab of the Experiment Tree:

Image below shows the Design Tab of the Experiment Tree:

Annotated Design tab showing the interface described below

Understanding the design tab interface:

  1. Name of your Experiment and Description: you can change these in the settings under Name and Description
  2. Settings & Preview Experiment
  3. Version Bar: Version History, current experiment status: either Edit, or Cancel Changes/Commit Version
  4. Design Bar: Add New Node, New Node Icons
  5. Utilities menu: Update All Nodes, Check for Errors
  6. Design Space

How do I Build an Experiment in Gorilla?

In Gorilla you build your Experiments in the Design Tab of the Experiment Tree:

When you create a new experiment for the first time, you'll notice that, unlike in the Questionnaire and Task Builders, the Experiment Tree already contains two Nodes: a Start Node and a Finish Node.

When building a new experiment, the first step is to add some Nodes. Here's how:

  1. Click the Edit button.
  2. Click the Add New Node button, found on the left just above the building area.
  3. A New Node menu will appear.
  4. Select the type of Node you wish to add into your Experiment.
  5. Click 'OK'.


Alternatively, you can add a Node directly from the Design Bar:

  1. Click the Edit button.
  2. Find the icon of the Node you want to use on the Design Bar.
  3. Click the icon.

Once you have added at least one Node, you can clone that Node (and its settings) by clicking the Clone icon in the bottom left-hand corner of the Node.

Image below shows the Add New Node Menu:


Image below shows the Icons available on the Design bar:


Image below shows a Node with the Clone Icon highlighted:


There are currently 15 different Experiment Tree Nodes to choose from, allowing you to present and/or gather data from your participants in a variety of different ways.

What are Nodes?

Experiment Tree Nodes are the building blocks of experiment creation. Building a Longitudinal study or creating a training study? Sophisticated experimental designs are now seconds away!

There are currently 15 different Experiment Tree Nodes for you to choose from, allowing you to perform randomisation, branching and counterbalancing without touching a line of Code! Simply choose from our available Experiment Tree Nodes, drag and drop them into your experiment Tree, and link them together along with your Task and Questionnaire Nodes.

You can create any experimental design you wish by simply combining Experiment Tree Nodes in different ways. Learn how to add Experiment Tree Nodes into your Experiment Tree design here.

Broadly speaking, the Experiment Tree Nodes fall into three categories: Study Nodes, Core Nodes and Control Nodes. Below are the Experiment Tree Nodes you will find in Gorilla's Experiment Builder. Click on an individual Node to view its dedicated Tooling Reference Guide page:

Study Nodes:

  • Task Node
  • Questionnaire Node

Core Nodes:

  • Start Node
  • Finish Node
  • Reject Node
  • Redirect Node
  • Checkpoint Node

Control Nodes:

  • Delay Node
  • Quota Node
  • Repeat Node
  • Switch Node
  • Randomiser Node
  • Branch Node
  • Order Node
  • Counterbalance Node

You can find out more detailed information about each Experiment Tree Node, and how to set them up, in the Tooling Reference Guide.

How do I Add Tasks and Questionnaires to an Experiment?


The Task Node is blue, with an icon in the top left corner. It has a single connection point.


Double-clicking on the Node will open the Node modal screen shown below.


It includes the standard Save and Remove buttons, as well as Preview and Options. Additionally, if your Task has any manipulations available, these can be set from here as well.


The Questionnaire Node is green, with an icon in the top left corner. It has a single connection point.


Double-clicking on the Node will open the Node modal screen shown below.


It includes the standard Save and Remove buttons, as well as Preview and Options. Additionally, there is the choice to randomise the elements of the Questionnaire: it can be fully randomised, or have all elements but the first randomised. The latter is particularly useful if the first element in the Questionnaire is a Markdown item providing the instructions to the participant.

Note: If you make changes to your Task or Experiment after you have added it to the Experiment Tree, you will need to update the Task/Questionnaire Node to the latest version. See the next page to learn how.

Updating Your Nodes

It is very important to keep your nodes up-to-date so that participants will take part in the latest version of your experiment. The nodes in your Experiment Tree do not update automatically - they need to be updated by the researcher after you commit a new version of your Task/Questionnaire in the Task or Questionnaire Builder.

To update individual nodes to the latest version, click on the Node, click 'Options' in the bottom left-hand corner, then 'Update to latest version'. If your Task/Questionnaire is not the latest version, an orange warning triangle will appear next to the 'Options' button.

Alternatively, you could update all nodes quickly by clicking on Utilities -> Update All Nodes in the top right corner of your Experiment Tree Design Tab.

For a worked example of how to update the Nodes, visit our Troubleshooting guide.

What are Core Nodes?

Core Nodes are structural elements of your experiment. This includes Start Nodes, Finish Nodes, Reject Nodes, Checkpoint Nodes, and Redirect Nodes.

These primarily control how participants enter and exit your experiment.

Reject Nodes allow you to reject participants who are not suitable for your experiment, or who withdraw their participation. Checkpoint nodes allow you to monitor how far along a participant is in your experiment, which can be useful for longitudinal studies or for managing attrition.

Core Nodes can be added from the design bar in the same way as Task and Questionnaire Nodes, and have Node modal screens that may require configuration.

One Start Node and one Finish Node are automatically added to any new Experiment. However, it is possible to have multiple Start or multiple Finish Nodes. For more information about this, see the Start and Finish Node pages in the Tooling Reference Guide.


To learn more about each Node and how to set them up, see the Experiment Tree Nodes Tooling Reference Guide.

What are Control Nodes?

Control Nodes allow you to manipulate the path of participants through your experiment.

Some Control Nodes, such as the Repeat Node and Switch Node, affect a single path of participants. These Nodes allow you to, for example, ensure participants repeat a task, or are able to switch between tasks.

Other Control Nodes, such as Branch Nodes, Randomiser Nodes and Counterbalance Nodes, allow you to divide participants into different conditions. Different participants can then be shown different tasks/questionnaires, or different versions of the same task/questionnaire.

Control Nodes can be added from the design bar in the same way as Task and Questionnaire Nodes, and have Node modal screens that may require configuration.


To learn more about each Node, and how to set them up, see the Experiment Tree Nodes Tooling Reference Guide.

How do I Recruit Participants?

Gorilla does not recruit participants for you. However, you can link an external recruitment service to your Gorilla Experiment, create a link to distribute, or invite participants you already know. The Recruitment section is where you configure the method by which participants will access your experiment, optionally restrict the devices, browsers or locations they can take part from, and control how many participants you wish to recruit.


Recruitment Policy

The recruitment policy you choose determines how participants will access your experiment. There are several options here: a simple link that you can put on your website or post to social media, uploading a CSV of email addresses and inviting them all to take part, or interfacing with other recruitment systems such as Prolific.co, MTurk, or SONA.

Click here for a full list of recruitment policies.

Recruitment Target

The recruitment target is the number of participants you wish to recruit. All participants who are either currently live on your experiment, or marked as included on your participants page, will count towards this total.

You must set the recruitment target to a specific number. This will assign the appropriate number of tokens from your account to the experiment.

To unassign tokens from an experiment and return them to your account, you can reduce the recruitment target. Note that any tokens from participants who are currently live on the experiment or have already completed the experiment cannot be unassigned.

For more on what happens to tokens as participants move through your experiment, see our Participant Status and Tokens guide.

Pro Tip

Only the Project Owner can change the recruitment target settings. Only the Project Owner's tokens can be assigned to the experiment. Collaborators are not able to interact with these settings, so cannot contribute any of their own tokens or use their unlimited subscription (if they have one).

Can I Set a Time Limit?

Time Limit, found on your experiment recruitment page, allows you to automatically reject participants who do not complete your experiment, or who take longer to complete it than is considered reasonable.

Once a Time Limit is set (in hours and minutes), participants who reach the Time Limit will be automatically rejected, but will be allowed to finish their current task before being redirected to the Finish Node.


You may wish to set a Time Limit because ‘Live’ participants reserve tokens, contributing towards your recruitment target. This means that participants who drop out without finishing your experiment can prevent more participants from entering your experiment until they are rejected.

Whilst you can reject participants manually, this requires monitoring your recruitment progress closely. Instead, you may choose to set a Time Limit to automate this process.

We suggest setting a Time Limit that is far longer than it could reasonably take to complete your experiment. For example, if your experiment should take 15 minutes to complete, you might set your Time Limit at 2 hours.

For this reason, we do not recommend using Time Limits for longitudinal studies. In a longitudinal study, the reasons for taking a long time to complete a study are much more numerous, which makes the padding you'd want to give the time limit excessively large and hard to estimate. When you can see your attrition and rejection numbers, you may wish to revise your Time Limit, and would then have to manually include the participants you’d automatically rejected.

Additionally, depending on your ethics approval and recruitment service, you will likely still have to pay participants who complete only the first part of your study, so you may wish to make use of their data.

Note: When using the Time Limit with recruitment services that offer a similar Time Limit, make sure that your Gorilla experiment Time Limit matches the time limit set in the recruitment service.

Can I Restrict who Enters my Experiment?

You can optionally restrict your participants by device type, connection speed, browsers or geographic location. Any participants not meeting these criteria will be shown an error message.

To find out more about setting requirements, check out our Experiment Requirements guide.

Image: Requirements section of the Experiment Recruitment tab.

Above you can see a close up of the Requirements section of the Experiment Recruitment tab. If you have set any requirements, icons representing your requirements will appear under the headings.

Image: Requirements menu with the device type limits open.

Image above shows the Requirements menu that appears when you click 'Change Requirements'.

I've Launched my Experiment, Can I Still Make Changes?

Can I change recruitment policies at any time?

It is possible to change a recruitment policy at any time. However, switching between policies that do or don't require public IDs can cause disruption to any current participants. For example, if participants were originally sent a simple link and the recruitment policy is subsequently changed to require a public ID or login, those simple links will no longer work. Consider only changing the recruitment policy once a trial of the experiment has run successfully, or send out updated invites to existing participants.

How can I change the requirements of my task?

By default, participants can perform an experiment on any device from anywhere in the world. If necessary, it is possible to restrict the circumstances under which a participant can perform an experiment. These requirements consist of: limiting device types to phones, tablets and/or computers; limiting to a geographical location via a 2-letter country code; limiting the browser used to Chrome, Safari, Edge, Firefox and/or Internet Explorer; and limiting to a minimum connection speed. Any participant who doesn't meet these criteria will be shown a default page explaining why they cannot proceed. If they log in later and do meet the criteria (e.g. because they have switched from their phone to their tablet), they will be able to proceed as normal.

Can I edit other parts of my experiment after I've started collecting data?

Yes, you can edit your experiment whilst data is being collected, and commit any changes as a new version. Your current participants will not be interrupted, and will not see the new changes, as they will remain in the experiment version that they entered. It's not possible to make changes to the experiment version that live participants have already entered.



The Participants Tab

The Participants screen allows you to observe and manage the participants who have been invited to, are registered on, or have completed your experiment.

  • If you are using a Simple Link or Pilot Recruitment policy, this list of participants will be populated as people first log into the task and will indicate their progress.
  • If you are using the Email Shot, Email ID or Supervised Recruitment policies, the pre-prepared list of participants, email addresses or public IDs will appear in 'Participants' before the participant has logged in for the first time. If necessary, there will be an option on this page to 'activate' the participant, which will send them the initial recruitment email and any login details.

Participants can also be rejected, included or deleted from this page. Explore our Participant Status and Tokens guide to learn what the different Participant Statuses mean, how they affect your tokens, and how you can change them when required.

The Data Tab: Consort Data

The Data Tab now includes information about the state of participants at each Node. This means you can see where participants have dropped out or been rejected, and gain detailed attrition data.

For example, suppose a participant has been through three Nodes before being rejected. The participant will be shown as entering and exiting those three Nodes, then entering the Node at which they were rejected, and shown as rejected at that Node.


  • The number of participants who have entered the Node
  • The number of participants who are still live
  • The number of participants who were rejected
  • The number of participants who were deleted
  • The number of participants who have exited the Node

Here we have 17 participants entering the node and 16 exiting, with one remaining live. It may be that the participant has left the experiment at this point, and we may wish to manually reject them. This will set the number of ‘Live’ participants to 0, and the number of rejected participants to 1.

Note: When your Experiment contains an Order Node, the consort data refers to the Node position rather than the Node itself. For example, if a Flanker Task and then a Thatcher Task are connected to an Order Node, the consort data for the Flanker Node will refer to the first task participants saw (whether that was the Flanker or the Thatcher Task), rather than the attrition data for the Flanker Task itself.

In the Data Tab of the Experiment Builder, you can download a CSV file which contains all the above information for each node in your experiment in an Attrition Report. Click the 'Download CONSORT Data' button to download this report:

A screenshot of the Data Tab in the Experiment Builder. An arrow points to the red 'Download CONSORT Data' button on the right.

The Data Tab: How do I Download my Metrics?

Your experiment Data page allows you to download data from the various Task and Questionnaire Nodes of your experiment in the form of a metrics spreadsheet.

In compliance with BPS (British Psychological Society) and NIHR (National Institute for Health Research) guidance, we store data from each Node separately. This way, demographics data and performance data are always kept separate.

Data is presented in long format, with one row per event. There is an option, however, to download Questionnaire data in short format, with one row per participant.

Note: Script widget data will not be displayed in short form. If you have used a script widget in your questionnaire, download the long-form version.

Below, you can see the Data Tab of the Experiment Tree:

Datatab to generate and download participant data

To download data from all Nodes:

  1. From the Data Tab, click ‘Download Experiment Data’. This will open the data-download menu.
  2. Select your preferred Filetype.
  3. Select the Timeframe you wish to collect data from.
  4. Click ‘Generate Data’.
  5. Click ‘Download Data’.

To download data from one Node:

  1. From the Data Tab, click on any Node in your Experiment Tree to open its individual data-download menu.
  2. Select your preferred Filetype.
  3. Select the Timeframe you wish to collect data from.
  4. Click ‘Generate Data’.
  5. Click ‘Download Data’.

Below you can see the data-download menu:

Download menu with filetype, blinding, timeframe and form.

For more information on data format and analysis, take a look at the How To: Metrics Guide.

Embedded Data Walkthrough

Embedded data is information collected about a participant's responses that can be used to alter the experiment (in real time) depending on those responses. Essentially, embedded data is information you can 'carry' from one part of your task or questionnaire to others within the same experiment.

Here are some examples of when to use embedded data:

  • You may want to show participants their scores at the end of an experiment.
  • You may want participants to take different routes through your Experiment Tree depending upon what answers they give in your questionnaire.
  • You could end an experiment early if participants' scores do not match criteria needed for your experiment.

Learn how you can manipulate your experiment using Embedded Data through our Embedded Data Guide.

Randomisation and Attrition in Gorilla

Randomisation is an important aspect of many experiments as it reduces bias that could impact the outcome of the study. We created some excellent documentation pages to walk you through randomisation in Gorilla.

What follows is a description of some of the complexities surrounding randomisation in Gorilla.

In short, there are two types of randomisation: with replacement and without replacement. These differ in how they allocate participants to different conditions and it’s important to understand these differences.

There are also various ways that Gorilla can handle attrition!


The best way to think about random with replacement is that it is like a coin toss, or dice roll. Each coin toss is independent of what has happened before.

If we have a randomiser going to two groups, it's like tossing a coin for each participant. On average, we would expect a 50:50 ratio of heads to tails, but you could get runs of all heads by sheer (bad) luck. We are unlikely to end up with a ratio of exactly 50:50, but it will probably be close, and the larger the sample, the closer the ratio will get.

This is called WITH REPLACEMENT because each time a participant is assigned a group, Heads or Tails, all options are still available for the next participant. Each allocation is independent of all previous allocations.
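The coin-toss behaviour is easy to simulate. Here is a minimal sketch in Python (an illustration of the same idea, not Gorilla's implementation):

```python
import random

def randomise_with_replacement(n_participants, groups=("A", "B")):
    # Each allocation is an independent draw: all groups are always
    # available, so the realised ratio is only approximately equal.
    return [random.choice(groups) for _ in range(n_participants)]

allocations = randomise_with_replacement(12)
print(allocations.count("A"), allocations.count("B"))  # close to 6:6, but runs happen
```

Run it a few times and you will see the counts wobble around 6:6, occasionally drifting quite far from it, exactly as the coin-toss analogy predicts.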


The best way to think about the balanced randomiser is like a deck of cards. Each draw is dependent on what has been drawn before.

The colour of the card (red or black) determines which branch participants go down. Balanced with a 2:2 ratio means we have 4 cards: 2 black and 2 red. Cards are handed out in a random order (e.g. BBRR). In each lot of 4 cards, we will have the 2:2 ratio exactly. For the next four participants, the process is repeated. So over 12 participants we would get 3 lots of 4 cards, with 6 red and 6 black. For instance, the three lots could be: BRBR, then RRBB, then RBBR. Importantly, consider the first lot: once we've had BRB, the last card has to be R. There is no chance of it being anything else.

A 2:2 ratio is different from a 1:1 ratio. With 1:1 there are only 2 cards, 1 red and 1 black, so there are only two possible orders (BR and RB). So over 12 participants we might get BR, RB, BR, RB, BR, BR.

A 10:10 ratio means we have 20 cards, 10 red and 10 black. We could (by chance) have a run of 10 red followed by 10 black, then 10 black and 10 red. In that unlikely event, the only points at which we have a balanced set of participants are at 20 and 40 Ns. For example, if we only had 15 Ns, the first 10 would be allocated to red, leaving us with only 5 allocated to black.

With a 2:2 ratio, one group can only get 2 ahead of the other group. If you have already sampled 100 participants equally into black and red and you only want to sample two more, it may be that these last two both get red cards (RRBB), leading to 50 black and 52 red.

With 1:1 ratio this goes down to 1. If you have already sampled 100 participants equally into black and red and you only want to sample two more, one will be black and one red.

With 10:10 ratio, this goes up to 10. This is important because a larger ratio increases your chance of unequal conditions if you end recruitment early.

This is called RANDOM WITHOUT REPLACEMENT because in each lot of participants (2, 4 or 20 in the examples above), cards are dealt out without replacing the cards taken by previous participants. At the end of each lot we will have dealt participants the exact ratio we set. Allocations are dependent on all previous allocations.
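The deck-of-cards behaviour can be sketched the same way: deal shuffled lots whose composition matches the ratio exactly (again, a simple illustration rather than Gorilla's code):

```python
import random

def balanced_randomiser(n_participants, ratio=(2, 2), groups=("B", "R")):
    # Build a 'deck' per lot: ratio[i] cards for groups[i], shuffled,
    # then dealt without replacement until the lot is exhausted.
    allocations = []
    while len(allocations) < n_participants:
        lot = [g for g, count in zip(groups, ratio) for _ in range(count)]
        random.shuffle(lot)
        allocations.extend(lot)
    return allocations[:n_participants]

dealt = balanced_randomiser(12, ratio=(2, 2))
print(dealt.count("B"), dealt.count("R"))  # exactly 6 and 6: three complete lots of 4
```

Stopping mid-lot is where imbalance comes from: with a (10, 10) ratio and 15 participants, the final lot is truncated, so the group counts can differ by as much as the ratio itself.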


In Gorilla the Randomiser node, Order node and Counterbalance node have no knowledge of subsequent attrition. Attrition is when a participant drops out of your experiment part way through. This means that even with balanced randomisation, you may end up with unequal groups if participants drop out. Drop-out can be caused by a range of factors: participants can get bored, have their attention called elsewhere, dislike the task, or stop participating for whatever reason.

Worked example:

  1. Imagine we have a simple between-subject experiment with two groups. The Randomiser leads to Group A and Group B via two Checkpoint nodes. We want 12 participants overall and the randomiser is set to 2:2 Balanced (without replacement).
  2. We launch the experiment.
  3. Participants come into our experiment and get assigned as follows: AABB, ABAB, BABA
  4. Our experiment is now full and participants can no longer join. Great!
  5. Participants that have already started come to the end of our experiment. The AABB and BABA lots finish, as do the two B participants from the ABAB lot. Two participants remain live, both on the A branch. It might be that these participants have got bored and wandered off.

Two scenarios can happen next.

Scenario 1: We have a Time Limit set, no manual intervention.

  1. After the appropriate time, both remaining live participants are automatically rejected.
  2. Our experiment is no longer full. New participants join.
  3. Gorilla generates a new block of participants (BBAA). Remember, the randomiser has no knowledge of subsequent attrition.
  4. The first participant starts and immediately drops out, handing back their token. That's the first B of our new BBAA lot used.
  5. The next participant who joins takes up the next B on our BBAA lot and goes down the B branch and completes the task.
  6. The next participant goes down the A branch and completes the task.

Summary: At the end we have 12 completes, 5 on the A branch and 7 on the B branch.

We might report this as: 12 participants randomised to condition (5 condition A, 7 condition B) completed the study. 14 participants were recruited overall, 2 participants dropped out of the A condition.
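The bookkeeping in Scenario 1 can be checked with a quick tally (this is just the arithmetic from the steps above, not anything Gorilla runs):

```python
from collections import Counter

# Initial 12 participants, dealt in 2:2 balanced lots
completed = Counter("AABB" "ABAB" "BABA")   # 6 A and 6 B entered
completed.subtract({"A": 2})                # the two live A participants time out
completed.update("B")                       # from the new BBAA lot: one B completes
                                            # (the first B dropped out immediately)
completed.update("A")                       # ...and one A completes
print(dict(completed))                      # {'A': 5, 'B': 7} -> 12 completes
```

The tally reproduces the summary: 12 completes, 5 on the A branch and 7 on the B branch.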

Scenario 2: We don't have a Time Limit set, manual intervention.

  1. Gorilla has sent us an email saying our experiment is full. We leave it 2 hours to allow people to complete. After this time, we are happy to manually reject Live participants, as they have probably dropped out.
  2. We look at our participants dashboard and see that we have 6 completes on the B branch and 4 completes on the A branch.
  3. We edit our experiment and change the randomiser ratio from A2:B2 to A2:B0.
  4. We reject the two live participants.
  5. Recruitment resumes, and we get two participants that are sent down the A branch by our new randomiser ratio and complete the task.

Summary: At the end we have 12 completes, 6 on the A branch and 6 on the B branch.

We report this as 12 participants were randomly assigned to groups and recruited until there were 6 participants that completed each condition.


In Scenario 2, we ended up with the ideal number of participants in each condition, but we have lost information about attrition. In Scenario 1, we end up with small differences in group size, but we keep the information about attrition. Neither is ideal, but good science is often about compromises.

Before we look at possible solutions, let's take a deeper look at attrition and why it is important.


To take an extreme example, imagine we have two groups, one with a positive mood induction (kittens and puppies) and one with a negative mood induction (spiders and snakes) followed by a probabilistic discounting task. Our hypothesis is that a poor mood makes people favour money now over money later, whereas a good mood pushes this horizon further out.

In an ideal condition, we’d recruit 50 participants to each group and they would all complete the experiment. But this is the real world, and that’s unlikely to happen. Importantly, we'd probably expect a lot more attrition in the negative mood group.

Three scenarios can happen next.

Scenario 1: We have a Time Limit set, no manual intervention.

By the time we have 100 complete datasets, our methods section might read as follows:

100 participants completed the study. They were randomised to conditions (70 in condition A, 30 in condition B). Overall, 200 participants were recruited; 10 participants dropped out of condition A and 90 dropped out of condition B.

This tells the reader that we had significant asymmetric attrition and that the negative mood induction group are probably systematically different to the positive mood induction group. They have self-selected for being less sensitive to spiders and snakes.

Scenario 2: We don't have a Timelimit set; manual intervention.

100 participants were randomly assigned to groups and recruited until there were 50 participants that completed each condition.

The asymmetric attrition information is hidden from the reader.

Scenario 3: Recruit in 2 phases, changing the randomiser ratio.

An alternative and elegant approach is to split our recruitment into 2 phases.

In phase 1 we collect 50 Ns worth of data at a 4:4 ratio. We then determine the attrition rate in each group and adjust the ratios accordingly.

In phase 2 we collect 50 Ns worth of data at the 4:8 or 4:12 ratio – whatever fits our attrition rate. By doing so we aim to get both groups to 50 completes at about the same time.
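The ratio adjustment between phases is simple arithmetic. As a minimal sketch (plain Python; the function name and example numbers are our own, not part of Gorilla), you might derive the phase-2 ratio from the phase-1 counts like this:

```python
def phase2_ratio(completed_a, assigned_a, completed_b, assigned_b, base=4):
    """Keep the better-retained branch at the base share and scale the
    other branch up in proportion to its extra attrition, so both
    branches are expected to reach the target N at about the same time."""
    rate_a = completed_a / assigned_a   # phase-1 completion rate, branch A
    rate_b = completed_b / assigned_b   # phase-1 completion rate, branch B
    if rate_a >= rate_b:
        return base, round(base * rate_a / rate_b)
    return round(base * rate_b / rate_a), base

# Phase 1: branch A kept 20 of 25 participants, branch B only 10 of 25,
# so branch B needs twice the share of new assignments.
print(phase2_ratio(20, 25, 10, 25))  # -> (4, 8)
```

The same calculation with a completion rate three times worse on one branch would give the 4:12 ratio mentioned above.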


In Gorilla we have a feature called Quota Nodes. These allow us to set a quota on a branch of our experiment. Once the quota is full, participants will no longer be sent down that branch. This gives us the effect of setting a randomiser branch to 0 once we have enough participants for that condition, without the need for manual intervention.

This feature should be used with care so that we don’t hide asymmetric attrition from ourselves or others.
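To make the quota idea concrete, here is a small sketch of the routing logic in plain Python (an illustration of the concept only; Gorilla's actual implementation and node settings will differ):

```python
def route(preferred, counts, quotas):
    """Send a participant down their assigned branch unless its quota
    is already full; otherwise fall through to a branch with room."""
    candidates = [preferred] + [b for b in quotas if b != preferred]
    for branch in candidates:
        if counts.get(branch, 0) < quotas[branch]:
            counts[branch] = counts.get(branch, 0) + 1
            return branch
    return None  # every quota is full: the participant cannot join

counts = {"A": 6, "B": 4}
quotas = {"A": 6, "B": 6}
print(route("A", counts, quotas))  # branch A is full -> falls through to "B"
```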


The Allocator Node combines Randomiser and Quota nodes and therefore allows them to be sensitive to subsequent attrition.

Going back to the original example:

  1. Imagine we have a simple between-subject experiment with two groups. The Randomiser leads to Group A and Group B via two checkpoint nodes. We want 12 participants overall and the randomiser is set to 2:2 Balanced.
  2. We launch the experiment.
  3. Participants come into our experiment and get assigned as follows: AABB, ABAB, BABA
  4. Our experiment is now full and participants can no longer join. Great!
  5. Participants start completing our experiment. The AABB and BABA blocks complete in full; from the ABAB block, only the two B participants complete. 2 remain live, both on the A branch. It might be that these participants have got bored and wandered off.
  6. After the appropriate time, both these participants are automatically rejected.
  7. Our experiment is no longer full.
  8. New participants join.
  9. Gorilla checks to see if any branch assignments have been returned due to attrition. They have (AA).
  10. These assignments are handed out to new participants until they are consumed.

The Allocator Node does exactly that: it will give us complete information on how many participants were recruited, how many completed, and how much attrition there was in each condition, before reassigning new participants to a branch. On the face of it, it’s simple. But we also need to factor in Version Control across experiments and how to manage this.
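The recycling behaviour in steps 9–10 can be sketched as follows (plain Python; an illustration of the logic, not Gorilla's implementation, and a real balanced randomiser would also shuffle each block):

```python
from collections import deque

class Allocator:
    """Hand out branch assignments in balanced blocks; when a live
    participant is rejected, their slot is returned and recycled
    before any fresh block is drawn."""

    def __init__(self, block=("A", "A", "B", "B")):
        self.block = block
        self.pending = deque()   # unused slots from the current block
        self.returned = deque()  # slots freed by rejected participants

    def assign(self):
        if self.returned:        # recycle freed slots first
            return self.returned.popleft()
        if not self.pending:     # otherwise draw a fresh balanced block
            self.pending.extend(self.block)
        return self.pending.popleft()

    def reject(self, branch):
        self.returned.append(branch)

alloc = Allocator()
first_twelve = [alloc.assign() for _ in range(12)]  # three 2:2 blocks
alloc.reject("A")  # two live A-branch participants
alloc.reject("A")  # are rejected after the timelimit
print(alloc.assign(), alloc.assign())  # -> A A (the recycled slots)
```

The key design point is that returned slots take priority over fresh blocks, which is what keeps the final group sizes balanced despite attrition.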

Longitudinal or Multi-Part Studies

If you want to run a longitudinal or multi-part study there are a few more aspects you'll need to consider, particularly when building your experiment tree and choosing a recruitment policy. It's best to read through this page thoroughly to make sure you've set everything up properly.


You can use one experiment tree for your whole study, and make good use of the Delay Node and Checkpoint Node.

  • At the end of the first session of your experiment, add in a Delay Node to prevent your participant from continuing your experiment until the specified time has elapsed. You can customise this with a message to let your participants know that the session has finished and provide instructions (if necessary) on how to return for the next session.
  • A Checkpoint Node is a useful way to keep track of participants in your experiment. For example, you can quickly identify any participants who haven't returned for the next session of your experiment and use this information to prompt them to return or even to reject them from your experiment if too much time has passed.
  • A Repeat Node allows you to repeat a task or questionnaire, or a group of different nodes, several times before the participant moves on to the next node. For example, if you want a participant to repeat a task over 3 separate days, you can nest your task node and Delay Node within the Repeat Node and edit the settings to ask Gorilla to repeat 3 times.
Pro Tip

For some recruitment platforms, such as Prolific, you can use the Redirect Node instead of the Delay Node to send participants externally to the recruitment platform at the end of each session. You can add a message to participants when they reach a Redirect Node, and also set a delay to prevent them returning too early. When you invite them back through the recruitment platform on the next day of participation, they can pick up where they left off in the experiment tree.

If you will be using Prolific for a longitudinal experiment, we recommend reading their Gorilla Integration Guide FAQ "How do I set up a longitudinal (multi-part) study on Gorilla and Prolific".


You will need a way to identify your participants when they return for the next session of your experiment, to prevent Gorilla processing them as a new participant. This will also allow participants to resume the experiment where they left off in the previous session, because Gorilla will be able to identify them when they return. Fortunately, many of our recruitment policies (as well as external recruitment providers such as Prolific) allow you to do this with ease.

This is easiest to achieve using an ID-based recruitment policy, which allows your participants to return by logging in (e.g. Supervised or Email ID) or by clicking their personalised link (e.g. Email Shot). These recruitment policies prevent participants from entering your study more than once, and allow them to continue later after completing part of the experiment. Most external participant recruitment services will assign each participant a unique ID, so Gorilla will assume that two entries into the study with the same ID are the same person and won't process them as new participants. If the participant returns with the same ID, they will continue in the experiment tree at the next node from where they left off.

Note: We don't recommend using Simple Link with a Delay Node because, by default, no reminder email will be sent to participants and they won't be able to log back in where they left off. If you do decide to use Simple Link for any reason, you must check Send Reminder and Reminder Form in the configuration settings of the Delay Node. This will require collecting participants' email addresses, so make sure you have ethical clearance for this first.

Participants and managing attrition

Longitudinal studies are likely to have higher attrition rates than studies that can be completed all in one sitting. You can monitor your participant status in the Participants tab of the Experiment Tree to track which participants have yet to return for a new session of your experiment. Doing this in combination with a Checkpoint Node (see above) makes this even easier.

Managing participants with Checkpoint node

In the above example, the Participants tab shows which Checkpoint each participant has passed through. Today, on Day 3 of the study, we expect everyone to have passed through Session 3 and be marked as "Complete", but two participants are still "Live". The first participant has passed through the "Session 1" Checkpoint and did not return for Session 2, so we might want to use this information to reject them from participating further. Another participant is due to complete Session 3 today but hasn't returned yet, so they might need a nudge to come back and finish the study before the end of the day. From the Participants screen, it's very easy to see the status of all your participants so that you can manage attrition.


From Creation to Launch Walkthrough

We've created a Walkthrough that will take you step-by-step through the journey of creating and launching your Experiments in Gorilla. There, you will find references to the support pages for all the crucial components that build your Projects.

Explore the Experiments: From Creation to Launch Walkthrough here!


For general troubleshooting advice visit this page.

If you don't find an answer to your question reach out to our friendly support team via the Contact Form - we are happy to help!

Bot checks

We understand that many users worry about bots taking part in their questionnaires, tasks, and experiments.

Here are some great articles that answer frequently asked questions about bots: the why, the how, and the where.

Bot checks already available for Gorilla

We don't see any evidence of bots on our site, but for those who want to be extra cautious, we have a collection of bot check examples on our samples page.

You can choose from a variety of pre-created tasks that can be placed in the experiment tree and act as bot checks to help ease your mind about the quality of data collected.

Anagram Task - Click the scrambled letters in the right order to make a word.

Naming Task - Name the animal in the picture!

Real Effort Number Counting - Count how many zeroes appear in a grid. This is a classic real effort task. The grid is shown as an image, to make it more bot proof.

Sentence Unscrambling Task - Click the scrambled words in the right order to make a sentence.

Visual Search - Can you find the cat among the dogs?

Rating Scale - Select 'Strongly Agree' on a Likert scale.

Click a Colour - Choose a colour from the list of words.