This walkthrough offers best-practice guidance on creating and launching an experiment in Gorilla.
You may need ethical approval from your Institutional Review Board or Ethics Committee to run your experiment. If so, you will find a useful template on our Ethics Applications page.
The first thing to do is create the different parts of your study: your individual Questionnaires and Tasks (where you can record response times, accuracy, and other metrics).
Even a simple experiment is likely to have three parts: for example, a consent questionnaire, a demographics questionnaire, and the main task itself.
As you create these parts, use the preview and play functionalities a lot! These will help you get everything working exactly as you want. In particular, make sure you've tested your experiment thoroughly across all the browsers and devices that you intend to support. We have created a helpful tutorial on previewing Tasks and Questionnaires on different devices (iPad, iPhone, etc.).
If you use media files throughout your experiment, check out our Technical Checklist to ensure your audio, video, and image files will be displayed correctly.
Once the individual questionnaires and tasks of your experiment are working, make sure you preview each of them and look at the data. Scrutinising the data will help you be confident that Gorilla is collecting the metrics that you need. For example, check that you can identify each condition and each dependent variable in your task.
When you commit (save) versions in Gorilla, you can write a commit message. Use this to write a note to your future self about what is working and what is left to be done. This helps when you return to building the task or questionnaire after a break. Your future self will thank you!
Learn about Questionnaire Builder 2
Learn about Task Builder 2
Learn about the Experiment Tree
Once you have created the individual tasks and questionnaires, put them together in the experiment tree! Simply add new nodes to your tree and link them together between the start and finish nodes to map out the journey your participants take through your experiment.
Once you have created your experiment, you can experience the whole thing almost as a participant would by previewing it. You can then download the data for each questionnaire/task individually at the end. We recommend you do this at least once before piloting your study! Take your time looking over your data output, making sure you understand the data columns, and consult our troubleshooting guide if your data looks wrong.
After previewing your experiment, you may find you need to make changes to one of the Questionnaires or Tasks. Once you have made these changes you’ll need to commit them and then update the corresponding Node in your experiment tree to the latest version.
If you are running a longitudinal or multi-part study, you can do this all within one experiment tree using Delay nodes or Redirect nodes. You can preview experiments easily with Delay Nodes by setting a small delay of 1 or 2 minutes, and then once you are happy that everything works you can increase the Delay ready for piloting.
Learn about the Experiment Tree and how to navigate the different tabs.
We strongly advocate the use of Checkpoint Nodes throughout your experiment. Place them at major steps in your experiment to monitor participant progress and aid in data analysis.
We recommend you place Checkpoint Nodes after consent and demographics questionnaires and at the beginning of new experimental branches from Branch or Randomiser Nodes. These nodes will be invaluable in assessing your participants' progress through your experiment, as well as in identifying any potential problems in your experimental design. They are also a great help when it comes to analysing your data.
If you have a limited number of participant tokens, Checkpoint nodes will allow you to clearly identify participants who have not sufficiently progressed through your experiment so you can reject them confidently. Conversely, they'll allow you to identify participants who have progressed through enough of the experiment to merit including them and collecting their data.
This video shows you how to use the Checkpoint Node to monitor participant progress and determine whether to reject or include participants.
Length (mins): 6:41
Before you launch your experiment, you should collect some pilot data, for example from friends or colleagues, so that you can get some feedback. We suggest using the Pilot Recruitment Policy. Because you are now running an experiment and collecting real data, this process will consume tokens.
If you want to test your experiment live without collecting data, you can use a Reject node to avoid consuming tokens. However, we would recommend running at least a small pilot (see below) where you do collect data so that you can see what it looks like and how you might analyse it.
The Pilot Recruitment policy requires participants to type in some text as an ID. You can create IDs that help you remember what functionality you are testing. For example, Test_1 or Testing_Branch_1.
The pilot ID can also be useful when you want feedback from your whole lab. You can send out the link and each person can use their name. We use this recruitment policy internally at Gorilla to test experiments, too!
Remember to set a Recruitment Target.
Learn more about the different Recruitment Policies.
You should also complete a small pilot with some real participants. Let's imagine you're using Facebook to recruit participants. We'd recommend initially launching to just a few participants (5 to 10) so that they can raise any questions. Don't forget to change the Recruitment Policy!
You may want to include an additional feedback questionnaire at the end of your experiment. Check out our sample: Generic Pilot Feedback Questionnaire
Once you've finished piloting and taken the feedback on board you can then remove this questionnaire from your Experiment Tree.
Once your colleagues or friends have tested your experiment, download the data files from the Data Tab and check that you have everything you need to run your analysis. We have guides on how to understand your data, or what to do if your data doesn't look as you expect.
Now that you've got pilot data, it’s time to run through how you are going to do your data pre-processing and data analysis. This can also be a good time to define exclusion criteria so that you end up with high quality data.
Running through your data analysis workflow gives you an opportunity to add metadata and checkpoints, and to make corrections - without using up a lot of tokens or losing potential participants (and their data!).
Data pre-processing is the process of taking data from Gorilla and manipulating it into the right format so that it can be analysed in your software of choice (such as SPSS, RStudio, or JASP). This usually involves combining data files and calculating summary data for each participant.
Gorilla can provide questionnaire data in short form or long form. Task data is always provided in long form as only you know how you want to process your data and what summary statistics you want to calculate, such as which measure of central tendency you need (e.g. the mean or median), how different conditions are defined in your study, and which information you want to aggregate (e.g. accuracy, response times, or slider ratings to name a few).
We cannot aggregate data for you as National Institute for Health and Care Research (NIHR) and British Psychological Society (BPS) guidelines are to keep demographic and performance data separate.
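As a concrete illustration, the per-participant aggregation described above might be sketched in pandas as follows. The column names (`Participant Private ID`, `Correct`, `Reaction Time`) and the toy data here are assumptions for illustration only; check them against your own data export.

```python
import io

import pandas as pd

# Toy long-form data standing in for a Gorilla task export.
# Column names are assumptions; verify against your own CSV.
raw = io.StringIO("""Participant Private ID,condition,Correct,Reaction Time
101,congruent,1,512
101,incongruent,0,733
102,congruent,1,498
102,incongruent,1,701
""")
trials = pd.read_csv(raw)

# Collapse one-row-per-trial data into one summary row
# per participant and condition.
summary = (
    trials
    .groupby(["Participant Private ID", "condition"])
    .agg(accuracy=("Correct", "mean"),
         mean_rt=("Reaction Time", "mean"))
    .reset_index()
)
print(summary)
```

From here, `summary` could be written out (e.g. with `summary.to_csv(...)`) for analysis in SPSS, RStudio, or JASP. Swap in the median, a different grouping, or other columns as your design requires.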
If you're happy that your pilot ran smoothly and that your data output contains everything you need, then you're ready to launch your experiment for real! Congratulations!
Select a Recruitment Policy that meets your study's requirements. You may wish to Crowdsource your participants by sticking a Simple Link on Facebook or other social media channels. If you already have a list of participants the Email ID or Email Shot may be the right recruitment policies for you.
You can browse our full list of available Recruitment Policies.
We have also created a short video that walks you through the process of launching your experiment:
You could also consider adding specific Requirements to your experiment, for example to limit the devices, browsers, or connection speeds of the participants you recruit.
Participant attrition - the rate at which participants drop out of your experiment for any reason - is a factor to consider in any experiment, whether that is a lab study or an online study.
One of the great benefits of conducting research online is the increased scale and reach: the wide availability of diverse participants and access to otherwise 'hard-to-reach' populations. The result: participant sample sizes in the 1000s, rather than the 10s or 100s, are now achievable!
However, the upshot of it being easier to join an experiment is that it is also easier to leave one. In other words, you should expect the attrition rate for online experiments to be higher than for the same experiment conducted in a lab. For ethical reasons, participants have the right to withdraw at any time, and there is often no way of knowing why someone has stopped.
When you set a recruitment target, both your Complete and Live participants will contribute towards your target. So, if you have a recruitment target of 20, 8 Completes and 12 Live, Gorilla will mark your experiment as FULL and prevent further participants from joining your study. This is because they might all still complete the experiment. However, some participants will leave your study without completing and will remain 'Live'. While they are 'Live' they have reserved a token that you can't yet use on another participant. Consequently, it is important to regularly reject participants who have dropped out. You can find some information about participant status in our guide.
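The arithmetic behind the FULL status can be sketched as below. This is a simplified model of the behaviour described above, not Gorilla's actual implementation:

```python
def slots_remaining(target: int, complete: int, live: int) -> int:
    """Both Complete and Live participants count towards the target."""
    return max(target - complete - live, 0)

# The example above: target of 20, with 8 Complete and 12 Live.
print(slots_remaining(20, 8, 12))  # 0 -> the experiment shows as FULL

# Rejecting 5 stalled Live participants frees their tokens for new joiners.
print(slots_remaining(20, 8, 7))   # 5
```

This is why regularly rejecting dropped-out Live participants matters: each one you reject releases a slot (and its token) back to your study.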
There are two ways you can reject participants who have dropped out.
For a participant to be marked as Complete by Gorilla, they must have completed the last task and reached the 'Finish' Node in your Experiment Tree. Nevertheless, you may be willing to use the data of a participant who has completed most, but not all, of the experiment.
Whether or not you include participants who have dropped out during your experiment is up to you. We suggest including a Checkpoint Node at the point at which you are happy to still include their data. A checkpoint node will allow you to identify the participants that you want to manually include on the Participants page.
Often the fastest way of getting participants for your study is to work with a Recruitment Service that provides them.
Recruitment Services take care of both finding participants and paying them. In return, they take a commission for this work.
We highly recommend Prolific.com; they specialise in recruitment for behavioural scientists. We also have a full list of integrated recruitment services, but you can also integrate with other third-party recruitment services yourself, such as market research agencies.
Market research agencies exist all over the world and in nearly every jurisdiction. They are more expensive than recruitment services like MTurk and Prolific, and their participants are used to questionnaires about products, but this can be a good option if you need participants fast.
There are a number of challenges to manage when using a recruitment service:
Participating in a Gorilla study via a recruitment service should be a seamless experience for the participant. Gorilla has been designed to reduce participants' 'barriers to entry' and protect participant anonymity: Gorilla does NOT require participants to sign up for a Gorilla account, nor are participants required to download anything to their computer in order to run your study.
While a participant may be familiar with taking studies through a particular recruitment service, they may not have taken part in a Gorilla study before. Mentioning that they don't need to sign up or download anything can improve uptake of your study!
When using a Recruitment Service, you are using two paid-for services that must both protect you from overspending: Gorilla, and the recruitment service. Consequently, you’ll need to set the number of participants you want to recruit in both services. In Gorilla you do this by setting the Recruitment Target to be the total number of Participants that you want to recruit in your study. You can find out more about Assigning Tokens from our Pricing FAQ page.
Problems can occur when the two different systems get ‘out of balance’. For example, Gorilla may think you have finished participant recruitment while the recruitment service may still send participants to Gorilla. The result is a frustrating experience for a participant as they will encounter a message saying 'this experiment is not currently available'.
How can this happen? A recruitment service may keep track of participant attrition in a way that Gorilla doesn't (and vice versa).
Continue reading below to learn how to avoid this and stay on top of participant attrition!
When you set a specific Recruitment Target in Gorilla (in addition to the recruitment target set in the recruitment service itself), you will need to make sure you keep the number of recruited participants on both websites in check.
Another way to alleviate this issue, if you don't wish to manually monitor your participants, is to over-allocate participant tokens to your study. For example, if you want 200 complete participants and you are expecting ~30% attrition, then assign 300 tokens to your experiment.
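If you prefer a formula, the over-allocation above works out as follows. This is a rough sketch; your expected attrition rate is only ever an estimate, so rounding up generously does no harm:

```python
import math

def tokens_to_assign(target_completes: int, expected_attrition: float) -> int:
    """Tokens needed so that, after the expected dropout rate,
    roughly `target_completes` participants still finish."""
    return math.ceil(target_completes / (1 - expected_attrition))

# 200 desired completes at ~30% attrition:
print(tokens_to_assign(200, 0.30))  # 286; rounding up to 300 adds extra headroom
```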
You can also set a maximum time limit for a participant to complete a study. If they have not completed within this time, they are automatically rejected. If your study takes 20 minutes on average, you could set this to 30 minutes (and risk rejecting slower participants), or to 2 or even 24 hours if you wanted to be more generous.
Microsoft Azure guarantees that our servers will be working 99.95% of the time, but they can still go down. See our Server Downtime page for more details.
If you are paying a lot for your recruited participants, or recruiting a hard-to-reach demographic, then low participant attrition may be a crucial factor in both your experimental design and recruitment phase.
In these cases we highly recommend launching experiments in small enough batches that you can afford to lose every participant that is currently active.
To do this, set the Recruitment Target to 20 (for example) and once these are all complete, update the Recruitment Target to 40. Continue adding batches of 20 until you reach your total.
In parallel, do the same with your recruitment service. Start with a smaller batch of 20 participants, and once that data is in, release a further 20. This way you can protect yourself from the cost of participant attrition.
Some participant recruitment services give you the option of limiting how many participants can take part simultaneously.