
From Creation to Launch

  • Overview
  • Ethics/IRB Approval
  • Create the questionnaires/tasks
  • Build your Experiment
  • Checkpoint Nodes
  • Pilot Your Experiment
  • Check Your Data
  • Launch Your Experiment
  • Attrition
  • Using Recruitment Services

Overview


This walkthrough offers guidance and advice on best practice for creating and launching an experiment in Gorilla.


Ethics/IRB Approval


You may need ethical approval from your Institutional Review Board or Ethics Committee to run your experiment. If so, you will find a useful template on our Ethics Applications page.


Create the questionnaires/tasks


The first thing to do is to create the different parts of your study: these are your individual Questionnaires and Tasks (where you can record response times, accuracy, and so on).

Even a simple experiment is likely to have three parts:

  1. A consent form
  2. A demographics questionnaire
  3. A task

As you create these parts, use the preview and play functionalities a lot! These will help you get everything working exactly as you want. In particular, make sure you've tested your experiment thoroughly across all the browsers and devices that you intend to support. We have created a helpful tutorial on previewing Tasks and Questionnaires on different devices (iPad, iPhone, etc.).

If you have media files throughout your experiment, check out our Technical Checklist to ensure your audio, video, and image files will be displayed correctly.

Once the individual questionnaires and tasks of your experiment are working, make sure you preview each of them and look at the data. Scrutinising the data will help you be confident that Gorilla is collecting the metrics that you need. For example, check that you can identify each condition and each dependent variable in your task.

When you commit (save) versions in Gorilla, you can write a commit message. Use this to write a note to your future self about what is working and what is left to be done. This helps when you return to building the task or questionnaire after a break. Your future self will thank you!

Learn about Questionnaire Builder 2

Learn about Task Builder 2

Learn about the Experiment Tree


Build your Experiment


Once you have created the individual tasks and questionnaires, put them together in the experiment tree! Simply add new nodes to your tree and link them together between the start and finish nodes to map out the journey that your participant takes through your experiment.

Once you have created your experiment, you can experience the whole experiment almost as a participant would by previewing it. You can then download the data for each questionnaire/task individually at the end. We recommend you do this at least once before piloting your study! Take your time looking over your data output, making sure you understand the data columns, and consulting our troubleshooting guide if your data looks wrong.

  • If you have Branch Nodes, make sure you test the experience for each possible response to ensure your branching is working correctly. We have a guide which explains common Branch Node mistakes and how to fix them.
  • If you are using Randomiser Nodes, the preview tool will randomly assign you to a condition. However, the preview tool does not have any memory of previous previews. Consequently, if you’ve set up a Balanced Randomiser, you’ll experience it in Random mode during preview. If you are worried about randomisation and attrition, you could consider using the Allocator Node instead.
  • If you are using Quota Nodes, the preview tool will not send you along the reject path, as the preview tool does not consume tokens, and so will not fill your Quotas.
  • If you are using an Order Node, the preview will only give you one of the possible orders!
  • If you are using a Counterbalance Node, check that your spreadsheets and manipulations are configured correctly.

After previewing your experiment, you may find you need to make changes to one of the Questionnaires or Tasks. Once you have made these changes you’ll need to commit them and then update the corresponding Node in your experiment tree to the latest version.

If you are running a longitudinal or multi-part study, you can do this all within one experiment tree using Delay Nodes or Redirect Nodes. You can preview experiments easily with Delay Nodes by setting a small delay of 1 or 2 minutes; once you are happy that everything works, you can increase the delay ready for piloting.

Learn about the Experiment Tree and how to navigate the different tabs.


Checkpoint Nodes


We strongly advocate the use of Checkpoint Nodes throughout your experiment. Place them at major steps in your experiment to monitor participant progress and aid in data analysis.

We recommend you place Checkpoint Nodes after consent and demographics questionnaires and at the beginning of new experimental branches from Branch or Randomiser Nodes. These nodes will be invaluable in assessing your participants' progress through your experiment, as well as in identifying any potential problems in your experimental design. They are also a great help when it comes to analysing your data.

If you have a limited number of participant tokens, Checkpoint nodes will allow you to clearly identify participants who have not sufficiently progressed through your experiment so you can reject them confidently. Conversely, they'll allow you to identify participants who have progressed through enough of the experiment to merit including them and collecting their data.


This video shows you how to use the Checkpoint Node to monitor participant progress and determine whether to reject or include participants.

Experiment Tree Checkpoint Node

Length (mins): 6:41


Pilot Your Experiment


Before you launch your experiment, you should collect some pilot data, for example from friends or colleagues, so that you can get some feedback. We suggest using the Pilot Recruitment Policy. Because you are now running an experiment and collecting real data, this process will consume tokens.

If you want to test your experiment live without collecting data, you can use a Reject node to avoid consuming tokens. However, we would recommend running at least a small pilot (see below) where you do collect data so that you can see what it looks like and how you might analyse it.

The Pilot Recruitment policy requires participants to type in some text as an ID. You can create IDs that help you remember what functionality you are testing. For example, Test_1 or Testing_Branch_1.

The pilot ID can also be useful when you want feedback from your whole lab. You can send out the link and each person can use their name. We use this recruitment policy internally at Gorilla to test experiments, too!

Remember to set a Recruitment Target.

Learn more about the different Recruitment Policies.

You should also complete a small pilot with some real participants. Let's imagine you're using Facebook to recruit participants. We'd recommend initially launching with just a few participants (5 to 10) to allow participants to raise any questions. Don't forget to change the Recruitment Policy!

You may want to include an additional feedback questionnaire at the end of your experiment. Check out our sample: Generic Pilot Feedback Questionnaire

Once you've finished piloting and taken the feedback on board you can then remove this questionnaire from your Experiment Tree.

Learn more about your Data: how to understand your data and find out what each column means.


Check Your Data


Once your colleagues or friends have tested your experiment, download the data files from the Data Tab and check that you have everything you need to run your analysis. We have guides on how to understand your data, or what to do if your data doesn't look as you expect.

Check your data workflow

Now that you've got pilot data, it's time to run through how you are going to do your data pre-processing and data analysis. This can also be a good time to define exclusion criteria so that you end up with high-quality data.

Running through your data analysis workflow gives you an opportunity to add metadata and checkpoints, and to make corrections - without using up a lot of tokens and losing potential participants (and their data!).
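
As a rough illustration, here is a minimal sketch in R of what exclusion criteria might look like once you have per-participant summaries. The column names and thresholds here are hypothetical - substitute the variables and cut-offs that suit your own design:

    library(dplyr)

    # Hypothetical per-participant summaries (see the pre-processing sketch below)
    summaries <- data.frame(
      participant_id = c("p1", "p2", "p3"),
      mean_accuracy  = c(0.95, 0.52, 0.88),
      median_rt_ms   = c(640, 210, 890)
    )

    # Example exclusion rules: near-chance accuracy or implausibly fast responses
    included <- summaries %>%
      filter(mean_accuracy >= 0.60,  # assumed chance level plus a margin
             median_rt_ms  >= 250)   # assumed minimum plausible reaction time

Writing your rules down as code like this makes your exclusion criteria explicit and repeatable when the full dataset arrives.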

Data pre-processing:

Data pre-processing is the process of taking data from Gorilla and manipulating it into the right format so that it can be analysed in your software of choice (such as SPSS, RStudio, or JASP). This usually involves combining data files and calculating summary data for each participant.

Gorilla can provide questionnaire data in short form or long form. Task data is always provided in long form, as only you know how you want to process your data and what summary statistics you want to calculate: which measure of central tendency you need (e.g. the mean or median), how different conditions are defined in your study, and which information you want to aggregate (e.g. accuracy, response times, or slider ratings, to name a few).

We cannot aggregate data for you as National Institute for Health and Care Research (NIHR) and British Psychological Society (BPS) guidelines are to keep demographic and performance data separate.
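
To make the long-to-short transformation concrete, here is a minimal sketch in R using dplyr and tidyr. The file paths and column names are assumptions for illustration, not Gorilla's actual export schema - check your own data download for the real names:

    library(dplyr)
    library(tidyr)

    # Combine all task data files downloaded from the Data tab
    # (assumes they sit in a local "data" folder as CSV files)
    files <- list.files("data", pattern = "\\.csv$", full.names = TRUE)
    long_data <- bind_rows(lapply(files, read.csv))

    # Aggregate to one row per participant per condition...
    short_data <- long_data %>%
      group_by(participant_id, condition) %>%
      summarise(mean_accuracy = mean(correct),
                median_rt     = median(reaction_time),
                .groups = "drop") %>%
      # ...then spread conditions into columns (short form)
      pivot_wider(names_from  = condition,
                  values_from = c(mean_accuracy, median_rt))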

Check out our useful tutorials about using Excel and RStudio, including an R script which will combine your data files and transform long-form data to short-form.

Launch Your Experiment


If you're happy that the following two points are true, then you're ready to launch your experiment for real! Congratulations!

  • Your experiment is working smoothly for real participants.
  • You are collecting all the data you need for analysis from real participants.

Select a Recruitment Policy that meets your study's requirements. You may wish to Crowdsource your participants by sticking a Simple Link on Facebook or other social media channels. If you already have a list of participants the Email ID or Email Shot may be the right recruitment policies for you.

Alternatively many researchers choose to use Third Party Recruitment Services. You can find out more about using recruitment services here.

You can browse our full list of available Recruitment Policies.

We have also created a short video that walks you through the process of launching your experiment.


You could also consider adding specific Requirements to your experiment, for example to limit the devices, browsers, or connection speeds of the participants you recruit.


Attrition


Participant attrition - the rate at which participants drop out of your experiment for any reason - is a factor to consider in any experiment, whether a lab study or an online study.

One of the great benefits of conducting research online is the increased scale and reach: the wide availability of diverse participants and access to otherwise 'hard-to-reach' populations. The result: participant sample sizes in the 1000s, rather than the 10s or 100s, are now achievable!

However, the upshot of it being easier to join an experiment is that it is also easier to leave one. In other words, you should expect the attrition rate for online experiments to be higher than for the same experiment conducted in a lab. Participants have the right to withdraw at any time for ethical reasons, and there is usually no way of knowing why someone has stopped.

When you set a recruitment target, both your Complete and Live participants count towards it. So, if you have a recruitment target of 20 and currently have 8 Complete and 12 Live participants, Gorilla will mark your experiment as FULL and prevent further participants from joining your study, because all 12 Live participants might still complete the experiment. However, some participants will leave your study without completing it and will remain 'Live'. While they are 'Live', they have reserved a token that you can't yet use on another participant. Consequently, it is important to regularly reject participants who have dropped out. You can find some information about participant status in our guide.
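
Here is a small R sketch of that accounting, using the numbers from the example above ('reject' here means the manual rejection described below):

    # Recruitment-target accounting, as described above
    target   <- 20
    complete <- 8
    live     <- 12

    complete + live >= target   # TRUE: Gorilla marks the study FULL

    # Manually rejecting, say, 5 Live participants who have dropped out
    # returns their tokens to the pool and opens slots for new participants
    live <- live - 5
    target - (complete + live)  # 5 slots now open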

There are two ways you can reject participants who have dropped out.

  • Automatically by setting an Experiment Time Limit: The easiest way to reject participants who have likely dropped out is by setting a Time Limit. This option can be found in the Requirements section of your experiment, under 'Recruitment'. Participants who take longer than the time limit will be automatically rejected (note: The Time Limit feature is not recommended for longitudinal studies). If you're recruiting via Prolific, make sure that your Gorilla experiment Time Limit matches the time limit set in Prolific.
  • Manual Rejection: Alternatively, you can manually reject Live participants at any point to return the token. For experiments that are completed in one sitting, a good rule of thumb is to reject participants who started over 24 hours ago.

For a participant to be marked as Completed by Gorilla, they must have completed the last task and reached the 'Finish' Node in your Experiment Tree. Nevertheless, you may be willing to still use the data if a participant has completed most - but not all - of the experiment.

Whether or not you include participants who have dropped out during your experiment is up to you. We suggest including a Checkpoint Node at the point at which you are happy to still include their data. A checkpoint node will allow you to identify the participants that you want to manually include on the Participants page.


Using Recruitment Services


Often the fastest way of getting participants for your study is to work with a Recruitment Service that provides them.

Recruitment Services take care of both finding participants and paying them. For this work, they take a commission.

We highly recommend Prolific.com; they specialise in recruitment for behavioural scientists. We also have a full list of integrated recruitment services, but you can also integrate with other third-party recruitment services yourself, such as market research agencies.

Market Research agencies exist all over the world and in nearly every jurisdiction. They are more expensive than recruitment services like MTurk and Prolific, and their participants are more used to questionnaires about products, but they can be a good option if you need participants fast.

There are a number of challenges to manage when using a recruitment service:

  1. Fully informing your participants how to interact with Gorilla and 'complete' your study.
  2. Setting Participant Recruitment Numbers in both Gorilla and the recruitment service.
  3. Keeping on top of participant attrition.
  4. Taking account of server downtime - limiting live participants.

Informing your participants about Gorilla

Participating in a Gorilla study via a recruitment service should be a seamless experience for the participant. Gorilla has been designed to reduce participant 'barriers to entry' and protect participant anonymity; Gorilla does NOT require participants to sign up for a Gorilla account. Nor are participants required to download anything to their computer in order to run your study.

While a participant may be familiar with taking studies through a particular recruitment service, they may not have taken part in a Gorilla study before. Mentioning that they don't need to sign up or download anything can improve uptake of your study!


Setting Participant recruitment numbers

When using a Recruitment Service, you are using two paid-for services, both of which must protect you from overspending: Gorilla, and the recruitment service. Consequently, you'll need to set the number of participants you want to recruit in both services. In Gorilla, you do this by setting the Recruitment Target to the total number of participants that you want to recruit in your study. You can find out more about Assigning Tokens from our Pricing FAQ page.

Problems can occur when the two different systems get ‘out of balance’. For example, Gorilla may think you have finished participant recruitment while the recruitment service may still send participants to Gorilla. The result is a frustrating experience for a participant as they will encounter a message saying 'this experiment is not currently available'.

How can this happen? A recruitment service may have a way of keeping track of participant attrition that we don’t have in Gorilla (and vice-versa):

Scenario 1:

  1. A participant clicks on a link to your study in a recruitment service. The recruitment service logs them as an active participant.
  2. They click the Gorilla Start button. At this point, in Gorilla, the participant reserves a participant token.
  3. The participant consents and comes to a screening questionnaire, which they fail to pass.
  4. They are sent to a Gorilla Reject Node. At this point in Gorilla, the participant token is returned to the pool, and the participant is not counted towards the Recruitment Target.
  5. Depending upon how you have set up your Reject Node, the participant may or may not be redirected back to the recruitment service, so the recruitment service may or may not update its count of recruited participants. It's therefore possible that the recruitment service still believes they are an active participant.

Scenario 2:

  1. A participant clicks on a link to your study in a recruitment service. The recruitment service logs them as an active participant.
  2. They click the Gorilla Start button. At this point, in Gorilla, the participant reserves a token.
  3. The participant consents and they start the task but, for whatever reason, they decide to drop out. The participant goes back to the recruitment service to tell them they've dropped out of your experiment.
  4. At this point, Gorilla may not be told by the recruitment service that the participant has withdrawn. As such, the participant's status will remain 'Live', their token is still reserved, and the researcher must manually reject this participant.

Continue reading below to learn how to avoid this and stay on top of participant attrition!


Keeping on top of participant attrition

When you set a specific Recruitment Target in Gorilla (in addition to the recruitment target set in the recruitment service itself) you will need to make sure you keep the number of recruited participants on both websites in check:

  1. Be sure to read our Participant Tokens Guide so you fully understand when and where tokens will be reserved, spent, and returned in Gorilla.
  2. Be aware of how participants are considered 'recruited' within your chosen recruitment service. For example, participants commonly need to return to the service's site and/or submit a completion code to be considered finished.
  3. Keep a close eye on participants who have started your study and reserved a token in Gorilla, but who have since dropped out (their status remains 'Live').
  4. You may need to manually reject participants who look to have dropped out in order to return their 'reserved' token to the pool.

Another way you can alleviate this issue, if you don't wish to manually monitor your participants, is to over-allocate participant tokens to your study. For example, if you want 200 complete participants and you are expecting ~30% attrition, then assign 300 tokens to your experiment.
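
A quick back-of-the-envelope calculation for this, sketched in R (the 30% attrition rate is just the example figure above - substitute your own estimate):

    # Tokens needed so that `target` participants are expected to complete
    target    <- 200   # completes you want
    attrition <- 0.30  # expected dropout rate
    ceiling(target / (1 - attrition))  # 286; rounding up to 300 adds extra headroom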

You can also set a maximum time limit for a participant to complete a study. If they have not completed within this time, they are automatically rejected. If your study takes 20 minutes on average, you could set this to 30 minutes (and risk rejecting slower participants), or to 2 or even 24 hours if you want to be more generous.


Recruit in batches

Microsoft Azure guarantees that our servers will be working 99.95% of the time, but they can still go down. See our Server Downtime page for more details.

If you are paying a lot for your recruited participants or recruiting a hard-to-reach demographic, then minimising participant attrition may be a crucial factor in both your experimental design and recruitment phase.

In these cases we highly recommend launching experiments in small enough batches that you can afford to lose every participant that is currently active.

To do this, set the Recruitment Target to 20 (for example) and once these are all complete, update the Recruitment Target to 40. Continue adding batches of 20 until you reach your total.

In parallel, do the same with your recruitment service. Start with a smaller batch of 20 participants, and once that data is in, release a further 20. This way you can protect yourself from the cost of participant attrition.

Some participant recruitment services give you the option of limiting how many participants can take part simultaneously.