Lab #2: Word Recognition


Posted Monday November 9th 2020. Due Wednesday December 2nd.

These instructions are in progress, as we update to a more internet-friendly platform. 


In the previous lab in this course you ran pre-prepared experiments. In this lab you will create a pair of simple experiments that have not been prepared for you in advance. The studies are relatively simple, but you will have more direct control over all stages of the creation of the experiments. 

The objective of the lab is to design, implement and run two Lexical Decision experiments. The goal of the experiments is (i) to verify the existence of frequency effects in accessing lexical items, and (ii) to test for (semantic) priming effects in lexical access. Things that you should learn from the lab include:

  • Researching and designing an experiment using a paradigm that is (possibly) novel to you
  • Analyzing the parts of an experiment in sufficient detail to implement it in a computer program
  • Creating stimuli for lexical access studies
  • Carrying out preliminary analysis of simple reaction-time datasets
  • Learning to use widely used experimental control software for online data collection

The Experiments

The Lexical Decision Task is one of the simplest and most widely used reaction time tasks in psycholinguistics. In the simplest visual version of this task, participants read words and non-words that appear one at a time in the middle of a computer screen. The participant’s task is simply to respond “yes” as quickly as possible if a real word is presented (e.g. TABLE, DOG, ACRONYM), or to respond “no” as quickly as possible if a sequence of letters is presented that is not a word (e.g. GLUB, FENDLE, DERILIOUS). The Lexical Decision task has been used to demonstrate a variety of properties of lexical access, including frequency sensitivity and many kinds of priming effects. Your task in this lab is to design, implement and run two simple lexical decision studies, with the goal of testing:

  • Frequency sensitivity in lexical access (in other words, the fact that more frequent words are accessed faster than less frequent words).
  • Priming of associated words. Semantic priming is the most straightforward option, but other kinds of relations are possible.

You are encouraged to work in groups on this project. Since we do not have regular access to experimental participants on campus, and we are spread across many time zones, we will need to gather participants via the internet. Most likely we will recruit paid participants via the Amazon Mechanical Turk (“MTurk”) platform. This limits the number of studies that we can pay for, and increases the need for collaboration. 

Please submit your own copy of the lab report, but you are encouraged to collaborate closely on all stages of the project. Figuring things out together can be very useful, as long as it doesn’t mean that you fail to learn how to do something yourself.

Some Potentially Useful Resources

You can implement the experiment using any software package that you prefer, but we recommend using PsychoPy, a modern, flexible, cross-platform tool. You should aim to test and analyze results from at least 5 people. Beyond this, the details of the design of the studies are left largely to you. Although this leaves the choice of design relatively open, it does not mean that ‘anything goes’; rather, it means that you are responsible for doing some research into what would be an appropriate way to design the experiment.


Our default platform for running these experiments is “PC Ibex”, i.e., the PennController version of the Ibex software originally developed by UMD graduate student Alex Drummond in the late 2000s. PC Ibex was developed by Jeremy Zehr and Florian Schwarz at the University of Pennsylvania.

Getting started with PC Ibex

PC Ibex is a web-based experiment platform. Everything is done online.

To get started, open the PC Ibex Farm via the big blue button on the PC Ibex home page, click the Design Experiments button, and either log in or create a new account.

Click on the “+” button to create a new experiment. Choose a name for the experiment that is self-explanatory, and that contains no spaces. This will show you what a project window looks like. Open main.js to get an idea of what an experiment script looks like. There’s no need to do anything with this right away. 

Group vs. individual. PC Ibex is designed around individual accounts, but you are working in groups for this project. You will need to have access to a shared account in order to work together on experiment implementation. But for learning your way around the platform it may be easier to create your own account that you can work with.   

Tutorial. We recommend working through the PC Ibex Tutorial first. You can also follow it with the help of a tutorial video. You do not need to work through all of the modules in the Tutorial, but if you carefully follow the first few modules (e.g., parts 1-5), you should have a good idea of how PC Ibex works in general. Sections 8 (“Trial templates and tables”) and 9 (“Timers and randomization”) will certainly be relevant to your experiment.

There is a general Reference section of the PC Ibex documentation. This will be much easier to understand if you have worked through some of the examples in the tutorial. The Text element section and the Canvas element sections are likely to be useful for your lexical decision experiments.

Setting up your experiment

These instructions are in progress …

Your experiment(s) will have a series of main components:

  1. Consent form. Participants must give informed consent to participate in your study. This study is already approved by the University of Maryland Institutional Review Board (IRB), which governs research with human participants.
  2. Screening questions. If we want to include screening questions to determine that the participant is a native speaker of English, that should happen first, before the rest of the study. [We are currently checking on the status of existing screeners. We will only include this if the benefits outweigh the challenges of including this.]
  3. Introduction, instructions, practice. Participants should understand what they are going to do before the start of the study. This should at least include instructions. It is generally useful to include a few practice trials.
  4. Experiment trials. This is the main part of the experiment. Each trial presents either a single word (frequency experiment) or a prime-target pair of words (priming experiment) for lexical decision.
  5. Thank you / end. It’s always good to be polite and thankful. You also need to be sure that your data is successfully saved before the participant is finished.

Consent form

You will need to use an approved version of a consent form. You can save the consent form linked here as consent.html and then upload it to the Resources section of your project.

In the example script below you can see a section of PC Ibex script (following “// Consent form”) that you can use to present the consent form and ensure that answers are collected for the questions included in that form.

Screening for non-native speakers and bots

With the opportunity to make money comes the opportunity for deception. One cost of efficient, remote data collection is that you do not get to interact with your participants, who might not have the language skills that you need (e.g., native-speaker fluency in English), or might not even be human. Bots that try to complete online experiments are a real concern.

NATIVE SPEAKERS: To probe for native English abilities we routinely take two simple steps. (i) Restrict a study to participants inside the US. This is simple, though it is also a blunt tool and one that it is easy to get around. (ii) Screen participants through a simple quiz that checks English judgments that should be easy for native speakers but should be difficult even for advanced non-native speakers. MTurk participants can be pre-qualified for UMD Language studies if they have already taken and passed this test.

BOTS: The simpler the experimental task, the greater the risk that bots will try to take your experiment. If you are recording speech from participants, then it is very difficult for a bot to fool you. If you are collecting simple button-press responses, then it is much easier for a bot to get in. Various simple measures can help to filter out bots. These can be added at the start of your experiment. E.g., show a picture and ask for a text description of it (the bot probably can’t do visual image analysis). Another simple strategy is to use a really easy memory task, as the bots don’t remember things. E.g., remember a word and type it into a box on the next page.
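As a sketch, the memory-based bot check described above might look like the following in PC Ibex. The trial labels and element names here are invented for illustration, and the script only runs inside the PC Ibex platform.

```javascript
// Hypothetical bot check: show a word, then ask for it on the next page.
newTrial("botcheck_show",
    newText("memory_word", "Please remember this word: GARDEN")
        .print()
    ,
    newButton("next", "Continue")
        .print()
        .wait()
);

newTrial("botcheck_ask",
    newText("prompt", "What was the word on the previous page?")
        .print()
    ,
    newTextInput("botcheck_answer")
        .print()
        .log()  // record the typed answer so you can screen participants later
    ,
    newButton("done", "Continue")
        .print()
        .wait()
);
```

A human will pass this trivially; a simple bot pressing random buttons will not, and you can exclude its data when you inspect the logged answers.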

Introduction, Instructions, Practice

Humans are complicated. And distractible. How you interact with them makes a big difference to the data that you will collect. This is at least as important for online experiments as it is for face-to-face experiments. In an online experiment, the participant knows nothing about you except what they see on their computer. They likely start with preconceptions about you, based on their beliefs about researchers in general, and their experience of other online experiments.

If your study is a COVID-19 vaccine trial, then it is likely that participants sign up because they care about the success of your research. If your study is a psycholinguistics study, then it is highly unlikely that participants sign up because they care about your research. They are motivated by making money, as painlessly as possible. Anything that you can do to make the experience of your experiment less painful works in your favor.

Your goals in introducing your study are: (i) ensure that the participants understand what to do, (ii) ensure that participants are motivated to do what you want them to do. If they do not think that the study is serious, then they will not take it seriously. Details like clearly written, neatly formatted instructions make a difference to this.

For a lexical decision experiment, the instructions can be simple. Participants should know what they are expected to do. They do not need to know why you care or what you are testing.

Practice items are not always needed, but they definitely help participants to be more confident. They typically lead to better experimental data.

Sometimes you might want to give feedback (e.g., “Correct!”) during practice trials, even if that does not happen during the main experiment.

It is very useful for participants to know how long the experiment will last and/or how many trials they will complete. It is VERY disconcerting to be a participant in an experiment when you have no idea how much longer you have to continue. For experiments of any significant length, it is also valuable to take breaks and to know how far through the experiment you are.

In PC Ibex the practice trials can simply be a set of trials that are run before the main experiment. You will probably draw the items from a separate table. You might choose not to log the data from the practice trials.
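For example, practice trials could be built from their own table and run before the main block using PC Ibex’s Sequence command. This is a sketch that assumes a table named practice.csv and trial labels like “practice” and “experiment”; it only runs inside the PC Ibex platform.

```javascript
// Practice trials drawn from a separate table (assumed to be named "practice.csv")
Template("practice.csv", variable =>
    newTrial("practice",
        newText("word", variable.word)
            .print()
        ,
        newKey("answer", "FJ")
            .wait()
    )
);

// Fixed overall order; trials within each labeled block are shuffled
Sequence("consent", "instructions", randomize("practice"), randomize("experiment"), "end");
```

Practice trials can then be identified by their label in the results file and dropped during analysis.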

Experiment Trials

The basic structure of a lexical decision trial is relatively straightforward: display a string of letters that remains until a yes/no judgment is recorded by a button press.

You should ensure that trials are presented in randomized order. The PC Ibex tutorial has a section that focuses on randomization.

In the frequency experiment, the randomization should choose a single word from your items table to present on each trial. Things are a little more complicated in the priming experiment, where you want to randomly select pairs of words, but you want to present the members of each pair in a strict order, i.e., prime then target. To do this, you will need to have separate prime and target columns in your items table. Your experiment trials should consist of two lexical decision events. The first event presents the prime word for lexical decision. The second event presents the target word for lexical decision. Then you start a new trial. If you set this up so that the interval between prime and target is the same as the interval between a target and the next prime, then participants will be unaware that words are being presented to them in pairs. From their perspective, they are simply seeing a sequence of individual words.
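One way to sketch such a priming trial in PC Ibex is shown below, assuming an items table with prime, target, and corrAns columns. The labels and the 500 ms interval are placeholders for illustration; the script only runs inside the PC Ibex platform.

```javascript
Template( variable =>
    newTrial("priming",
        // First lexical decision event: the prime
        newText("prime", variable.prime)
            .print()
        ,
        newKey("primeAnswer", "FJ")
            .log()
            .wait()
        ,
        getText("prime")
            .remove()
        ,
        // Keep this interval equal to the gap between trials, so that
        // participants cannot tell that words are presented in pairs
        newTimer("isi", 500)
            .start()
            .wait()
        ,
        // Second lexical decision event: the target
        newText("target", variable.target)
            .print()
        ,
        newKey("targetAnswer", "FJ")
            .log()
            .wait()
        ,
        getText("target")
            .remove()
    )
    .log("prime", variable.prime)
    .log("target", variable.target)
    .log("corrAns", variable.corrAns)
);
```

The trial-level .log calls add the prime, target, and correct answer as columns to every line of the results file for that trial, which makes the analysis much easier later.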

You should also ensure that the data file contains sufficient information to identify your key experimental conditions, e.g., high/mid/low frequency words.

Two Experiments

You are running two experiments, a simple lexical decision experiment that tests for word frequency effects, and a priming study. A key question is how to present this to online participants.

If you were conducting the experiments in a lab, this would be straightforward. You would create two separate experiment scripts, because it is easy to control them separately. You would likely have all participants do both experiments, because people who are making a visit to a lab prefer to come for a long enough session that they can earn a meaningful amount of money.

These considerations apply differently when you are running experiments online. You must keep the participants’ interaction with the experiment(s) as simple as possible, to ensure accurate data collection and saving. Having a single URL for each experimental session is preferable. On the other hand, participants are managing their time differently when they are working online. Some may even prefer shorter experiments that they can squeeze into gaps in their life.

Our default recommendation for this project is that you create a single experiment script that contains both of your experiments, with a short break in between the two experiments. From the participants’ perspective there need not be any difference between the two experiments. As far as they are concerned, they are doing exactly the same task in both experiments. They do not need to know that you have carefully controlled the order of words to induce priming effects in one of the experiments. In fact, they could just as easily believe that they are doing two sessions of the same experiment.
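As a sketch, the overall ordering for a single combined script could look like this in PC Ibex. All of the labels are invented for illustration, and the script only runs inside the PC Ibex platform.

```javascript
// A self-paced break between the two experiments
newTrial("break",
    newText("break_text", "Take a short break. Press the spacebar when you are ready to continue.")
        .print()
    ,
    newKey(" ")
        .wait()
);

Sequence(
    "consent",
    "instructions",
    randomize("practice"),
    randomize("frequency_experiment"),
    "break",
    randomize("priming_experiment"),
    "end"
);
```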

A possible alternative approach is to have two experiment scripts, but with a link to the second at the end of the first experiment. This has the advantage of making your scripts simpler and segregating the data from your two experiments. But it introduces one more point where a participant might make a misstep.

Testing the Experiments

One of the worst experiences for an experimenter is to spend a lot of time or money collecting data for an experiment, only to find that participants did not understand the task, or the data was not recorded properly.

To avoid this, it is important to test out your experiment and your data collection before you run your experiment in full. You can test it on yourself. You can ask a friend to test it. Or you can test it on “real” participants, just not very many of them. This is called a Pilot Study, and it is often one of the most valuable phases of an experiment.

An important part of testing the experiment is ensuring that the data files that you generate contain the information that you need.

Sample Code

The following is sample code that implements a very basic lexical decision experiment. You can copy and paste it into a script window.

PennController.ResetPrefix(null); // Initiates PennController

// Assumes an items table with columns: fixation, word, corrAns

// Consent form
newTrial("consent",
    newHtml("consent", "consent.html")
        .print()
    ,
    newButton("agree", "I agree to participate in this study")
        .print()
        .wait(
            getHtml("consent").test.complete()
                .failure( getHtml("consent").warn() )
        )
).setOption("hideProgressBar", true);

// Introduction
newTrial("instructions",
    newText("introduction", "In this experiment, you will see strings of letters and you will be tasked with deciding whether they are real words.")
        .print()
    ,
    newText("wait", "When you are ready, press the spacebar to continue")
        .print()
    ,
    newKey(" ")
        .wait()
).setOption("hideProgressBar", true);

// Experiment trials: one trial per row of the items table
Template( variable =>
    newTrial("experiment",
        newCanvas("trial_background", 720, 540)
            .print()
        ,
        newText("fixation", variable.fixation)
            .print(360, 270, getCanvas("trial_background"))
        ,
        newTimer("fixation_timer", 500)
            .start()
            .wait()
        ,
        getText("fixation")
            .remove()
        ,
        newText("word", variable.word)
            .print(360, 270, getCanvas("trial_background"))
        ,
        newKey("answer", "FJ")
            .log()
            .wait()
        ,
        getText("word")
            .remove()
    )
    .log( "corrAns" , variable.corrAns)
);


We will collect data using the Amazon Mechanical Turk crowd-sourcing platform. This will allow you to collect data quickly and easily from internet workers who are paid for taking part in your study.

This part of data collection will be managed by Cassidy Wyatt, who is working with CP as a part time lab manager this year. You can reach Cassidy via email or via the Maryland Psycholinguistics Slack group.


Running the Experiments

For this experiment we will aim to gather data via the internet, using PC Ibex experiments that participants access via the Amazon Mechanical Turk platform.  

Internet-based data collection has seen steady growth over the past 10 years, and dramatic growth in 2020. It presents benefits and challenges, relative to traditional lab-based data collection.

You should aim to collect data from 10-20 participants in this study, all native speakers of English.

Instructions will be coming here soon on logistical details of preparing the experiments, including: instructions and participant training, IRB and informed consent, implementing in MTurk, avoiding bots and frauds, customer satisfaction, etc.

Analyzing the Results

In presenting the results of both studies, you should calculate average (mean) RTs for your different experimental conditions (e.g. high/low frequency, related/unrelated pairs). You should also include information about the variance associated with each average, e.g. Standard Error (Std. Err. = Std. Dev. / sq. root of number of participants).
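For example, the mean and standard error for a set of per-participant mean RTs could be computed as follows. This is a standalone JavaScript helper, separate from your PC Ibex script, and the RT values are made up for illustration.

```javascript
// Mean of an array of numbers
function mean(xs) {
    return xs.reduce((a, b) => a + b, 0) / xs.length;
}

// Standard error of the mean: Std. Err. = Std. Dev. / sqrt(n)
function standardError(xs) {
    const m = mean(xs);
    // Sample standard deviation (n - 1 in the denominator)
    const sd = Math.sqrt(xs.reduce((a, x) => a + (x - m) ** 2, 0) / (xs.length - 1));
    return sd / Math.sqrt(xs.length);
}

// Example: per-participant mean RTs (ms) in one condition
const highFreqRTs = [520, 480, 610, 550, 590];
console.log(mean(highFreqRTs));          // 550
console.log(standardError(highFreqRTs)); // ~23.45
```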

You should probably also exclude from your analysis trials on which the participant gave an incorrect response. You may also consider excluding outlier trials (e.g. the most extreme RT values for each participant, or any trials above some cut-off value that is likely to indicate distraction or equipment problems rather than slow processing). In your write-up, you should describe what data was excluded, and why you chose to do this.
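A sketch of this kind of exclusion step is shown below. The field names ("correct", "rt") and the 2000 ms cut-off are invented for illustration; match them to the columns of your own results file and justify your chosen cut-off in the write-up.

```javascript
// Keep only correct responses with RTs at or below a cut-off value
function excludeTrials(trials, cutoffMs) {
    return trials.filter(t => t.correct && t.rt <= cutoffMs);
}

const trials = [
    { rt: 540, correct: true },
    { rt: 850, correct: false },  // incorrect response: excluded
    { rt: 4200, correct: true },  // implausibly slow: excluded
    { rt: 610, correct: true },
];
console.log(excludeTrials(trials, 2000).length); // 2
```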

One question to consider: although you do not need to run statistical tests on the results of your studies, you should give an approximate indication of how great the difference between the means is in terms of numbers of standard errors. If the difference between the two means is less than the sum of the two standard errors, then it is unlikely that the difference would be reliable (at the p < 0.05 level). If the difference between the means is less than 2 standard errors, you should give an estimate of how many more participants you would need to run in order to reach this level (explain your reasoning).
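Since the standard error shrinks as 1 / sqrt(n), a rough estimate is available: if the difference between the means is d standard errors with n participants, then reaching 2 standard errors would take roughly n * (2 / d)^2 participants. This assumes, somewhat optimistically, that the means and standard deviations stay the same as you add participants.

```javascript
// Rough participant estimate: the standard error shrinks as 1 / sqrt(n),
// so a difference of d standard errors with n participants becomes
// targetSEs standard errors with roughly n * (targetSEs / d)^2 participants
// (assuming the means and standard deviations do not change).
function participantsNeeded(currentN, differenceInSEs, targetSEs = 2) {
    return Math.ceil(currentN * (targetSEs / differenceInSEs) ** 2);
}

// Example: with 10 participants the difference is 1 standard error;
// reaching 2 standard errors would take roughly 4x as many participants.
console.log(participantsNeeded(10, 1)); // 40
```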

In writing-up the results of your studies, you should explain your choice of methods and stimuli. You do not need to write at great length, but you do need to be explicit. Take a look at other relevant literature for guidance in writing up details of methods, etc.