How to Get the Most out of Your Operant Training Chambers

In this post, we describe four common tasks you can use with your operant training chambers, and what exactly they measure. All of these tasks are easy to program with our Touch Panel operant chambers and TaskStudio software.

Learn more about our chambers and their unique specifications here.

Two-Choice Visual Discrimination Task

This task involves learning that one of two shapes displayed on the screen is correct. Touching the correct stimulus is rewarded, while touching the incorrect stimulus is punished with a timeout during which the mouse or rat cannot start another trial. Once the animal learns which stimulus is correct, the contingencies are reversed so that the previously rewarded stimulus now results in punishment. This reversal learning requires the animal to inhibit its automatic responses, a process that depends on the prefrontal cortex. The task is a strong measure of cognitive flexibility and a useful tool for examining animal models of many neuropsychiatric disorders.
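To make the trial logic concrete, here is a minimal sketch of the reward/timeout/reversal contingency in Python. This is illustrative only: the function names and the eight-in-a-row reversal criterion are assumptions for the example, not TaskStudio's actual API or default settings.

```python
# Sketch of two-choice discrimination with reversal learning.
# REVERSAL_CRITERION (8 consecutive correct responses) is an assumed
# example value, not a TaskStudio default.
REVERSAL_CRITERION = 8

def run_session(choices, rewarded="A"):
    """Score a sequence of touches ('A' or 'B').

    Correct touches are rewarded; incorrect touches earn a timeout.
    After REVERSAL_CRITERION consecutive correct responses, the rewarded
    stimulus reverses, so the previously correct choice is now punished.
    Returns the per-trial outcomes and the final rewarded stimulus.
    """
    results = []
    streak = 0
    for choice in choices:
        if choice == rewarded:
            results.append("reward")
            streak += 1
            if streak >= REVERSAL_CRITERION:
                rewarded = "B" if rewarded == "A" else "A"  # reversal
                streak = 0
        else:
            results.append("timeout")  # animal cannot start a new trial yet
            streak = 0
    return results, rewarded
```

Running eight correct trials in a row triggers the reversal, after which the same choice would produce a timeout.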

Example of the two-choice visual discrimination task.

Paired Associate Learning (PAL)

In this task, mice or rats learn and remember which of three objects belongs in which of three spatial locations. On each trial, two different objects are presented: one in its correct location and the other in an incorrect location. The rat or mouse must choose the stimulus that is in its correct location. This task relies on the hippocampus and can be used to test hippocampal dysfunction as seen in Alzheimer's disease.
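The trial structure above can be sketched in a few lines. The object names and object-to-location mapping below are made up for illustration; only the constraint from the task description matters: one object appears in its correct location, the other in an incorrect one.

```python
import random

# Assumed example mapping of three objects to three screen locations (0-2).
CORRECT_LOCATION = {"flower": 0, "plane": 1, "spider": 2}

def make_trial(rng=random):
    """Generate one PAL trial: two distinct objects, one placed in its
    correct location (the rewarded choice) and one displaced (the foil)."""
    obj_a, obj_b = rng.sample(sorted(CORRECT_LOCATION), 2)
    loc_a = CORRECT_LOCATION[obj_a]  # correct placement
    # The foil must sit in a location that is neither its own correct
    # location nor the one already occupied; with 3 locations exactly
    # one such spot remains.
    loc_b = next(loc for loc in (0, 1, 2)
                 if loc not in (loc_a, CORRECT_LOCATION[obj_b]))
    return {"correct": (obj_a, loc_a), "foil": (obj_b, loc_b)}
```

Each generated trial has exactly one rewarded stimulus, which is what forces the animal to recall the object-location pairing rather than a single object or a single location.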

Visuomotor Conditional Learning (VMCL)

This is a stimulus-response task. The rat or mouse must learn that each of two stimuli is paired with a particular location: when stimulus A is presented, the animal must always respond at location A; when stimulus B is presented, it must always respond at location B. This type of test is useful for examining motor dysfunction in rat and mouse models of Parkinson's disease and Huntington's disease.
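The VMCL contingency reduces to a fixed stimulus-to-location mapping; a minimal sketch (names are illustrative, not TaskStudio identifiers):

```python
# Assumed example pairing of each stimulus with its rewarded location.
STIMULUS_TO_LOCATION = {"stim_A": "location_A", "stim_B": "location_B"}

def score_response(stimulus, touched_location):
    """A response is rewarded only if the touched location matches the
    one paired with the presented stimulus; otherwise it earns a timeout."""
    if STIMULUS_TO_LOCATION[stimulus] == touched_location:
        return "reward"
    return "timeout"
```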

5-Choice Serial Reaction Time (5CSRT)

This task requires the rodent to respond to a brief visual stimulus presented randomly in one of five locations. It is used to measure sustained attention and impulse control in mice and rats and is useful for animal models of ADHD.
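A sketch of how a single 5CSRT trial might be classified. The outcome names follow common usage in the 5CSRT literature, but the exact categories and function names here are assumptions for illustration:

```python
import random

N_LOCATIONS = 5  # the five possible stimulus locations

def score_trial(lit_location, response, responded_during_delay=False):
    """Classify one 5CSRT trial.

    - "premature": touch before the stimulus appears (impulsivity measure)
    - "omission":  no touch within the response window (attention measure)
    - "correct"/"incorrect": touch at the lit vs. an unlit location
    """
    if responded_during_delay:
        return "premature"
    if response is None:
        return "omission"
    return "correct" if response == lit_location else "incorrect"

# The stimulus appears at a random one of the five locations each trial.
lit = random.randrange(N_LOCATIONS)
```

Accuracy (correct vs. incorrect) indexes attention, while the rate of premature responses indexes impulse control, which is why the same task yields both measures.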


Part V – Tokyo Medical and Dental University

Upon my return to Tokyo, I had one final visit with Dr. Isomura at Tokyo Medical and Dental University. He originally developed the TaskForcer for rats with O'Hara more than eight years ago!

Dr. Isomura’s research focuses on understanding information processing in the motor cortex during motor skill learning. To do this, he performs in vivo whole-cell patch-clamp recordings in the motor cortex as animals learn the lever-pull task that was specifically designed for the TaskForcer.

What makes simultaneous neural recording during operant behavior possible with the TaskForcer is its unique spout-lever. It was specially designed by Dr. Isomura and O’Hara so that the reward (liquid from the spout) and the operandum (lever) are combined into one. In this way, the animal can still obtain a reward for pulling the lever even while its body is restrained, allowing operant learning during simultaneous neurophysiological recording.

Dr. Isomura explains, “Since the animals must learn to perform the lever pull task while under head fixation, we wanted to make sure that the animal could access the reward with minimal head movement, but still be motivated to perform the task.”

Isomura also explains, “We were surprised that rats started pulling the lever the very first day that we put them in the chamber. The lever pull task is very robust. We don’t see animal attrition from failure of animals to learn the task.”

The TaskForcer with a stereotaxic setup in a sound attenuating box.

Me with Dr. Takahashi at Doshisha University

“With the TaskForcer, we can reliably get extremely precise single unit recordings during motor behaviors which allows us to examine causal links between neural activity and behavior in great detail.” – Dr. Isomura

Me with O’Hara team members alongside Dr. Isomura (left).