Testing Materials


3DCCS Test


The Three Dimension-Change Card Sorting (3DCCS; see Deák & Narasimham, 2003) tests young children's ability to flexibly switch sorting choices when rules change. It is based on a simpler test by Philip D. Zelazo and colleagues (e.g., Zelazo, Frye, & Palfai, 1996), but it reveals more graded and nuanced patterns of flexibility and inflexibility in children's sorting responses.

The 3DCCS uses three rules and is roughly matched for complexity to the FIM tests (see below). Children must sort six cards into one of four boxes. Each card has a picture of an animal that differs in shape, color, and size. Children are told a rule (for example, the "color game," in which red, blue, and yellow items are sorted into different boxes). After they sort each card, they are told a new rule (e.g., the "shape game"), and after sorting the cards again they are told a third rule (e.g., the "size game"). The hitch is that children must sort each card into a different box when following each different rule.
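The rule structure can be sketched in a few lines of Python. This is an illustration only: the card attributes, box numbers, and scoring function below are hypothetical examples, not the lab's actual stimuli or scoring procedure.

```python
# Illustrative sketch of the 3DCCS sorting logic. Each card varies on
# three dimensions; each rule maps one dimension's values onto distinct
# boxes, so the correct box for a card changes when the rule changes.

# Hypothetical cards (the real stimulus set differs).
CARDS = [
    {"shape": "dog", "color": "red", "size": "big"},
    {"shape": "cat", "color": "blue", "size": "small"},
]

# Hypothetical box assignments (boxes numbered 0-3). Note that under
# the three rules, each card maps to three *different* boxes.
BOXES = {
    "color": {"red": 0, "blue": 1, "yellow": 2},
    "shape": {"dog": 1, "cat": 2, "bird": 3},
    "size":  {"big": 2, "small": 3},
}

def correct_box(card, rule):
    """Return the box a card belongs in under the current rule."""
    return BOXES[rule][card[rule]]

def score_sort(responses, rule):
    """Count rule-consistent sorts in a list of (card, chosen_box) pairs."""
    return sum(1 for card, box in responses if box == correct_box(card, rule))

# A child who keeps sorting by color after the rule switches to "shape"
# scores perfectly under the old rule and zero under the new one:
responses = [(CARDS[0], 0), (CARDS[1], 1)]   # color-based choices
print(score_sort(responses, "color"))        # 2
print(score_sort(responses, "shape"))        # 0
```

Because each card maps to a different box under each rule, a perseverative (old-rule) response is always scorable as an error under the new rule, which is what lets the test reveal graded patterns of flexibility.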
[3DCCS stimulus set, Cog. Dev. Lab, UCSD]

The 3DCCS uses pictures that can be downloaded below as a StuffIt archive. Each picture is in a separate .jpg file. They can be downloaded freely but remain proprietary intellectual property. If used in any public forum, the following work should be cited:

Deák, G. O., Narasimham, G., & Legare, C. (submitted). Cognitive flexibility in young children: Age, individual, and task differences.

Users (researchers at non-profit educational or research institutions) who would like the pictures in Photoshop file format, or files of formatted response recording sheets, should send a request to CDLab@cogsci.ucsd.edu.




FIM-Animates Test


The Flexible Induction of Meaning of Words for Animates (FIM-An; see Deák & Narasimham, 2003) tests young children's ability to flexibly use language cues to figure out what new words mean. In the test, children hear several words for a complex set of items. The cue changes with each word, and each cue implies a different meaning. The cues are phrases before a novel word: "is a...," "lives in a...," or "holds a...." In the FIM-An, the words imply a species, habitat, or possessed-object meaning. Below is a labeled picture from one of six sets of pictures; after hearing each word, children would be asked, for instance, to "find another one that lives in a [novel word]." They would generalize the word to one of four other pictures: one with the same kind of creature, one with a different creature in a similar setting, one with a different creature holding an identical object, or one completely dissimilar scene (control item).
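The cue-to-meaning structure at the heart of the FIM-An can be summarized as a simple lookup. The cue phrases come from the description above; the meaning labels are paraphrases for illustration, not the test's official coding categories:

```python
# Illustrative mapping between linguistic cues and the meaning each cue
# implies for a novel word in the FIM-An (labels are paraphrased).
CUE_TO_MEANING = {
    "is a":       "species",          # same kind of creature
    "lives in a": "habitat",          # different creature, similar setting
    "holds a":    "possessed object", # different creature, identical object
}

def implied_meaning(cue_phrase):
    """Which meaning should a child infer from the cue preceding a novel word?"""
    return CUE_TO_MEANING[cue_phrase]

print(implied_meaning("lives in a"))  # habitat
```

A flexible responder tracks this mapping and shifts generalization with each new cue; an inflexible responder keeps generalizing by the same dimension (often the species) regardless of cue.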



The FIM-An is based on an earlier test, the FIM-Objects (Deák, 2000; Deák & Narasimham, 2003). That test uses real, complex objects, so it is hard to replicate outside of the Cognitive Development Lab. The FIM-An uses pictures (created by G. Narasimham) that can be downloaded below as a StuffIt archive. Each of the 30 pictures is in a separate .jpg file. They can be downloaded freely but remain proprietary intellectual property. If used in any public forum, the following work should be cited:

Deák, G. O., Narasimham, G., & Legare, C. (submitted). Cognitive flexibility in young children: Age, individual, and task differences.

Users (researchers at non-profit educational or research institutions) who would like the pictures in Photoshop file format, or files of formatted response recording sheets, should send a request to CDLab@cogsci.ucsd.edu.





Computerized FIM-Animates Test


Nicholas Cepeda (University of Colorado-Boulder) is developing a computerized version of the FIM-An test. Check back soon!




Response Time Stage


[.jpg of response time stage, Cog. Dev. Lab, UCSD] Developed to measure preschool children's response time when choosing between "real items" (objects or analog pictures), the Response Time stage is accurate to between 0.01 and 0.1 sec. This is sufficient to reveal significant variability in young children's response times in complex cognitive tests (where standard deviations are often over 1 sec).


Children as young as 36 months (probably younger) can learn to use the stage within 1-3 trials. A set of 12 easy trials is used to measure each individual child's motor baseline. In the critical test trials, children can choose between any objects that fit well in the 58 x 18 cm display area. Visual access is controlled by the experimenter, and the child's response time is measured as [cognitive decision time + motor response time]. The motor response time component is based on a simple reaching response across a distance of approximately 2 cm, and is identical to the motor baseline trials. This distance is less than the typical distance from a hand rest to a touch screen, and thus adds an error term no greater than in any other method suitable for young children.
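Because the motor component of a test trial matches the baseline trials, an estimate of decision time falls out by subtraction. The sketch below illustrates this arithmetic with made-up numbers; it is an assumed analysis, not the lab's published procedure:

```python
# Illustrative sketch: measured RT = cognitive decision time + motor
# response time, and the motor component matches the baseline trials,
# so subtracting the mean baseline RT estimates decision time per trial.
from statistics import mean

def decision_times(test_rts, baseline_rts):
    """Estimate per-trial decision times (seconds) by removing the
    child's motor baseline from measured test-trial response times."""
    motor = mean(baseline_rts)          # e.g., mean of 12 easy baseline trials
    return [rt - motor for rt in test_rts]

baseline = [0.42, 0.40, 0.44, 0.38]     # hypothetical baseline RTs (sec)
tests    = [1.90, 2.35, 1.55]           # hypothetical test-trial RTs (sec)
print([round(t, 2) for t in decision_times(tests, baseline)])
```

Note that baseline variability also propagates into the estimate, which is one reason the stage's 0.01-0.1 sec accuracy matters relative to the 1+ sec standard deviations typical of young children's responses.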




Visuo-Motor Processing Speed Test


[visuo-motor processing speed test in situ, Cog. Dev. Lab, UCSD] Developed with Nicholas Cepeda (University of Colorado-Boulder), this MATLAB-based program uses touchscreen input to test young children's perceptual-motor response speed. Used with a laptop, the test is portable and easy to administer and learn. On every trial, children must find the identical colored geometric shape in an array and touch it. The arrays vary in difficulty. Users can specify various parameters (e.g., inter-trial interval; number of trials) in a GUI before each use.


Check back soon for a downloadable StuffIt archive of the stimulus images (.tif format) and the MATLAB application.




InfAttend/MESA


InfAttend/MESA is a software tool written in Visual Basic (by H. Kim and G. Deák). It has two modes: Habituation and Preferential Looking. The application provides precise, easy control over both testing paradigms, which are commonly used in infant perception/cognition studies, in a user-friendly, freely available Visual Basic tool. The work to develop InfAttend/MESA was supported by the National Science Foundation (SES-0527756 to Gedeon Deák). To use InfAttend/MESA, please contact Gedeon Deák at deak@cogsci.ucsd.edu.




VisLearn/MESA


VisLearn/MESA is a software tool written in Presentation (http://www.neurobs.com/). It supports the creation, control, and delivery of contingency- or sequence-learning stimuli for infants, children, or adults. It is written to be usable and modifiable with minimal difficulty by other Presentation users. The work to develop VisLearn/MESA was supported by the National Science Foundation (SES-0527756 to Gedeon Deák). To use VisLearn/MESA, please contact Gedeon Deák at deak@cogsci.ucsd.edu.