psychopy.data - functions for storing/saving/analysing data

Contents:

Utility functions:

  • importConditions() - to load a list of dicts from a csv/excel file

  • functionFromStaircase() - to convert a staircase into its psychometric function

  • bootStraps() - generate a set of bootstrap resamples from a dataset

  • getDateStr() - provide a date string (in format suitable for filenames)

Curve Fitting:


ExperimentHandler

class psychopy.data.ExperimentHandler(name='', version='', extraInfo=None, runtimeInfo=None, originPath=None, savePickle=True, saveWideText=True, sortColumns=False, dataFileName='', autoLog=True, appendFiles=False)[source]

A container class for keeping track of multiple loops/handlers

Useful for generating a single data file from an experiment with many different loops (e.g. interleaved staircases or loops within loops).

Usage:

exp = data.ExperimentHandler(name='Face Preference', version='0.1.0')

Parameters:
name : a string or unicode

As a useful identifier later

version : usually a string (e.g. ‘1.1.0’)

To keep track of which version of the experiment was run

extraInfo : a dictionary

Containing useful information about this run (e.g. {‘participant’:’jwp’,’gender’:’m’,’orientation’:90} )

runtimeInfo : psychopy.info.RunTimeInfo

Containing information about the system as detected at runtime

originPath : string or unicode

The path and filename of the originating script/experiment. If not provided, this will be determined as the path of the calling script.

dataFileName : string

This is defined in advance and the file will be saved at any point that the handler is removed or discarded (unless .abort() has been called in advance). The handler will attempt to populate the file even in the event of a (not too serious) crash!

savePickle : True (default) or False

saveWideText : True (default) or False

sortColumns : str or bool

How (if at all) to sort columns in the data file, if none is given to saveAsWideText. Can be:
  • “alphabetical”, “alpha”, “a” or True: Sort alphabetically by header name
  • “priority”, “pr” or “p”: Sort according to priority
  • other: Do not sort; columns remain in the order they were added

autoLog : True (default) or False

_getAllParamNames()[source]

Returns the attribute names of loop parameters (trialN etc) that the current set of loops contain, ready to build a wide-format data file.

_getExtraInfo()[source]

Get the names and vals from the extraInfo dict (if it exists)

_getLoopInfo(loop)[source]

Returns the attribute names and values for the current trial of a particular loop. Does not return data inputs from the subject, only info relating to the trial execution.

_guessPriority(name)[source]

Get a best guess at the priority of a column based on its name

Parameters:

name (str) – Name of the column

Returns:

One of the following:
  • HIGH (19): Important columns which are near the front of the data file
  • MEDIUM (9): Possibly important columns which are around the middle of the data file
  • LOW (-1): Columns unlikely to be important which are at the end of the data file

NOTE: Values returned from this function are 1 less than the values in constants.priority, so columns whose priority was guessed sit behind equivalently prioritised columns whose priority was specified explicitly.

Return type:

int

abort()[source]

Inform the ExperimentHandler that the run was aborted.

Experiment handler will attempt automatically to save data (even in the event of a crash if possible). So if you quit your script early you may want to tell the Handler not to save out the data files for this run. This is the method that allows you to do that.

addAnnotation(value)[source]

Add an annotation at the current point in the experiment

Parameters:

value (str) – Value of the annotation

addData(name, value, row=None, priority=None)[source]

Add the data with a given name to the current experiment.

Typically the user does not need to call this function; if you add your data to the loop and have already added the loop to the experiment, the loop will automatically inform the experiment that it has received data.

Multiple data name/value pairs can be added to any given entry of the data file and are considered part of the same entry until nextEntry() is called.

e.g.:

# add some data for this trial
exp.addData('resp.rt', 0.8)
exp.addData('resp.key', 'k')
# end of trial - move to next line in data output
exp.nextEntry()
Parameters:
  • name (str) – Name of the column to add data as.

  • value (any) – Value to add

  • row (int or None) – Row in which to add this data. Leave as None to add to the current entry.

  • priority (int) – Priority value to set the column to; higher priority columns appear nearer to the start of the data file. Use values from constants.priority as landmark values:
      CRITICAL: Always at the start of the data file, generally reserved for Routine start times
      HIGH: Important columns which are near the front of the data file
      MEDIUM: Possibly important columns which are around the middle of the data file
      LOW: Columns unlikely to be important which are at the end of the data file
      EXCLUDE: Always at the end of the data file, actively marked as unimportant
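The entry mechanics described for addData()/nextEntry() can be sketched with a toy class. This is illustrative only; the EntryCollector name and structure are made up, not PsychoPy's implementation:

```python
# Hypothetical sketch of the entry logic, not PsychoPy's actual code:
# name/value pairs accumulate in one row until nextEntry() closes it.

class EntryCollector:
    def __init__(self):
        self.entries = []      # completed rows of the data file
        self.thisEntry = {}    # data for the current trial

    def addData(self, name, value):
        # successive calls before nextEntry() land in the same row
        self.thisEntry[name] = value

    def nextEntry(self):
        # close the current row and start a fresh one
        self.entries.append(self.thisEntry)
        self.thisEntry = {}

exp = EntryCollector()
exp.addData('resp.rt', 0.8)
exp.addData('resp.key', 'k')
exp.nextEntry()                # row 1 holds both resp values
exp.addData('resp.rt', 1.2)
exp.nextEntry()                # row 2 holds only the new value
```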

addLoop(loopHandler)[source]

Add a loop such as a TrialHandler or StairHandler Data from this loop will be included in the resulting data files.

close()[source]
property currentLoop

Return the loop which we are currently in. This will be either a handle to a loop, such as a TrialHandler or StairHandler, or the handle of the ExperimentHandler itself if we are not in a loop.

getAllEntries()[source]

Fetches a copy of all the entries, including a final (orphan) entry if one exists. This allows entries to be saved even if nextEntry() has not yet been called.

Returns:

A copy of (not a pointer to) the entries

getJSON(priorityThreshold=-9)[source]

Get the experiment data as a JSON string.

Parameters:

priorityThreshold (int) – Output will only include columns whose priority is greater than or equal to this value. Use values in psychopy.constants.priority as a guideline for priority levels. Default is -9 (constants.priority.EXCLUDE + 1)

Returns:

JSON string with the following fields:
  • ‘type’: indicates that this is data from an ExperimentHandler (will always be “trials_data”)
  • ‘trials’: a list of dicts representing the requested trials data
  • ‘priority’: a dict of column names

Return type:

str

getPriority(name)[source]

Get the priority value for a given column. If no priority value is stored, returns best guess based on column name.

Parameters:

name (str) – Column name

Returns:

The priority value stored/guessed for this column, most likely a value from constants.priority, one of:
  • CRITICAL (30): Always at the start of the data file, generally reserved for Routine start times
  • HIGH (20): Important columns which are near the front of the data file
  • MEDIUM (10): Possibly important columns which are around the middle of the data file
  • LOW (0): Columns unlikely to be important which are at the end of the data file
  • EXCLUDE (-10): Always at the end of the data file, actively marked as unimportant

Return type:

int
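As a sketch of how these landmark values order columns: the numeric constants are taken from the documentation above, but the column names and the sortedColumns() helper are hypothetical, not PsychoPy's sorting code.

```python
# Sketch of priority-based column ordering. The constants match the
# documented values; everything else here is invented for illustration.

CRITICAL, HIGH, MEDIUM, LOW, EXCLUDE = 30, 20, 10, 0, -10

priorities = {
    'thisRow.t': CRITICAL,      # routine start time
    'resp.rt': HIGH,            # explicitly prioritised
    'resp.keys': HIGH - 1,      # guessed priority sits 1 below (see _guessPriority)
    'frameRate': LOW,
    'psychopyVersion': EXCLUDE,
}

def sortedColumns(priorities):
    # higher priority -> nearer the start of the data file
    return sorted(priorities, key=priorities.get, reverse=True)

print(sortedColumns(priorities))
```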

loopEnded(loopHandler)[source]

Informs the experiment handler that the loop is finished and not to include its values in further entries of the experiment.

This method is called by the loop itself if it ends its iterations, so is not typically needed by the user.

nextEntry()[source]

Calling nextEntry indicates to the ExperimentHandler that the current trial has ended and so further addData() calls correspond to the next trial.

pause()[source]

Set status to be PAUSED.

resume()[source]

Set status to be STARTED.

saveAsPickle(fileName, fileCollisionMethod='rename')[source]

Basically just saves a copy of self (with data) to a pickle file.

This can be reloaded if necessary and further analyses carried out.

Parameters:

fileCollisionMethod: Collision method passed to handleFileCollision()

saveAsWideText(fileName, delim='auto', matrixOnly=False, appendFile=None, encoding='utf-8-sig', fileCollisionMethod='rename', sortColumns=None)[source]

Saves a long, wide-format text file, with one line representing the attributes and data for a single trial. Suitable for analysis in R and SPSS.

If appendFile=True then the data will be added to the bottom of an existing file. Otherwise, if the file exists already it will be kept and a new file will be created with a slightly different name. If you want to overwrite the old file, pass ‘overwrite’ to fileCollisionMethod.

If matrixOnly=True then the file will not contain a header row, which can be handy if you want to append data to an existing file of the same format.

Parameters:
fileName:

if extension is not specified, ‘.csv’ will be appended if the delimiter is ‘,’, else ‘.tsv’ will be appended. Can include path info.

delim:

allows the user to use a delimiter other than the default tab (“,” is popular with file extension “.csv”)

matrixOnly:

outputs the data with no header row.

appendFile:

will add this output to the end of the specified file if it already exists.

encoding:

The encoding to use when saving the file. Defaults to utf-8-sig.

fileCollisionMethod:

Collision method passed to handleFileCollision()

sortColumns : str or bool

How (if at all) to sort columns in the data file. Can be:
  • “alphabetical”, “alpha”, “a” or True: Sort alphabetically by header name
  • “priority”, “pr” or “p”: Sort according to priority
  • other: Do not sort; columns remain in the order they were added
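The wide-format layout described above (one header row unless matrixOnly, then one line per entry) can be sketched with the csv module. The column names are invented for the example and this is not PsychoPy's writer code:

```python
# Illustrative sketch of the wide-format output described above.
import csv
import io

entries = [
    {'trialN': 0, 'resp.rt': 0.8, 'resp.key': 'k'},
    {'trialN': 1, 'resp.rt': 1.1, 'resp.key': 'j'},
]

# columns in the order they were first added (the "do not sort" case)
header = list(dict.fromkeys(k for e in entries for k in e))

out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=header, delimiter=',')
writer.writeheader()        # skipped when matrixOnly=True
writer.writerows(entries)
print(out.getvalue())
```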

setPriority(name, value=20)[source]

Set the priority of a column in the data file.

Parameters:
  • name (str) – Name of the column, e.g. text.started

  • value (int) – Priority value to set the column to; higher priority columns appear nearer to the start of the data file. Use values from constants.priority as landmark values:
      CRITICAL (30): Always at the start of the data file, generally reserved for Routine start times
      HIGH (20): Important columns which are near the front of the data file
      MEDIUM (10): Possibly important columns which are around the middle of the data file
      LOW (0): Columns unlikely to be important which are at the end of the data file
      EXCLUDE (-10): Always at the end of the data file, actively marked as unimportant

property status
stop()[source]

Set status to be FINISHED.

timestampOnFlip(win, name, format=<class 'float'>)[source]

Add a timestamp (in the future) to the current row

Parameters:
  • win (psychopy.visual.Window) – The window object that we’ll base the timestamp flip on

  • name (str) – The name of the column in the datafile being written, such as ‘myStim.stopped’

  • format (str, class or None) – Format in which to return time, see clock.Timestamp.resolve() for more info. Defaults to float.

TrialHandler

class psychopy.data.TrialHandler(trialList, nReps, method='random', dataTypes=None, extraInfo=None, seed=None, originPath=None, name='', autoLog=True)[source]

Class to handle trial sequencing and data storage.

Calls to .next() will fetch the next trial object given to this handler, according to the method specified (random, sequential, fullRandom). Calls will raise a StopIteration error if trials have finished.

See demo_trialHandler.py

The psydat file format is literally just a pickled copy of the TrialHandler object that saved it. You can open it with:

from psychopy.tools.filetools import fromFile
dat = fromFile(path)

Then you’ll find that dat has the attributes described below.

Parameters:
trialList: a simple list (or flat array) of dictionaries

specifying conditions. This can be imported from an excel/csv file using importConditions()

nReps: number of repeats for all conditions

method: ‘random’, ‘sequential’, or ‘fullRandom’

‘sequential’ obviously presents the conditions in the order they appear in the list. ‘random’ will result in a shuffle of the conditions on each repeat, but all conditions occur once before the second repeat etc. ‘fullRandom’ fully randomises the trials across repeats as well, which means you could potentially run all trials of one condition before any trial of another.

dataTypes: (optional) list of names for data storage.

e.g. [‘corr’,’rt’,’resp’]. If not provided then these will be created as needed during calls to addData()

extraInfo: A dictionary

This will be stored alongside the data and usually describes the experiment and subject ID, date etc.

seed: an integer

If provided then this fixes the random number generator to use the same pattern of trials, by seeding its startpoint

originPath: a string describing the location of the script / experiment file path

The psydat file format will store a copy of the experiment if possible. If originPath==None is provided here then the TrialHandler will still store a copy of the script where it was created. If originPath==-1 then nothing will be stored.
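The difference between the three sequencing methods can be sketched in plain Python. The conditions, nReps and seed here are made up for the example; this is not the handler's actual sequencing code:

```python
# Plain-Python sketch of 'sequential', 'random' and 'fullRandom',
# assuming 3 conditions and nReps=2. Illustrative only.
import random

conditions = ['A', 'B', 'C']
nReps = 2
rng = random.Random(42)   # a fixed seed reproduces the sequence

# 'sequential': conditions in the given order, repeated nReps times
sequential = conditions * nReps

# 'random': a fresh shuffle per repeat, but every condition occurs
# once before the second repeat begins
randomised = []
for _ in range(nReps):
    block = conditions[:]
    rng.shuffle(block)
    randomised.extend(block)

# 'fullRandom': shuffled across repeats, so all trials of one
# condition could precede any trial of another
fullRandom = conditions * nReps
rng.shuffle(fullRandom)
```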

Attributes (after creation):
.data - a dictionary (or more strictly, a DataHandler subclass of a dictionary) of numpy arrays, one for each data type stored

.trialList - the original list of dicts, specifying the conditions

.thisIndex - the index of the current trial in the original conditions list

.nTotal - the total number of trials that will be run

.nRemaining - the total number of trials remaining

.thisN - total trials completed so far

.thisRepN - which repeat you are currently on

.thisTrialN - which trial number within that repeat

.thisTrial - a dictionary giving the parameters of the current trial

.finished - True/False for have we finished yet

.extraInfo - the dictionary of extra info as given at beginning

.origin - the contents of the script or builder experiment that created the handler

_createOutputArray(stimOut, dataOut, delim=None, matrixOnly=False)[source]

Does the leg-work for saveAsText and saveAsExcel. Combines stimOut with ._parseDataOutput()

_createOutputArrayData(dataOut)[source]

This just creates the dataOut part of the output matrix. It is called by _createOutputArray() which creates the header line and adds the stimOut columns

_createSequence()[source]

Pre-generates the sequence of trial presentations (for non-adaptive methods). This is called automatically when the TrialHandler is initialised so doesn’t need an explicit call from the user.

The returned sequence has form indices[stimN][repN] Example: sequential with 6 trialtypes (rows), 5 reps (cols), returns:

[[0 0 0 0 0]
[1 1 1 1 1]
[2 2 2 2 2]
[3 3 3 3 3]
[4 4 4 4 4]
[5 5 5 5 5]]
These 30 trials will be returned by .next() in the order:

0, 1, 2, 3, 4, 5, 0, 1, 2, … … 3, 4, 5

To add a new type of sequence (as of v1.65.02):
  • add the sequence generation code here
  • adjust “if self.method in [ …]:” in both __init__ and .next()
  • adjust allowedVals in experiment.py -> shows up in DlgLoopProperties

Note that users can make any sequence whatsoever outside of PsychoPy and specify sequential order; any order is possible this way.

_makeIndices(inputArray)[source]

Creates an array of tuples the same shape as the input array where each tuple contains the indices to itself in the array.

Useful for shuffling and then using as a reference.

_terminate()

Remove references to ourselves in experiments and terminate the loop

addData(thisType, value, position=None)[source]

Add data for the current trial

getCurrentTrial()[source]

Returns the condition for the current trial, without advancing the trials.

getEarlierTrial(n=-1)[source]

Returns the condition information from n trials previously. Useful for comparisons in n-back tasks. Returns ‘None’ if trying to access a trial prior to the first.

getExp()

Return the ExperimentHandler that this handler is attached to, if any. Returns None if not attached

getFutureTrial(n=1)[source]

Returns the condition for n trials into the future, without advancing the trials. A negative n returns a previous (past) trial. Returns ‘None’ if attempting to go beyond the last trial.

getOriginPathAndFile(originPath=None)

Attempts to determine the path of the script that created this data file and returns both the path to that script and its contents. Useful to store the entire experiment with the data.

If originPath is provided (e.g. from Builder) then this is used otherwise the calling script is the originPath (fine from a standard python script).

next()

Advances to the next trial and returns it. Updates the attributes thisTrial, thisTrialN and thisIndex. If the trials have ended, this method will raise a StopIteration error. This can be handled with code such as:

trials = data.TrialHandler(.......)
for eachTrial in trials:  # automatically stops when done
    # do stuff

or:

trials = data.TrialHandler(.......)
while True:  # ie forever
    try:
        thisTrial = trials.next()
    except StopIteration:  # we got a StopIteration error
        break #break out of the forever loop
    # do stuff here for the trial
printAsText(stimOut=None, dataOut=('all_mean', 'all_std', 'all_raw'), delim='\t', matrixOnly=False)

Exactly like saveAsText() except that the output goes to the screen instead of a file

saveAsExcel(fileName, sheetName='rawData', stimOut=None, dataOut=('n', 'all_mean', 'all_std', 'all_raw'), matrixOnly=False, appendFile=True, fileCollisionMethod='rename')

Save a summary data file in Excel OpenXML format workbook (xlsx) for processing in most spreadsheet packages. This format is compatible with versions of Excel (2007 or greater) and with OpenOffice (>=3.0).

It has the advantage over the simpler text files (see TrialHandler.saveAsText() ) that data can be stored in multiple named sheets within the file. So you could have a single file named after your experiment and then have one worksheet for each participant. Or you could have one file for each participant and then multiple sheets for repeated sessions etc.

The file extension .xlsx will be added if not given already.

Parameters:
fileName: string

the name of the file to create or append. Can include relative or absolute path

sheetName: string

the name of the worksheet within the file

stimOut: list of strings

the attributes of the trial characteristics to be output. To use this you need to have provided a list of dictionaries as the trialList parameter of the TrialHandler, and give here the names of the entries in those dictionaries that you want output

dataOut: list of strings

specifying the dataType and the analysis to be performed, in the form dataType_analysis. The data can be any of the types that you added using trialHandler.data.add() and the analysis can be either ‘raw’ or most things in the numpy library, including ‘mean’,’std’,’median’,’max’,’min’. e.g. rt_max will give a column of max reaction times across the trials assuming that rt values have been stored. The default values will output the raw, mean and std of all datatypes found.

appendFile: True or False

If False any existing file with this name will be kept and a new file will be created with a slightly different name. If you want to overwrite the old file, pass ‘overwrite’ to fileCollisionMethod. If True then a new worksheet will be appended. If a worksheet already exists with that name a number will be added to make it unique.

fileCollisionMethod: string

Collision method (‘rename’, ‘overwrite’, or ‘fail’) passed to handleFileCollision(). This is ignored if appendFile is True.
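The dataType_analysis convention used by dataOut can be sketched as follows. The analyse() helper is hypothetical and substitutes the statistics module for numpy; PsychoPy's own implementation differs:

```python
# Sketch of parsing a dataType_analysis string such as 'rt_max'.
# Illustrative only; not the actual saveAsExcel/saveAsText internals.
import statistics

data = {'rt': [0.51, 0.72, 0.49, 0.66]}   # values stored per data type

def analyse(dataOut, data):
    dataType, analysis = dataOut.rsplit('_', 1)   # 'rt_max' -> ('rt', 'max')
    values = data[dataType]
    if analysis == 'raw':
        return values
    funcs = {'mean': statistics.mean, 'std': statistics.pstdev,
             'median': statistics.median, 'max': max, 'min': min}
    return funcs[analysis](values)

print(analyse('rt_max', data))   # the max reaction time across trials
```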

saveAsJson(fileName=None, encoding='utf-8', fileCollisionMethod='rename')[source]

Serialize the object to the JSON format.

Parameters:
  • fileName (string, or None) – the name of the file to create or append. Can include a relative or absolute path. If None, will not write to a file, but return an in-memory JSON object.

  • encoding (string, optional) – The encoding to use when writing the file.

  • fileCollisionMethod (string) – Collision method passed to handleFileCollision(). Can be either of ‘rename’, ‘overwrite’, or ‘fail’.

Notes

Currently, a copy of the object is created, and the copy’s .origin attribute is set to an empty string before serializing because loading the created JSON file would sometimes fail otherwise.

saveAsPickle(fileName, fileCollisionMethod='rename')

Basically just saves a copy of the handler (with data) to a pickle file.

This can be reloaded if necessary and further analyses carried out.

Parameters:

fileCollisionMethod: Collision method passed to handleFileCollision()

saveAsText(fileName, stimOut=None, dataOut=('n', 'all_mean', 'all_std', 'all_raw'), delim=None, matrixOnly=False, appendFile=True, summarised=True, fileCollisionMethod='rename', encoding='utf-8-sig')

Write a text file with the data and various chosen stimulus attributes

Parameters:

fileName:

will have .tsv appended and can include path info.

stimOut:

the stimulus attributes to be output. To use this you need to use a list of dictionaries and give here the names of dictionary keys that you want as strings

dataOut:

a list of strings specifying the dataType and the analysis to be performed, in the form dataType_analysis. The data can be any of the types that you added using trialHandler.data.add() and the analysis can be either ‘raw’ or most things in the numpy library, including ‘mean’, ‘std’, ‘median’, ‘max’, ‘min’… The default values will output the raw, mean and std of all datatypes found

delim:

allows the user to use a delimiter other than tab (“,” is popular with file extension “.csv”)

matrixOnly:

outputs the data with no header row or extraInfo attached

appendFile:

will add this output to the end of the specified file if it already exists

fileCollisionMethod:

Collision method passed to handleFileCollision()

encoding:

The encoding to use when saving the file. Defaults to utf-8-sig.

saveAsWideText(fileName, delim=None, matrixOnly=False, appendFile=True, encoding='utf-8-sig', fileCollisionMethod='rename')[source]

Write a text file with the session, stimulus, and data values from each trial in chronological order. Also return a pandas DataFrame containing the same information as the file.

That is, unlike ‘saveAsText’ and ‘saveAsExcel’:
  • each row comprises information from only a single trial.

  • no summarizing is done (such as collapsing to produce mean and standard deviation values across trials).

This ‘wide’ format, as expected by R and various other analysis programs for creating dataframes, means that some information must be repeated on every row.

In particular, if the trialHandler’s ‘extraInfo’ exists, then each entry in there occurs in every row. In Builder, this will include any entries in the ‘Experiment info’ field of the ‘Experiment settings’ dialog. In Coder, this information can be set using something like:

myTrialHandler.extraInfo = {'SubjID': 'Joan Smith',
                            'Group': 'Control'}
Parameters:
fileName:

if extension is not specified, ‘.csv’ will be appended if the delimiter is ‘,’, else ‘.tsv’ will be appended. Can include path info.

delim:

allows the user to use a delimiter other than the default tab (“,” is popular with file extension “.csv”)

matrixOnly:

outputs the data with no header row.

appendFile:

will add this output to the end of the specified file if it already exists.

fileCollisionMethod:

Collision method passed to handleFileCollision()

encoding:

The encoding to use when saving the file. Defaults to utf-8-sig.

setExp(exp)

Sets the ExperimentHandler that this handler is attached to

Do NOT attempt to set the experiment using:

trials._exp = myExperiment

because it needs to be performed using the weakref module.

TrialHandler2

class psychopy.data.TrialHandler2(trialList, nReps, method='random', dataTypes=None, extraInfo=None, seed=None, originPath=None, name='', autoLog=True)[source]

Class to handle trial sequencing and data storage.

Calls to .next() will fetch the next trial object given to this handler, according to the method specified (random, sequential, fullRandom). Calls will raise a StopIteration error if trials have finished.

See demo_trialHandler.py

The psydat file format is literally just a pickled copy of the TrialHandler object that saved it. You can open it with:

from psychopy.tools.filetools import fromFile
dat = fromFile(path)

Then you’ll find that dat has the attributes described below.

Parameters:
trialList: a filename, or a simple list (or flat array) of dictionaries specifying conditions

nReps: number of repeats for all conditions

method: ‘random’, ‘sequential’, or ‘fullRandom’

‘sequential’ obviously presents the conditions in the order they appear in the list. ‘random’ will result in a shuffle of the conditions on each repeat, but all conditions occur once before the second repeat etc. ‘fullRandom’ fully randomises the trials across repeats as well, which means you could potentially run all trials of one condition before any trial of another.

dataTypes: (optional) list of names for data storage.

e.g. [‘corr’,’rt’,’resp’]. If not provided then these will be created as needed during calls to addData()

extraInfo: A dictionary

This will be stored alongside the data and usually describes the experiment and subject ID, date etc.

seed: an integer

If provided then this fixes the random number generator to use the same pattern of trials, by seeding its startpoint.

originPath: a string describing the location of the script / experiment file path

The psydat file format will store a copy of the experiment if possible. If originPath==None is provided here then the TrialHandler will still store a copy of the script where it was created. If originPath==-1 then nothing will be stored.

Attributes (after creation):
.data - a dictionary of numpy arrays, one for each data type stored

.trialList - the original list of dicts, specifying the conditions

.thisIndex - the index of the current trial in the original conditions list

.nTotal - the total number of trials that will be run

.nRemaining - the total number of trials remaining

.thisN - total trials completed so far

.thisRepN - which repeat you are currently on

.thisTrialN - which trial number within that repeat

.thisTrial - a dictionary giving the parameters of the current trial

.finished - True/False for have we finished yet

.extraInfo - the dictionary of extra info as given at beginning

.origin - the contents of the script or builder experiment that created the handler

_terminate()

Remove references to ourselves in experiments and terminate the loop

abortCurrentTrial(action='random')[source]

Abort the current trial.

Calling this during an experiment replaces the current trial. The condition related to the aborted trial will be re-inserted elsewhere in the session, depending on the method in use for sampling conditions.

Parameters:

action (str) – Action to take with the aborted trial. Can be either of ‘random’, or ‘append’. The default action is ‘random’.

Notes

  • When using action=’random’, the RNG state for the trial handler is not used.

addData(thisType, value)[source]

Add a piece of data to the current trial

property data

Returns a pandas DataFrame of the trial data so far. This is a read-only attribute; you can’t directly modify TrialHandler.data.

Note that data are stored internally as a list of dictionaries, one per trial. These are converted to a DataFrame on access.
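A minimal sketch of that behaviour, assuming pandas is available; the field names are invented for the example:

```python
# Trials kept as a list of dicts, converted to a DataFrame on access,
# mirroring the .data property described above. Illustrative only.
import pandas as pd

trialData = [
    {'trialN': 0, 'rt': 0.8},
    {'trialN': 1, 'rt': 1.1},
]
df = pd.DataFrame(trialData)   # one row per trial, one column per key
```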

getEarlierTrial(n=-1)[source]

Returns the condition information from n trials previously. Useful for comparisons in n-back tasks. Returns ‘None’ if trying to access a trial prior to the first.

getExp()

Return the ExperimentHandler that this handler is attached to, if any. Returns None if not attached

getFutureTrial(n=1)[source]

Returns the condition for n trials into the future, without advancing the trials. Returns ‘None’ if attempting to go beyond the last trial.

getOriginPathAndFile(originPath=None)

Attempts to determine the path of the script that created this data file and returns both the path to that script and its contents. Useful to store the entire experiment with the data.

If originPath is provided (e.g. from Builder) then this is used otherwise the calling script is the originPath (fine from a standard python script).

next()

Advances to the next trial and returns it. Updates the attributes thisTrial, thisTrialN and thisIndex. If the trials have ended, this method will raise a StopIteration error. This can be handled with code such as:

trials = data.TrialHandler(.......)
for eachTrial in trials:  # automatically stops when done
    # do stuff

or:

trials = data.TrialHandler(.......)
while True:  # ie forever
    try:
        thisTrial = trials.next()
    except StopIteration:  # we got a StopIteration error
        break  # break out of the forever loop
    # do stuff here for the trial
printAsText(stimOut=None, dataOut=('all_mean', 'all_std', 'all_raw'), delim='\t', matrixOnly=False)

Exactly like saveAsText() except that the output goes to the screen instead of a file

saveAsExcel(fileName, sheetName='rawData', stimOut=None, dataOut=('n', 'all_mean', 'all_std', 'all_raw'), matrixOnly=False, appendFile=True, fileCollisionMethod='rename')

Save a summary data file in Excel OpenXML format workbook (xlsx) for processing in most spreadsheet packages. This format is compatible with versions of Excel (2007 or greater) and with OpenOffice (>=3.0).

It has the advantage over the simpler text files (see TrialHandler.saveAsText() ) that data can be stored in multiple named sheets within the file. So you could have a single file named after your experiment and then have one worksheet for each participant. Or you could have one file for each participant and then multiple sheets for repeated sessions etc.

The file extension .xlsx will be added if not given already.

Parameters:
fileName: string

the name of the file to create or append. Can include relative or absolute path

sheetName: string

the name of the worksheet within the file

stimOut: list of strings

the attributes of the trial characteristics to be output. To use this you need to have provided a list of dictionaries as the trialList parameter of the TrialHandler, and give here the names of the entries in those dictionaries that you want output

dataOut: list of strings

specifying the dataType and the analysis to be performed, in the form dataType_analysis. The data can be any of the types that you added using trialHandler.data.add() and the analysis can be either ‘raw’ or most things in the numpy library, including ‘mean’,’std’,’median’,’max’,’min’. e.g. rt_max will give a column of max reaction times across the trials assuming that rt values have been stored. The default values will output the raw, mean and std of all datatypes found.

appendFile: True or False

If False any existing file with this name will be kept and a new file will be created with a slightly different name. If you want to overwrite the old file, pass ‘overwrite’ to fileCollisionMethod. If True then a new worksheet will be appended. If a worksheet already exists with that name a number will be added to make it unique.

fileCollisionMethod: string

Collision method (‘rename’, ‘overwrite’, or ‘fail’) passed to handleFileCollision(). This is ignored if appendFile is True.

saveAsJson(fileName=None, encoding='utf-8', fileCollisionMethod='rename')[source]

Serialize the object to the JSON format.

Parameters:
  • fileName (string, or None) – the name of the file to create or append. Can include a relative or absolute path. If None, will not write to a file, but return an in-memory JSON object.

  • encoding (string, optional) – The encoding to use when writing the file.

  • fileCollisionMethod (string) – Collision method passed to handleFileCollision(). Can be either of ‘rename’, ‘overwrite’, or ‘fail’.

Notes

Currently, a copy of the object is created, and the copy’s .origin attribute is set to an empty string before serializing because loading the created JSON file would sometimes fail otherwise.

The RNG self._rng cannot be serialized as-is, so we store its state in self._rng_state so we can restore it when loading.

saveAsPickle(fileName, fileCollisionMethod='rename')

Basically just saves a copy of the handler (with data) to a pickle file.

This can be reloaded if necessary and further analyses carried out.

Parameters:

fileCollisionMethod: Collision method passed to handleFileCollision()

saveAsText(fileName, stimOut=None, dataOut=('n', 'all_mean', 'all_std', 'all_raw'), delim=None, matrixOnly=False, appendFile=True, summarised=True, fileCollisionMethod='rename', encoding='utf-8-sig')

Write a text file with the data and various chosen stimulus attributes

Parameters:

fileName:

will have .tsv appended and can include path info.

stimOut:

the stimulus attributes to be output. To use this you need to use a list of dictionaries and give here, as strings, the names of the dictionary keys that you want

dataOut:

a list of strings specifying the dataType and the analysis to be performed, in the form dataType_analysis. The data can be any of the types that you added using trialHandler.data.add() and the analysis can be either ‘raw’ or most things in the numpy library, including ‘mean’, ‘std’, ‘median’, ‘max’, ‘min’… The default values will output the raw, mean and std of all datatypes found

delim:

allows the user to use a delimiter other than tab (“,” is popular with file extension “.csv”)

matrixOnly:

outputs the data with no header row or extraInfo attached

appendFile:

will add this output to the end of the specified file if it already exists

fileCollisionMethod:

Collision method passed to handleFileCollision()

encoding:

The encoding to use when saving the file. Defaults to utf-8-sig.

saveAsWideText(fileName, delim=None, matrixOnly=False, appendFile=True, encoding='utf-8-sig', fileCollisionMethod='rename')[source]

Write a text file with the session, stimulus, and data values from each trial in chronological order. Also returns a pandas DataFrame containing the same information as the file.

That is, unlike ‘saveAsText’ and ‘saveAsExcel’:
  • each row comprises information from only a single trial.

  • no summarising is done (such as collapsing to produce mean and standard deviation values across trials).

This ‘wide’ format, expected by R and various other analysis programs for creating dataframes, means that some information must be repeated on every row.

In particular, if the trialHandler’s ‘extraInfo’ exists, then each entry in there occurs in every row. In Builder, this will include any entries in the ‘Experiment info’ field of the ‘Experiment settings’ dialog. In Coder, this information can be set using something like:

myTrialHandler.extraInfo = {'SubjID': 'Joan Smith',
                            'Group': 'Control'}
Parameters:
fileName:

if extension is not specified, ‘.csv’ will be appended if the delimiter is ‘,’, else ‘.tsv’ will be appended. Can include path info.

delim:

allows the user to use a delimiter other than the default tab (“,” is popular with file extension “.csv”)

matrixOnly:

outputs the data with no header row.

appendFile:

will add this output to the end of the specified file if it already exists.

fileCollisionMethod:

Collision method passed to handleFileCollision()

encoding:

The encoding to use when saving the file. Defaults to utf-8-sig.
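The filename rule above (extension chosen from the delimiter) can be sketched as follows; default_wide_filename is a hypothetical helper for illustration, not PsychoPy's code:

```python
import os

def default_wide_filename(file_name, delim='\t'):
    # keep an explicit extension; otherwise append '.csv' for comma-
    # delimited output and '.tsv' for anything else, per the docs above
    if os.path.splitext(file_name)[1]:
        return file_name
    return file_name + ('.csv' if delim == ',' else '.tsv')

print(default_wide_filename('subj01', ','))  # subj01.csv
print(default_wide_filename('subj01'))       # subj01.tsv
```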

setExp(exp)

Sets the ExperimentHandler that this handler is attached to

Do NOT attempt to set the experiment using:

trials._exp = myExperiment

because it needs to be performed using the weakref module.

property trialAborted

True if the trial has been aborted and should end.

This flag is reset to False on the next call to next().

TrialHandlerExt

class psychopy.data.TrialHandlerExt(trialList, nReps, method='random', dataTypes=None, extraInfo=None, seed=None, originPath=None, name='', autoLog=True)[source]

A class for handling trial sequences in a non-counterbalanced design (i.e. oddball paradigms). Its functions are a superset of the class TrialHandler, and as such, can also be used for normal trial handling.

TrialHandlerExt has the same function names for data storage facilities.

To use non-counterbalanced designs, all TrialType dict entries in the trial list must have a key called “weight”. For example, if you want trial types A, B, C, and D to have 10, 5, 3, and 2 repetitions per block, then the trialList can look like:

[{Name:’A’, …, weight:10},
 {Name:’B’, …, weight:5},
 {Name:’C’, …, weight:3},
 {Name:’D’, …, weight:2}]

For experimenters using an Excel or csv file for the trial list, a column called weight is appropriate for this purpose.
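The block structure implied by the weights above can be written out directly. This is plain Python mimicking the described behaviour, not a call into TrialHandlerExt:

```python
# A weighted trialList as described above; 'weight' gives the number of
# repetitions of that trial type per block.
trial_list = [
    {'Name': 'A', 'weight': 10},
    {'Name': 'B', 'weight': 5},
    {'Name': 'C', 'weight': 3},
    {'Name': 'D', 'weight': 2},
]

# In 'sequential' mode one block presents each type weight times in order
# (a sketch of the behaviour, not the TrialHandlerExt implementation):
block = [t['Name'] for t in trial_list for _ in range(t['weight'])]
print(len(block))  # 20 trials per block
print(block[:3])   # ['A', 'A', 'A']
```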

Calls to .next() will fetch the next trial object given to this handler, according to the method specified (random, sequential, fullRandom). Calls will raise a StopIteration error when all trials are exhausted.

Authored by Suddha Sourav at BPN, Uni Hamburg - heavily borrowing from the TrialHandler class

Parameters:
trialList: a simple list (or flat array) of dictionaries

specifying conditions. This can be imported from an excel / csv file using importConditions() For non-counterbalanced designs, each dict entry in trialList must have a key called weight!

nReps: number of repeats for all conditions. When using a

non-counterbalanced design, nReps is analogous to the number of blocks.

method: ‘random’, ‘sequential’, or ‘fullRandom’

When the weights are not specified: ‘sequential’ presents the conditions in the order they appear in the list. ‘random’ will result in a shuffle of the conditions on each repeat, but all conditions occur once before the second repeat etc. ‘fullRandom’ fully randomises the trials across repeats as well, which means you could potentially run all trials of one condition before any trial of another.

In the presence of weights: ‘sequential’ presents each trial type the number of times specified by its weight, before moving on to the next type. ‘random’ randomizes the presentation order within each block. ‘fullRandom’ shuffles trial order across weights and nReps, that is, a full shuffling.

dataTypes: (optional) list of names for data storage. e.g.

[‘corr’,’rt’,’resp’]. If not provided then these will be created as needed during calls to addData()

extraInfo: A dictionary

This will be stored alongside the data and usually describes the experiment and subject ID, date etc.

seed: an integer

If provided then this fixes the random number generator to use the same pattern of trials, by seeding its startpoint

originPath: a string describing the location of the script /

experiment file path. The psydat file format will store a copy of the experiment if possible. If originPath is None then the TrialHandler will still store a copy of the script where it was created. If originPath is -1 then nothing will be stored.

Attributes (after creation):

.data - a dictionary of numpy arrays, one for each data type stored

.trialList - the original list of dicts, specifying the conditions

.thisIndex - the index of the current trial in the original conditions list

.nTotal - the total number of trials that will be run

.nRemaining - the total number of trials remaining

.thisN - total trials completed so far

.thisRepN - which repeat you are currently on

.thisTrialN - which trial number within that repeat

.thisTrial - a dictionary giving the parameters of the current trial

.finished - True/False for have we finished yet

.extraInfo - the dictionary of extra info as given at beginning

.origin - the contents of the script or builder experiment that created the handler

.trialWeights - None if the weights are not all specified. If all weights are specified, then a list containing the weights of the trial types.

_createOutputArray(stimOut, dataOut, delim=None, matrixOnly=False)

Does the leg-work for saveAsText and saveAsExcel. Combines stimOut with ._parseDataOutput()

_createOutputArrayData(dataOut)[source]

This just creates the dataOut part of the output matrix. It is called by _createOutputArray() which creates the header line and adds the stimOut columns

_createSequence()[source]

Pre-generates the sequence of trial presentations (for non-adaptive methods). This is called automatically when the TrialHandler is initialised so doesn’t need an explicit call from the user.

The returned sequence has the form indices[stimN][repN]. Example: sequential with 6 trial types (rows), 5 reps (columns) returns:

[[0 0 0 0 0]
[1 1 1 1 1]
[2 2 2 2 2]
[3 3 3 3 3]
[4 4 4 4 4]
[5 5 5 5 5]]
These 30 trials will be returned by .next() in the order:

0, 1, 2, 3, 4, 5, 0, 1, 2, … … 3, 4, 5

Example: random, with 3 trialtypes, where the weights of conditions 0,1, and 2 are 3,2, and 1 respectively, and a rep value of 5, might return:

[[0 1 2 0 1]
[1 0 1 1 1]
[0 2 0 0 0]
[0 0 0 1 0]
[2 0 1 0 2]
[1 1 0 2 0]]
These 30 trials will be returned by .next() in the order:

0, 1, 0, 0, 2, 1, 1, 0, 2, 0, 0, 1, … … 0, 2, 0 (after which StopIteration is raised)

To add a new type of sequence (as of v1.65.02):

  • add the sequence generation code here

  • adjust “if self.method in [ …]:” in both __init__ and .next()

  • adjust allowedVals in experiment.py -> shows up in DlgLoopProperties

Note that users can make any sequence whatsoever outside of PsychoPy and specify sequential order; any order is possible this way.

_makeIndices(inputArray)

Creates an array of tuples the same shape as the input array where each tuple contains the indices to itself in the array.

Useful for shuffling and then using as a reference.

_terminate()

Remove references to ourself in experiments and terminate the loop

addData(thisType, value, position=None)[source]

Add data for the current trial

getCurrentTrial()

Returns the condition for the current trial, without advancing the trials.

getCurrentTrialPosInDataHandler()[source]
getEarlierTrial(n=-1)

Returns the condition information from n trials previously. Useful for comparisons in n-back tasks. Returns ‘None’ if trying to access a trial prior to the first.

getExp()

Return the ExperimentHandler that this handler is attached to, if any. Returns None if not attached

getFutureTrial(n=1)

Returns the condition for n trials into the future, without advancing the trials. A negative n returns a previous (past) trial. Returns ‘None’ if attempting to go beyond the last trial.

getNextTrialPosInDataHandler()[source]
getOriginPathAndFile(originPath=None)

Attempts to determine the path of the script that created this data file and returns both the path to that script and its contents. Useful to store the entire experiment with the data.

If originPath is provided (e.g. from Builder) then this is used otherwise the calling script is the originPath (fine from a standard python script).

next()

Advances to the next trial and returns it. Updates the attributes thisTrial, thisTrialN and thisIndex. If the trials have ended this method will raise a StopIteration error. This can be handled with code such as:

trials = data.TrialHandler(.......)
for eachTrial in trials:  # automatically stops when done
    # do stuff

or:

trials = data.TrialHandler(.......)
while True:  # ie forever
    try:
        thisTrial = trials.next()
    except StopIteration:  # we got a StopIteration error
        break  # break out of the forever loop
    # do stuff here for the trial
printAsText(stimOut=None, dataOut=('all_mean', 'all_std', 'all_raw'), delim='\t', matrixOnly=False)

Exactly like saveAsText() except that the output goes to the screen instead of a file

saveAsExcel(fileName, sheetName='rawData', stimOut=None, dataOut=('n', 'all_mean', 'all_std', 'all_raw'), matrixOnly=False, appendFile=True, fileCollisionMethod='rename')

Save a summary data file in Excel OpenXML format workbook (xlsx) for processing in most spreadsheet packages. This format is compatible with versions of Excel (2007 or greater) and with OpenOffice (>=3.0).

It has the advantage over the simpler text files (see TrialHandler.saveAsText() ) that data can be stored in multiple named sheets within the file. So you could have a single file named after your experiment and then have one worksheet for each participant. Or you could have one file for each participant and then multiple sheets for repeated sessions etc.

The file extension .xlsx will be added if not given already.

Parameters:
fileName: string

the name of the file to create or append. Can include relative or absolute path

sheetName: string

the name of the worksheet within the file

stimOut: list of strings

the attributes of the trial characteristics to be output. To use this you need to have provided a list of dictionaries as the trialList parameter of the TrialHandler; give here, as strings, the names of the entries in those dictionaries that you want output

dataOut: list of strings

specifying the dataType and the analysis to be performed, in the form dataType_analysis. The data can be any of the types that you added using trialHandler.data.add() and the analysis can be either ‘raw’ or most things in the numpy library, including ‘mean’,’std’,’median’,’max’,’min’. e.g. rt_max will give a column of max reaction times across the trials assuming that rt values have been stored. The default values will output the raw, mean and std of all datatypes found.

appendFile: True or False

If False any existing file with this name will be kept and a new file will be created with a slightly different name. If you want to overwrite the old file, pass ‘overwrite’ to fileCollisionMethod. If True then a new worksheet will be appended. If a worksheet already exists with that name a number will be added to make it unique.

fileCollisionMethod: string

Collision method (‘rename’, ‘overwrite’, or ‘fail’) passed to handleFileCollision(). This is ignored if appendFile is True.

saveAsJson(fileName=None, encoding='utf-8', fileCollisionMethod='rename')[source]

Serialize the object to the JSON format.

Parameters:
  • fileName (string, or None) – the name of the file to create or append. Can include a relative or absolute path. If None, will not write to a file, but return an in-memory JSON object.

  • encoding (string, optional) – The encoding to use when writing the file.

  • fileCollisionMethod (string) – Collision method passed to handleFileCollision(). Can be either of ‘rename’, ‘overwrite’, or ‘fail’.

Notes

Currently, a copy of the object is created, and the copy’s .origin attribute is set to an empty string before serializing because loading the created JSON file would sometimes fail otherwise.

saveAsPickle(fileName, fileCollisionMethod='rename')

Basically just saves a copy of the handler (with data) to a pickle file.

This can be reloaded if necessary and further analyses carried out.

Parameters:

fileCollisionMethod: Collision method passed to handleFileCollision()

saveAsText(fileName, stimOut=None, dataOut=('n', 'all_mean', 'all_std', 'all_raw'), delim=None, matrixOnly=False, appendFile=True, summarised=True, fileCollisionMethod='rename', encoding='utf-8-sig')

Write a text file with the data and various chosen stimulus attributes

Parameters:

fileName:

will have .tsv appended and can include path info.

stimOut:

the stimulus attributes to be output. To use this you need to use a list of dictionaries and give here, as strings, the names of the dictionary keys that you want

dataOut:

a list of strings specifying the dataType and the analysis to be performed, in the form dataType_analysis. The data can be any of the types that you added using trialHandler.data.add() and the analysis can be either ‘raw’ or most things in the numpy library, including ‘mean’, ‘std’, ‘median’, ‘max’, ‘min’… The default values will output the raw, mean and std of all datatypes found

delim:

allows the user to use a delimiter other than tab (“,” is popular with file extension “.csv”)

matrixOnly:

outputs the data with no header row or extraInfo attached

appendFile:

will add this output to the end of the specified file if it already exists

fileCollisionMethod:

Collision method passed to handleFileCollision()

encoding:

The encoding to use when saving the file. Defaults to utf-8-sig.

saveAsWideText(fileName, delim='\t', matrixOnly=False, appendFile=True, encoding='utf-8-sig', fileCollisionMethod='rename')[source]

Write a text file with the session, stimulus, and data values from each trial in chronological order.

That is, unlike ‘saveAsText’ and ‘saveAsExcel’:
  • each row comprises information from only a single trial.

  • no summarizing is done (such as collapsing to produce mean and standard deviation values across trials).

This ‘wide’ format, expected by R and various other analysis programs for creating dataframes, means that some information must be repeated on every row.

In particular, if the trialHandler’s ‘extraInfo’ exists, then each entry in there occurs in every row. In Builder, this will include any entries in the ‘Experiment info’ field of the ‘Experiment settings’ dialog. In Coder, this information can be set using something like:

myTrialHandler.extraInfo = {'SubjID':'Joan Smith',
                            'Group':'Control'}
Parameters:
fileName:

if extension is not specified, ‘.csv’ will be appended if the delimiter is ‘,’, else ‘.txt’ will be appended. Can include path info.

delim:

allows the user to use a delimiter other than the default tab (“,” is popular with file extension “.csv”)

matrixOnly:

outputs the data with no header row.

appendFile:

will add this output to the end of the specified file if it already exists.

fileCollisionMethod:

Collision method passed to handleFileCollision()

encoding:

The encoding to use when saving a the file. Defaults to utf-8-sig.

setExp(exp)

Sets the ExperimentHandler that this handler is attached to

Do NOT attempt to set the experiment using:

trials._exp = myExperiment

because it needs to be performed using the weakref module.

StairHandler

class psychopy.data.StairHandler(startVal, nReversals=None, stepSizes=4, nTrials=0, nUp=1, nDown=3, applyInitialRule=True, extraInfo=None, method='2AFC', stepType='db', minVal=None, maxVal=None, originPath=None, name='', autoLog=True, **kwargs)[source]

Class to handle smoothly the selection of the next trial and report current values etc. Calls to next() will fetch the next object given to this handler, according to the method specified.

See Demos >> ExperimentalControl >> JND_staircase_exp.py

The staircase will terminate when nTrials AND nReversals have been exceeded. If stepSizes was an array and has been exceeded before nTrials is exceeded then the staircase will continue to reverse.

nUp and nDown are always considered as 1 until the first reversal is reached. The values entered as arguments are then used.

Parameters:
startVal:

The initial value for the staircase.

nReversals:

The minimum number of reversals permitted. If stepSizes is a list, but the minimum number of reversals to perform, nReversals, is less than the length of this list, PsychoPy will automatically increase the minimum number of reversals and emit a warning. This minimum number of reversals is always set to be greater than 0.

stepSizes:

The size of steps as a single value or a list (or array). For a single value the step size is fixed. For an array or list the step size will progress to the next entry at each reversal.

nTrials:

The minimum number of trials to be conducted. If the staircase has not reached the required number of reversals then it will continue.

nUp:

The number of ‘incorrect’ (or 0) responses before the staircase level increases.

nDown:

The number of ‘correct’ (or 1) responses before the staircase level decreases.

applyInitialRule: bool

Whether to apply a 1-up/1-down rule until the first reversal point (if True), before switching to the specified up/down rule.

extraInfo:

A dictionary (typically) that will be stored along with collected data using saveAsPickle() or saveAsText() methods.

method:

Not used and may be deprecated in future releases.

stepType: ‘db’, ‘lin’, ‘log’

The type of steps that should be taken each time. ‘lin’ will simply add or subtract that amount each step, ‘db’ and ‘log’ will step by a certain number of decibels or log units (note that this will prevent your value ever reaching zero or less)

minVal: None, or a number

The smallest legal value for the staircase, which can be used to prevent it reaching impossible contrast values, for instance.

maxVal: None, or a number

The largest legal value for the staircase, which can be used to prevent it reaching impossible contrast values, for instance.

Additional keyword arguments will be ignored.

Notes:

The additional keyword arguments **kwargs might for example be passed by the MultiStairHandler, which expects a label keyword for each staircase. These parameters are to be ignored by the StairHandler.
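The n-up/n-down rule described above can be illustrated with a stripped-down sketch in plain Python. It deliberately ignores the initial 1-up/1-down rule, step-size schedules, stepType, and the minVal/maxVal bounds that the real StairHandler applies:

```python
def run_staircase(responses, start=10, step=1, n_up=1, n_down=3):
    """Return the level presented on each trial under an n-up/n-down rule."""
    level, n_correct, n_incorrect = start, 0, 0
    levels = []
    for correct in responses:
        levels.append(level)
        if correct:
            n_correct, n_incorrect = n_correct + 1, 0
            if n_correct == n_down:    # nDown correct responses: decrease
                level -= step
                n_correct = 0
        else:
            n_incorrect, n_correct = n_incorrect + 1, 0
            if n_incorrect == n_up:    # nUp incorrect responses: increase
                level += step
                n_incorrect = 0
    return levels

print(run_staircase([1, 1, 1, 0, 1, 1, 1]))  # [10, 10, 10, 9, 10, 10, 10]
```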

_intensityDec()[source]

decrement the current intensity and reset counter

_intensityInc()[source]

increment the current intensity and reset counter

_terminate()

Remove references to ourself in experiments and terminate the loop

addData(result, intensity=None)[source]

Deprecated since 1.79.00: This function name was ambiguous. Please use one of these instead:

  • .addResponse(result, intensity)

  • .addOtherData(‘dataName’, value)

addOtherData(dataName, value)[source]

Add additional data to the handler, to be tracked alongside the result data but not affecting the value of the staircase

addResponse(result, intensity=None)[source]

Add a 1 or 0 to signify a correct / detected or incorrect / missed trial.

This is essential to advance the staircase to a new intensity level!

Supplying an intensity value here indicates that you did not use the recommended intensity in your last trial and the staircase will replace its recorded value with the one you supplied here.

calculateNextIntensity()[source]

Based on current intensity, counter of correct responses, and current direction.

getExp()

Return the ExperimentHandler that this handler is attached to, if any. Returns None if not attached

getOriginPathAndFile(originPath=None)

Attempts to determine the path of the script that created this data file and returns both the path to that script and its contents. Useful to store the entire experiment with the data.

If originPath is provided (e.g. from Builder) then this is used otherwise the calling script is the originPath (fine from a standard python script).

property intensity

The intensity (level) of the current staircase

next()

Advances to next trial and returns it. Updates attributes; thisTrial, thisTrialN and thisIndex.

If the trials have ended, calling this method will raise a StopIteration error. This can be handled with code such as:

staircase = data.StairHandler(.......)
for eachTrial in staircase:  # automatically stops when done
    # do stuff

or:

staircase = data.StairHandler(.......)
while True:  # ie forever
    try:
        thisTrial = staircase.next()
    except StopIteration:  # we got a StopIteration error
        break  # break out of the forever loop
    # do stuff here for the trial
printAsText(stimOut=None, dataOut=('all_mean', 'all_std', 'all_raw'), delim='\t', matrixOnly=False)

Exactly like saveAsText() except that the output goes to the screen instead of a file

saveAsExcel(fileName, sheetName='data', matrixOnly=False, appendFile=True, fileCollisionMethod='rename')[source]

Save a summary data file in Excel OpenXML format workbook (xlsx) for processing in most spreadsheet packages. This format is compatible with versions of Excel (2007 or greater) and with OpenOffice (>=3.0).

It has the advantage over the simpler text files (see TrialHandler.saveAsText() ) that data can be stored in multiple named sheets within the file. So you could have a single file named after your experiment and then have one worksheet for each participant. Or you could have one file for each participant and then multiple sheets for repeated sessions etc.

The file extension .xlsx will be added if not given already.

The file will contain a set of values specifying the staircase level (‘intensity’) at each reversal, a list of reversal indices (trial numbers), the raw staircase / intensity level on every trial and the corresponding responses of the participant on every trial.

Parameters:
fileName: string

the name of the file to create or append. Can include relative or absolute path.

sheetName: string

the name of the worksheet within the file

matrixOnly: True or False

If set to True then only the data itself will be output (no additional info)

appendFile: True or False

If False any existing file with this name will be overwritten. If True then a new worksheet will be appended. If a worksheet already exists with that name a number will be added to make it unique.

fileCollisionMethod: string

Collision method passed to handleFileCollision() This is ignored if appendFile is True.

saveAsJson(fileName=None, encoding='utf-8-sig', fileCollisionMethod='rename')

Serialize the object to the JSON format.

Parameters:
  • fileName (string, or None) – the name of the file to create or append. Can include a relative or absolute path. If None, will not write to a file, but return an in-memory JSON object.

  • encoding (string, optional) – The encoding to use when writing the file.

  • fileCollisionMethod (string) – Collision method passed to handleFileCollision(). Can be either of ‘rename’, ‘overwrite’, or ‘fail’.

Notes

Currently, a copy of the object is created, and the copy’s .origin attribute is set to an empty string before serializing because loading the created JSON file would sometimes fail otherwise.

saveAsPickle(fileName, fileCollisionMethod='rename')[source]

Basically just saves a copy of self (with data) to a pickle file.

This can be reloaded if necessary and further analyses carried out.

Parameters:

fileCollisionMethod: Collision method passed to handleFileCollision()

saveAsText(fileName, delim=None, matrixOnly=False, fileCollisionMethod='rename', encoding='utf-8-sig')[source]

Write a text file with the data

Parameters:
fileName: a string

The name of the file, including path if needed. The extension .tsv will be added if not included.

delim: a string

the delimiter to be used (e.g. ‘\t’ for tab-delimited, ‘,’ for csv files)

matrixOnly: True/False

If True, prevents the output of the extraInfo provided at initialisation.

fileCollisionMethod:

Collision method passed to handleFileCollision()

encoding:

The encoding to use when saving the file. Defaults to utf-8-sig.

setExp(exp)

Sets the ExperimentHandler that this handler is attached to

Do NOT attempt to set the experiment using:

trials._exp = myExperiment

because it needs to be performed using the weakref module.

PsiHandler

class psychopy.data.PsiHandler(nTrials, intensRange, alphaRange, betaRange, intensPrecision, alphaPrecision, betaPrecision, delta, stepType='lin', expectedMin=0.5, prior=None, fromFile=False, extraInfo=None, name='')[source]

Handler to implement the “Psi” adaptive psychophysical method (Kontsevich & Tyler, 1999).

This implementation assumes the form of the psychometric function to be a cumulative Gaussian. Psi estimates the two free parameters of the psychometric function, the location (alpha) and slope (beta), using Bayes’ rule and grid approximation of the posterior distribution. It chooses stimuli to present by minimizing the entropy of this grid. Because this grid is represented internally as a 4-D array, one must choose the intensity, alpha, and beta ranges carefully so as to avoid a Memory Error. Maximum likelihood is used to estimate Lambda, the most likely location/slope pair. Because Psi estimates the entire psychometric function, any threshold defined on the function may be estimated once Lambda is determined.

It is advised that Lambda estimates are examined after completion of the Psi procedure. If the estimated alpha or beta values equal your specified search bounds, then the search range most likely did not contain the true value. In this situation the procedure should be repeated with appropriately adjusted bounds.

Because Psi is a Bayesian method, it can be initialized with a prior from existing research. A function to save the posterior over Lambda as a Numpy binary file is included.

Kontsevich & Tyler (1999) specify their psychometric function in terms of d’. PsiHandler avoids this and treats all parameters with respect to stimulus intensity. Specifically, the forms of the psychometric function assumed for Yes/No and Two Alternative Forced Choice (2AFC) are, respectively:

_normCdf = norm.cdf(x, mean=alpha, sd=beta)

Yes/No: Y(x) = .5 * delta + (1 - delta) * _normCdf

2AFC: Y(x) = .5 * delta + (1 - delta) * (.5 + .5 * _normCdf)
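Written out in code, the two forms are as below, using a cumulative Gaussian built from math.erf. This is an illustration of the formulas only, not PsychoPy's internal Psi object:

```python
import math

def norm_cdf(x, mean, sd):
    """Cumulative Gaussian, matching norm.cdf(x, mean, sd) above."""
    return 0.5 * (1.0 + math.erf((x - mean) / (sd * math.sqrt(2.0))))

def pmf_yes_no(x, alpha, beta, delta):
    # Yes/No: Y(x) = .5 * delta + (1 - delta) * _normCdf
    return 0.5 * delta + (1.0 - delta) * norm_cdf(x, alpha, beta)

def pmf_2afc(x, alpha, beta, delta):
    # 2AFC: Y(x) = .5 * delta + (1 - delta) * (.5 + .5 * _normCdf)
    return 0.5 * delta + (1.0 - delta) * (0.5 + 0.5 * norm_cdf(x, alpha, beta))

print(pmf_yes_no(0.0, alpha=0.0, beta=1.0, delta=0.0))  # 0.5 at the location
print(pmf_2afc(0.0, alpha=0.0, beta=1.0, delta=0.0))    # 0.75 at the location
```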

Initializes the handler and creates an internal Psi Object for grid approximation.

Parameters:
nTrials (int)

The number of trials to run.

intensRange (list)

Two element list containing the (inclusive) endpoints of the stimuli intensity range.

alphaRange (list)

Two element list containing the (inclusive) endpoints of the alpha (location parameter) range.

betaRange (list)

Two element list containing the (inclusive) endpoints of the beta (slope parameter) range.

intensPrecision (float or int)

If stepType == ‘lin’, this specifies the step size of the stimuli intensity range. If stepType == ‘log’, this specifies the number of steps in the stimuli intensity range.

alphaPrecision (float)

The step size of the alpha (location parameter) range.

betaPrecision (float)

The step size of the beta (slope parameter) range.

delta (float)

The guess rate.

stepType (str)

The type of steps to be used when constructing the stimuli intensity range. If ‘lin’ then evenly spaced steps are used. If ‘log’ then logarithmically spaced steps are used. Defaults to ‘lin’.

expectedMin (float)

The expected lower asymptote of the psychometric function (PMF).

For a Yes/No task, the PMF usually extends across the interval [0, 1]; here, expectedMin should be set to 0.

For a 2-AFC task, the PMF spreads out across [0.5, 1.0]. Therefore, expectedMin should be set to 0.5 in this case, and the 2-AFC psychometric function described above will be used.

Currently, only Yes/No and 2-AFC designs are supported.

Defaults to 0.5, i.e. a 2-AFC task.

prior (numpy ndarray or str)

Optional prior distribution with which to initialize the Psi Object. This can either be a numpy ndarray object or the path to a numpy binary file (.npy) containing the ndarray.

fromFile (bool)

Flag specifying whether prior is a file pathname or not.

extraInfo (dict)

Optional dictionary object used in PsychoPy’s built-in logging system.

name (str)

Optional name for the PsiHandler used in PsychoPy’s built-in logging system.

Raises:
NotImplementedError

If the supplied minVal parameter implies an experimental design other than Yes/No or 2-AFC.

_checkFinished()[source]

checks if we are finished. Updates attribute: finished

_intensityDec()

decrement the current intensity and reset counter

_intensityInc()

increment the current intensity and reset counter

_terminate()

Remove references to ourself in experiments and terminate the loop

addData(result, intensity=None)

Deprecated since 1.79.00: This function name was ambiguous. Please use one of these instead:

  • .addResponse(result, intensity)

  • .addOtherData(‘dataName’, value)

addOtherData(dataName, value)

Add additional data to the handler, to be tracked alongside the result data but not affecting the value of the staircase

addResponse(result, intensity=None)[source]

Add a 1 or 0 to signify a correct / detected or incorrect / missed trial. Supplying an intensity value here indicates that you did not use the recommended intensity in your last trial and the staircase will replace its recorded value with the one you supplied here.

calculateNextIntensity()

Based on current intensity, counter of correct responses, and current direction.

estimateLambda()[source]

Returns a tuple of (location, slope)

estimateThreshold(thresh, lamb=None)[source]

Returns an intensity estimate for the provided probability.

The optional argument ‘lamb’ allows thresholds to be estimated without having to recompute the maximum likelihood lambda.
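For the Yes/No form given in the class description, a threshold can be read off a fitted (alpha, beta) pair by inverting the psychometric function. The sketch below uses the standard library and is an illustration of that inversion, not PsychoPy's estimateThreshold itself:

```python
from statistics import NormalDist

def threshold_from_lambda(p, alpha, beta, delta):
    """Intensity at which the Yes/No function reaches probability p."""
    # invert Y(x) = .5 * delta + (1 - delta) * Phi((x - alpha) / beta)
    phi = (p - 0.5 * delta) / (1.0 - delta)
    return NormalDist(mu=alpha, sigma=beta).inv_cdf(phi)

print(threshold_from_lambda(0.5, alpha=2.0, beta=1.0, delta=0.0))  # 2.0
```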

getExp()

Return the ExperimentHandler that this handler is attached to, if any. Returns None if not attached

getOriginPathAndFile(originPath=None)

Attempts to determine the path of the script that created this data file and returns both the path to that script and its contents. Useful to store the entire experiment with the data.

If originPath is provided (e.g. from Builder) then this is used; otherwise the calling script is the originPath (fine from a standard python script).

property intensity

The intensity (level) of the current staircase

next()

Advances to next trial and returns it.

printAsText(stimOut=None, dataOut=('all_mean', 'all_std', 'all_raw'), delim='\t', matrixOnly=False)

Exactly like saveAsText() except that the output goes to the screen instead of a file

saveAsExcel(fileName, sheetName='data', matrixOnly=False, appendFile=True, fileCollisionMethod='rename')

Save a summary data file in Excel OpenXML format workbook (xlsx) for processing in most spreadsheet packages. This format is compatible with versions of Excel (2007 or greater) and with OpenOffice (>=3.0).

It has the advantage over the simpler text files (see TrialHandler.saveAsText() ) that data can be stored in multiple named sheets within the file. So you could have a single file named after your experiment and then have one worksheet for each participant. Or you could have one file for each participant and then multiple sheets for repeated sessions etc.

The file extension .xlsx will be added if not given already.

The file will contain a set of values specifying the staircase level (‘intensity’) at each reversal, a list of reversal indices (trial numbers), the raw staircase / intensity level on every trial and the corresponding responses of the participant on every trial.

Parameters:
fileName: string

the name of the file to create or append. Can include relative or absolute path.

sheetName: string

the name of the worksheet within the file

matrixOnly: True or False

If set to True then only the data itself will be output (no additional info)

appendFile: True or False

If False any existing file with this name will be overwritten. If True then a new worksheet will be appended. If a worksheet already exists with that name a number will be added to make it unique.

fileCollisionMethod: string

Collision method passed to handleFileCollision(). This is ignored if appendFile is True.

saveAsJson(fileName=None, encoding='utf-8-sig', fileCollisionMethod='rename')

Serialize the object to the JSON format.

Parameters:
  • fileName (string, or None) – the name of the file to create or append. Can include a relative or absolute path. If None, will not write to a file, but return an in-memory JSON object.

  • encoding (string, optional) – The encoding to use when writing the file.

  • fileCollisionMethod (string) – Collision method passed to handleFileCollision(). Can be either of ‘rename’, ‘overwrite’, or ‘fail’.

Notes

Currently, a copy of the object is created, and the copy’s .origin attribute is set to an empty string before serializing because loading the created JSON file would sometimes fail otherwise.

saveAsPickle(fileName, fileCollisionMethod='rename')

Basically just saves a copy of self (with data) to a pickle file.

This can be reloaded if necessary and further analyses carried out.

Parameters:

fileCollisionMethod: Collision method passed to handleFileCollision()

saveAsText(fileName, delim=None, matrixOnly=False, fileCollisionMethod='rename', encoding='utf-8-sig')

Write a text file with the data

Parameters:
fileName: a string

The name of the file, including path if needed. The extension .tsv will be added if not included.

delim: a string

the delimiter to be used (e.g. ‘\t’ for tab-delimited, ‘,’ for csv files)

matrixOnly: True/False

If True, prevents the output of the extraInfo provided at initialisation.

fileCollisionMethod:

Collision method passed to handleFileCollision()

encoding:

The encoding to use when saving the file. Defaults to utf-8-sig.

savePosterior(fileName, fileCollisionMethod='rename')[source]

Saves the posterior array over probLambda as a pickle file with the specified name.

Parameters:

fileCollisionMethod (string) – Collision method passed to handleFileCollision()

setExp(exp)

Sets the ExperimentHandler that this handler is attached to

Do NOT attempt to set the experiment using:

trials._exp = myExperiment

because it needs to be performed using the weakref module.

QuestHandler

class psychopy.data.QuestHandler(startVal, startValSd, pThreshold=0.82, nTrials=None, stopInterval=None, method='quantile', beta=3.5, delta=0.01, gamma=0.5, grain=0.01, range=None, extraInfo=None, minVal=None, maxVal=None, staircase=None, originPath=None, name='', autoLog=True, **kwargs)[source]

Class that implements the Quest algorithm for quick measurement of psychophysical thresholds.

Uses Andrew Straw’s QUEST, which is a Python port of Denis Pelli’s Matlab code.

Measures threshold using a Weibull psychometric function. Currently, it is not possible to use a different psychometric function.

The Weibull psychometric function is given by the formula

\(\Psi(x) = \delta \gamma + (1 - \delta) [1 - (1 - \gamma)\, \exp(-10^{\beta (x - T + \epsilon)})]\)

Here, \(x\) is an intensity or a contrast (in log10 units), and \(T\) is the estimated threshold.

Quest internally shifts the psychometric function such that intensity at the user-specified threshold performance level pThreshold (e.g., 50% in a yes-no or 75% in a 2-AFC task) is equal to 0. The parameter \(\epsilon\) is responsible for this shift, and is determined automatically based on the specified pThreshold value. It is the parameter Watson & Pelli (1983) introduced to perform measurements at the “optimal sweat factor”. Assuming your QuestHandler instance is called q, you can retrieve this value via q.epsilon.
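As a quick sanity check, the formula above can be implemented directly in plain Python (no PsychoPy required; the function name weibull is just for illustration). At very low intensities performance approaches the guess rate \(\gamma\), and at very high intensities it approaches \(1 - \delta(1 - \gamma)\):

```python
import math

def weibull(x, T, beta=3.5, delta=0.01, gamma=0.5, epsilon=0.0):
    """The Weibull psychometric function above; x and T are in log10 units."""
    return (delta * gamma
            + (1 - delta)
            * (1 - (1 - gamma) * math.exp(-10 ** (beta * (x - T + epsilon)))))

# far below threshold: performance ~ gamma (the guess rate)
print(round(weibull(-10, T=0.0), 4))  # 0.5
# far above threshold: performance ~ 1 - delta * (1 - gamma)
print(round(weibull(10, T=0.0), 4))   # 0.995
```

With the default delta=0.01 and gamma=0.5, the upper asymptote is 0.995 rather than 1, reflecting the lapse rate delta.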

Example:

# setup display/window
...
# create stimulus
stimulus = visual.RadialStim(win=win, tex='sinXsin', size=1,
                             pos=[0,0], units='deg')
...
# create staircase object
# trying to find out the contrast where subject gets 63% correct
# if wanted to do a 2AFC then the defaults for pThreshold and gamma
# are good. As start value, we'll use 50% contrast, with SD = 20%
staircase = data.QuestHandler(0.5, 0.2,
    pThreshold=0.63, gamma=0.01,
    nTrials=20, minVal=0, maxVal=1)
...
for thisContrast in staircase:
    # setup stimulus
    stimulus.setContrast(thisContrast)
    stimulus.draw()
    win.flip()
    core.wait(0.5)
    # get response
    ...
    # inform QUEST of the response, needed to calculate next level
    staircase.addResponse(thisResp)
...
# can now access 1 of 3 suggested threshold levels
staircase.mean()
staircase.mode()
staircase.quantile(0.5)  # gets the median
Typical values for pThreshold are:
  • 0.82 which is equivalent to a 3 up 1 down standard staircase

  • 0.63 which is equivalent to a 1 up 1 down standard staircase

    (and might want gamma=0.01)

The variable(s) nTrials and/or stopInterval must be specified.
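When both stopping criteria are given, whichever is satisfied first ends the run. That interaction can be sketched in plain Python (a stand-in illustration, not PsychoPy code; shouldStop is a hypothetical name):

```python
def shouldStop(nDone, ciWidth, nTrials=None, stopInterval=None):
    """Stop when either the trial budget or the CI criterion is met first."""
    budgetSpent = nTrials is not None and nDone >= nTrials
    ciNarrowEnough = stopInterval is not None and ciWidth <= stopInterval
    return budgetSpent or ciNarrowEnough

print(shouldStop(nDone=20, ciWidth=0.5, nTrials=20, stopInterval=0.1))  # True
print(shouldStop(nDone=5, ciWidth=0.05, nTrials=20, stopInterval=0.1))  # True
print(shouldStop(nDone=5, ciWidth=0.5, nTrials=20, stopInterval=0.1))   # False
```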

beta, delta, and gamma are the parameters of the Weibull psychometric function.

Parameters:
startVal:

Prior threshold estimate or your initial guess threshold.

startValSd:

Standard deviation of your starting guess threshold. Be generous with the sd as QUEST will have trouble finding the true threshold if it’s more than one sd from your initial guess.

pThreshold

Your threshold criterion expressed as probability of response==1. An intensity offset is introduced into the psychometric function so that the threshold (i.e., the midpoint of the table) yields pThreshold.

nTrials: None or a number

The maximum number of trials to be conducted.

stopInterval: None or a number

The minimum 5-95% confidence interval required in the threshold estimate before stopping. If both this and nTrials is specified, whichever happens first will determine when Quest will stop.

method: ‘quantile’, ‘mean’, ‘mode’

The method used to determine the next threshold to test. If you want to get a specific threshold level at the end of your staircasing, please use the quantile, mean, and mode methods directly.

beta: 3.5 or a number

Controls the steepness of the psychometric function.

delta: 0.01 or a number

The fraction of trials on which the observer presses blindly.

gamma: 0.5 or a number

The fraction of trials that will generate response 1 when intensity=-Inf.

grain: 0.01 or a number

The quantization of the internal table.

range: None, or a number

The intensity difference between the largest and smallest intensity that the internal table can store. This interval will be centered on the initial guess tGuess. QUEST assumes that intensities outside of this range have zero prior probability (i.e., they are impossible).

extraInfo:

A dictionary (typically) that will be stored along with collected data using saveAsPickle() or saveAsText() methods.

minVal: None, or a number

The smallest legal value for the staircase, which can be used to prevent it reaching impossible contrast values, for instance.

maxVal: None, or a number

The largest legal value for the staircase, which can be used to prevent it reaching impossible contrast values, for instance.

staircase: None or StairHandler

Can supply a staircase object with intensities and results. Might be useful to give the quest algorithm more information if you have it. You can also call the importData function directly.

Additional keyword arguments will be ignored.

Notes:

The additional keyword arguments **kwargs might for example be passed by the MultiStairHandler, which expects a label keyword for each staircase. These parameters are to be ignored by the StairHandler.
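Taken together, the grain and range parameters above determine QUEST’s internal lookup table: a grid of candidate intensities of total width range, quantized in steps of grain and centred on the initial guess. A rough sketch of such a grid (an illustration only, not QUEST’s actual internals; buildTable is a hypothetical name):

```python
def buildTable(tGuess, tableRange=4.0, grain=0.01):
    """Candidate-intensity grid of width tableRange, step grain,
    centred on the prior threshold guess tGuess (illustration only)."""
    nSteps = int(round(tableRange / grain)) + 1
    return [tGuess - tableRange / 2 + i * grain for i in range(nSteps)]

table = buildTable(tGuess=0.5)
print(len(table))  # 401 candidate intensities spanning -1.5 to 2.5
```

Intensities outside this grid are treated by QUEST as having zero prior probability, so range should comfortably cover any plausible threshold.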

_checkFinished()[source]

Checks if we are finished. Updates attribute: finished

_intensity()[source]

assigns the next intensity level

_intensityDec()

decrement the current intensity and reset counter

_intensityInc()

increment the current intensity and reset counter

_terminate()

Remove references to ourself in experiments and terminate the loop

addData(result, intensity=None)

Deprecated since 1.79.00: This function name was ambiguous. Please use one of these instead:

  • .addResponse(result, intensity)

  • .addOtherData(‘dataName’, value)

addOtherData(dataName, value)

Add additional data to the handler, to be tracked alongside the result data but not affecting the value of the staircase

addResponse(result, intensity=None)[source]

Add a 1 or 0 to signify a correct / detected or incorrect / missed trial

Supplying an intensity value here indicates that you did not use the recommended intensity in your last trial and the staircase will replace its recorded value with the one you supplied here.
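The override behaviour can be illustrated with a minimal stand-in class (plain Python, not the real handler; MiniStaircase is hypothetical): when intensity is supplied, it replaces the level the staircase had recommended before the response is stored.

```python
class MiniStaircase:
    """Toy stand-in for the addResponse(result, intensity) override."""
    def __init__(self, startVal):
        self.intensities = [startVal]  # levels actually presented
        self.data = []                 # responses (1 or 0)

    def addResponse(self, result, intensity=None):
        if intensity is not None:
            # the recommended level was not used: record what was shown
            self.intensities[-1] = intensity
        self.data.append(result)

stairs = MiniStaircase(startVal=0.5)
stairs.addResponse(1, intensity=0.45)  # actually presented 0.45, not 0.5
print(stairs.intensities)  # [0.45]
```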

property beta
calculateNextIntensity()[source]

based on current intensity and counter of correct responses

confInterval(getDifference=False)[source]

Return estimate for the 5%–95% confidence interval (CI).

Parameters:
getDifference (bool)

If True, return the width of the confidence interval (95% - 5% percentiles). If False, return a NumPy array with estimates for the 5% and 95% boundaries.

Returns:

scalar or array of length 2.

property delta
property epsilon
property gamma
getExp()

Return the ExperimentHandler that this handler is attached to, if any. Returns None if not attached

getOriginPathAndFile(originPath=None)

Attempts to determine the path of the script that created this data file and returns both the path to that script and its contents. Useful to store the entire experiment with the data.

If originPath is provided (e.g. from Builder) then this is used; otherwise the calling script is the originPath (fine from a standard python script).

property grain
importData(intensities, results)[source]

import some data which wasn’t previously given to the quest algorithm

incTrials(nNewTrials)[source]

Increase the maximum number of trials. Updates attribute: nTrials

property intensity

The intensity (level) of the current staircase

mean()[source]

mean of Quest posterior pdf

mode()[source]

mode of Quest posterior pdf

next()

Advances to next trial and returns it. Updates attributes: thisTrial, thisTrialN, thisIndex, finished, intensities

If the trials have ended, calling this method will raise a StopIteration error. This can be handled with code such as:

staircase = data.QuestHandler(.......)
for eachTrial in staircase:  # automatically stops when done
    # do stuff

or:

staircase = data.QuestHandler(.......)
while True:  # i.e. forever
    try:
        thisTrial = staircase.next()
    except StopIteration:  # we got a StopIteration error
        break  # break out of the forever loop
    # do stuff here for the trial
printAsText(stimOut=None, dataOut=('all_mean', 'all_std', 'all_raw'), delim='\t', matrixOnly=False)

Exactly like saveAsText() except that the output goes to the screen instead of a file

quantile(p=None)[source]

quantile of Quest posterior pdf

property range
saveAsExcel(fileName, sheetName='data', matrixOnly=False, appendFile=True, fileCollisionMethod='rename')

Save a summary data file in Excel OpenXML format workbook (xlsx) for processing in most spreadsheet packages. This format is compatible with versions of Excel (2007 or greater) and with OpenOffice (>=3.0).

It has the advantage over the simpler text files (see TrialHandler.saveAsText() ) that data can be stored in multiple named sheets within the file. So you could have a single file named after your experiment and then have one worksheet for each participant. Or you could have one file for each participant and then multiple sheets for repeated sessions etc.

The file extension .xlsx will be added if not given already.

The file will contain a set of values specifying the staircase level (‘intensity’) at each reversal, a list of reversal indices (trial numbers), the raw staircase / intensity level on every trial and the corresponding responses of the participant on every trial.

Parameters:
fileName: string

the name of the file to create or append. Can include relative or absolute path.

sheetName: string

the name of the worksheet within the file

matrixOnly: True or False

If set to True then only the data itself will be output (no additional info)

appendFile: True or False

If False any existing file with this name will be overwritten. If True then a new worksheet will be appended. If a worksheet already exists with that name a number will be added to make it unique.

fileCollisionMethod: string

Collision method passed to handleFileCollision(). This is ignored if appendFile is True.

saveAsJson(fileName=None, encoding='utf-8-sig', fileCollisionMethod='rename')

Serialize the object to the JSON format.

Parameters:
  • fileName (string, or None) – the name of the file to create or append. Can include a relative or absolute path. If None, will not write to a file, but return an in-memory JSON object.

  • encoding (string, optional) – The encoding to use when writing the file.

  • fileCollisionMethod (string) – Collision method passed to handleFileCollision(). Can be either of ‘rename’, ‘overwrite’, or ‘fail’.

Notes

Currently, a copy of the object is created, and the copy’s .origin attribute is set to an empty string before serializing because loading the created JSON file would sometimes fail otherwise.

saveAsPickle(fileName, fileCollisionMethod='rename')

Basically just saves a copy of self (with data) to a pickle file.

This can be reloaded if necessary and further analyses carried out.

Parameters:

fileCollisionMethod: Collision method passed to handleFileCollision()

saveAsText(fileName, delim=None, matrixOnly=False, fileCollisionMethod='rename', encoding='utf-8-sig')

Write a text file with the data

Parameters:
fileName: a string

The name of the file, including path if needed. The extension .tsv will be added if not included.

delim: a string

the delimiter to be used (e.g. ‘\t’ for tab-delimited, ‘,’ for csv files)

matrixOnly: True/False

If True, prevents the output of the extraInfo provided at initialisation.

fileCollisionMethod:

Collision method passed to handleFileCollision()

encoding:

The encoding to use when saving the file. Defaults to utf-8-sig.

sd()[source]

standard deviation of Quest posterior pdf

setExp(exp)

Sets the ExperimentHandler that this handler is attached to

Do NOT attempt to set the experiment using:

trials._exp = myExperiment

because it needs to be performed using the weakref module.

simulate(tActual)[source]

Returns a simulated user response to the next intensity level presented by Quest; you need to supply the actual threshold level.

QuestPlusHandler

class psychopy.data.QuestPlusHandler(nTrials, intensityVals, thresholdVals, slopeVals, lowerAsymptoteVals, lapseRateVals, responseVals=('Yes', 'No'), prior=None, startIntensity=None, psychometricFunc='weibull', stimScale='log10', stimSelectionMethod='minEntropy', stimSelectionOptions=None, paramEstimationMethod='mean', extraInfo=None, name='', label='', **kwargs)[source]

QUEST+ implementation. Currently only supports parameter estimation of a Weibull-shaped psychometric function.

The parameter estimates can be retrieved via the .paramEstimate attribute, which returns a dictionary whose keys correspond to the names of the estimated parameters (i.e., QuestPlusHandler.paramEstimate[‘threshold’] will provide the threshold estimate). Retrieval of the marginal posterior distributions works similarly: they can be accessed via the .posterior dictionary.

Parameters:
  • nTrials (int) – Number of trials to run.

  • intensityVals (collection of floats) – The complete set of possible stimulus levels. Note that the stimulus levels are not necessarily limited to intensities (as the name of this parameter implies), but they could also be contrasts, durations, weights, etc.

  • thresholdVals (float or collection of floats) – The complete set of possible threshold values.

  • slopeVals (float or collection of floats) – The complete set of possible slope values.

  • lowerAsymptoteVals (float or collection of floats) – The complete set of possible values of the lower asymptote. This corresponds to false-alarm rates in yes-no tasks, and to the guessing rate in n-AFC tasks. Therefore, when performing an n-AFC experiment, the collection should consist of a single value only (e.g., [0.5] for 2-AFC, [0.33] for 3-AFC, [0.25] for 4-AFC, etc.).

  • lapseRateVals (float or collection of floats) – The complete set of possible lapse rate values. The lapse rate defines the upper asymptote of the psychometric function, which will be at 1 - lapse rate.

  • responseVals (collection) – The complete set of possible response outcomes. Currently, only two outcomes are supported: the first element must correspond to a successful response / stimulus detection, and the second one to an unsuccessful or incorrect response. For example, in a yes-no task, one would use [‘Yes’, ‘No’], and in an n-AFC task, [‘Correct’, ‘Incorrect’]; or, alternatively, the less verbose [1, 0] in both cases.

  • prior (dict of floats) – The prior probabilities to assign to the parameter values. The dictionary keys correspond to the respective parameters: threshold, slope, lowerAsymptote, lapseRate.

  • startIntensity (float) – The very first intensity (or stimulus level) to present.

  • psychometricFunc ({'weibull'}) – The psychometric function to fit. Currently, only the Weibull function is supported.

  • stimScale ({'log10', 'dB', 'linear'}) – The scale on which the stimulus intensities (or stimulus levels) are provided. Currently supported are the decadic logarithm, log10; decibels, dB; and a linear scale, linear.

  • stimSelectionMethod ({'minEntropy', 'minNEntropy'}) – How to select the next stimulus. minEntropy will select the stimulus that will minimize the expected entropy. minNEntropy will randomly pick a stimulus from the set of stimuli that will produce the smallest, 2nd-smallest, …, N-smallest entropy. This can be used to ensure some variation in the stimulus selection (and subsequent presentation) procedure. The number N will then have to be specified via the stimSelectionOptions parameter.

  • stimSelectionOptions (dict) – This parameter further controls how to select the next stimulus in case stimSelectionMethod=minNEntropy. The dictionary supports two keys: N and maxConsecutiveReps. N defines the number of “best” stimuli (i.e., those which produce the smallest N expected entropies) from which to randomly select a stimulus for presentation in the next trial. maxConsecutiveReps defines how many times the exact same stimulus can be presented on consecutive trials. For example, to randomly pick a stimulus from those which will produce the 4 smallest expected entropies, and to allow the same stimulus to be presented on two consecutive trials max, use stimSelectionOptions=dict(N=4, maxConsecutiveReps=2). To achieve reproducible results, you may pass a seed to the random number generator via the randomSeed key.

  • paramEstimationMethod ({'mean', 'mode'}) – How to calculate the final parameter estimate. mean returns the mean of each parameter, weighted by their respective posterior probabilities. mode returns the parameters at the peak of the posterior distribution.

  • extraInfo (dict) – Additional information to store along the actual QUEST+ staircase data.

  • name (str) – The name of the QUEST+ staircase object. This will appear in the PsychoPy logs.

  • label (str) – Only used by MultiStairHandler, and otherwise ignored.

  • kwargs (dict) – Additional keyword arguments. These might be passed, for example, through a MultiStairHandler, and will be ignored. A warning will be emitted whenever additional keyword arguments have been passed.

Warns:

RuntimeWarning – If an unknown keyword argument was passed.

Notes

The QUEST+ algorithm was first described by [1].
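The stimSelectionMethod='minNEntropy' rule described above can be sketched in plain Python (a stand-in illustration, not the actual QUEST+ implementation; pickStimulus is a hypothetical name): pick randomly among the N stimuli with the smallest expected entropies, vetoing a stimulus that has already run maxConsecutiveReps times in a row.

```python
import random

def pickStimulus(entropies, N=4, maxConsecutiveReps=2, history=()):
    """entropies maps each candidate stimulus level to its expected entropy."""
    # the N candidates with the smallest expected entropy
    best = sorted(entropies, key=entropies.get)[:N]
    # veto a candidate already presented maxConsecutiveReps times in a row
    recent = list(history[-maxConsecutiveReps:])
    if len(recent) == maxConsecutiveReps and len(set(recent)) == 1:
        best = [s for s in best if s != recent[0]] or best
    return random.choice(best)

entropies = {0.1: 2.0, 0.2: 1.5, 0.3: 1.0, 0.4: 0.9, 0.5: 3.0}
# 0.4 has the lowest entropy but already ran twice in a row, so 0.3 is picked
print(pickStimulus(entropies, N=2, maxConsecutiveReps=2, history=(0.4, 0.4)))
```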

_intensityDec()

decrement the current intensity and reset counter

_intensityInc()

increment the current intensity and reset counter

_terminate()

Remove references to ourself in experiments and terminate the loop

addData(result, intensity=None)

Deprecated since 1.79.00: This function name was ambiguous. Please use one of these instead:

  • .addResponse(result, intensity)

  • .addOtherData(‘dataName’, value)

addOtherData(dataName, value)

Add additional data to the handler, to be tracked alongside the result data but not affecting the value of the staircase

addResponse(response, intensity=None)[source]

Add a 1 or 0 to signify a correct / detected or incorrect / missed trial.

This is essential to advance the staircase to a new intensity level!

Supplying an intensity value here indicates that you did not use the recommended intensity in your last trial and the staircase will replace its recorded value with the one you supplied here.

calculateNextIntensity()

Based on current intensity, counter of correct responses, and current direction.

getExp()

Return the ExperimentHandler that this handler is attached to, if any. Returns None if not attached

getOriginPathAndFile(originPath=None)

Attempts to determine the path of the script that created this data file and returns both the path to that script and its contents. Useful to store the entire experiment with the data.

If originPath is provided (e.g. from Builder) then this is used; otherwise the calling script is the originPath (fine from a standard python script).

property intensity

The intensity (level) of the current staircase

next()

Advances to next trial and returns it. Updates attributes: thisTrial, thisTrialN and thisIndex.

If the trials have ended, calling this method will raise a StopIteration error. This can be handled with code such as:

staircase = data.StairHandler(.......)
for eachTrial in staircase:  # automatically stops when done
    # do stuff

or:

staircase = data.StairHandler(.......)
while True:  # ie forever
    try:
        thisTrial = staircase.next()
    except StopIteration:  # we got a StopIteration error
        break  # break out of the forever loop
    # do stuff here for the trial
property paramEstimate

The estimated parameters of the psychometric function.

Returns:

A dictionary whose keys correspond to the names of the estimated parameters.

Return type:

dict of floats

property posterior

The marginal posterior distributions.

Returns:

A dictionary whose keys correspond to the names of the estimated parameters.

Return type:

dict of np.ndarrays

printAsText(stimOut=None, dataOut=('all_mean', 'all_std', 'all_raw'), delim='\t', matrixOnly=False)

Exactly like saveAsText() except that the output goes to the screen instead of a file

property prior

The marginal prior distributions.

Returns:

A dictionary whose keys correspond to the names of the parameters.

Return type:

dict of np.ndarrays

saveAsExcel(fileName, sheetName='data', matrixOnly=False, appendFile=True, fileCollisionMethod='rename')

Save a summary data file in Excel OpenXML format workbook (xlsx) for processing in most spreadsheet packages. This format is compatible with versions of Excel (2007 or greater) and with OpenOffice (>=3.0).

It has the advantage over the simpler text files (see TrialHandler.saveAsText() ) that data can be stored in multiple named sheets within the file. So you could have a single file named after your experiment and then have one worksheet for each participant. Or you could have one file for each participant and then multiple sheets for repeated sessions etc.

The file extension .xlsx will be added if not given already.

The file will contain a set of values specifying the staircase level (‘intensity’) at each reversal, a list of reversal indices (trial numbers), the raw staircase / intensity level on every trial and the corresponding responses of the participant on every trial.

Parameters:
fileName: string

the name of the file to create or append. Can include relative or absolute path.

sheetName: string

the name of the worksheet within the file

matrixOnly: True or False

If set to True then only the data itself will be output (no additional info)

appendFile: True or False

If False any existing file with this name will be overwritten. If True then a new worksheet will be appended. If a worksheet already exists with that name a number will be added to make it unique.

fileCollisionMethod: string

Collision method passed to handleFileCollision(). This is ignored if appendFile is True.

saveAsJson(fileName=None, encoding='utf-8-sig', fileCollisionMethod='rename')[source]

Serialize the object to the JSON format.

Parameters:
  • fileName (string, or None) – the name of the file to create or append. Can include a relative or absolute path. If None, will not write to a file, but return an in-memory JSON object.

  • encoding (string, optional) – The encoding to use when writing the file.

  • fileCollisionMethod (string) – Collision method passed to handleFileCollision(). Can be either of ‘rename’, ‘overwrite’, or ‘fail’.

Notes

Currently, a copy of the object is created, and the copy’s .origin attribute is set to an empty string before serializing because loading the created JSON file would sometimes fail otherwise.

saveAsPickle(fileName, fileCollisionMethod='rename')

Basically just saves a copy of self (with data) to a pickle file.

This can be reloaded if necessary and further analyses carried out.

Parameters:

fileCollisionMethod: Collision method passed to handleFileCollision()

saveAsText(fileName, delim=None, matrixOnly=False, fileCollisionMethod='rename', encoding='utf-8-sig')

Write a text file with the data

Parameters:
fileName: a string

The name of the file, including path if needed. The extension .tsv will be added if not included.

delim: a string

the delimiter to be used (e.g. ‘\t’ for tab-delimited, ‘,’ for csv files)

matrixOnly: True/False

If True, prevents the output of the extraInfo provided at initialisation.

fileCollisionMethod:

Collision method passed to handleFileCollision()

encoding:

The encoding to use when saving the file. Defaults to utf-8-sig.

setExp(exp)

Sets the ExperimentHandler that this handler is attached to

Do NOT attempt to set the experiment using:

trials._exp = myExperiment

because it needs to be performed using the weakref module.

property startIntensity

MultiStairHandler

class psychopy.data.MultiStairHandler(stairType='simple', method='random', conditions=None, nTrials=50, randomSeed=None, originPath=None, name='', autoLog=True)[source]

A Handler to allow easy interleaved staircase procedures (simple or QUEST).

Parameters for the staircases, as used by the relevant StairHandler or QuestHandler (e.g. the startVal, minVal, maxVal…) should be specified in the conditions list and may vary between each staircase. In particular, the conditions must include a startVal (because this is a required argument to the above handlers), a label to tag the staircase and a startValSd (only for QUEST staircases). Any parameters not specified in the conditions file will revert to the default for that individual handler.

If you need to customize the behaviour further you may want to look at the recipe on Coder - interleave staircases.

Params:
stairType: ‘simple’, ‘quest’, or ‘questplus’

Use a StairHandler, a QuestHandler, or a QuestPlusHandler.

method: ‘random’, ‘fullRandom’, or ‘sequential’

If random, stairs are shuffled in each repeat but not randomized more than that (so you can’t have 3 repeats of the same staircase in a row unless it’s the only one still running). If fullRandom, the staircase order is “fully” randomized, meaning that, theoretically, a large number of subsequent trials could invoke the same staircase repeatedly. If sequential, don’t perform any randomization.

conditions: a list of dictionaries specifying conditions

Can be used to control parameters for the different staircases. Can be imported from an Excel file using psychopy.data.importConditions. MUST include the keys ‘startVal’ and ‘label’ (and ‘startValSd’ for QUEST staircases only). The ‘label’ will be used in data file saving so should be unique. See Example Usage below.

nTrials=50

Minimum trials to run (but may take more if the staircase hasn’t also met its minimal reversals; see StairHandler).

randomSeed: int or None

The seed with which to initialize the random number generator (RNG). If None (default), do not initialize the RNG with a specific value.

Example usage:

conditions=[
    {'label':'low', 'startVal': 0.1, 'ori':45},
    {'label':'high','startVal': 0.8, 'ori':45},
    {'label':'low', 'startVal': 0.1, 'ori':90},
    {'label':'high','startVal': 0.8, 'ori':90},
    ]
stairs = data.MultiStairHandler(conditions=conditions, nTrials=50)

for thisIntensity, thisCondition in stairs:
    thisOri = thisCondition['ori']

    # do something with thisIntensity and thisOri

    stairs.addResponse(correctIncorrect)  # this is ESSENTIAL

# save data as multiple formats
stairs.saveAsExcel(fileName)  # easy to browse
stairs.saveAsPickle(fileName)  # contains more info
Raises:

ValueError – If an unknown randomization option was passed via the method keyword argument.
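The three randomization options accepted by the method keyword can be sketched in plain Python (a stand-in illustration of the ordering logic, not the handler’s actual implementation; ordering is a hypothetical name):

```python
import random

def ordering(labels, nPasses, method="random", seed=None):
    """Order in which interleaved staircases run trials (illustration only)."""
    rng = random.Random(seed)
    if method == "sequential":
        return labels * nPasses          # fixed order, no randomization
    if method == "random":
        order = []
        for _ in range(nPasses):         # shuffle within each pass only:
            block = labels[:]            # every staircase runs once per pass
            rng.shuffle(block)
            order.extend(block)
        return order
    if method == "fullRandom":
        order = labels * nPasses         # one flat shuffle: the same staircase
        rng.shuffle(order)               # may repeat on consecutive trials
        return order
    raise ValueError("unknown randomization option: %r" % method)

seq = ordering(["low", "high"], nPasses=3, method="random", seed=0)
# each pass of 2 trials contains each staircase exactly once
print(all(sorted(seq[i:i + 2]) == ["high", "low"] for i in range(0, 6, 2)))
```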

_startNewPass()[source]

Create a new iteration of the running staircases for this pass.

This is not normally needed by the user - it gets called at __init__ and every time that next() runs out of trials for this pass.

_terminate()

Remove references to ourself in experiments and terminate the loop

abortCurrentTrial(action='random')[source]

Abort the current trial (staircase).

Calling this during an experiment aborts the staircase used for the current trial. The current staircase will be reshuffled into the available staircases depending on the action parameter.

Parameters:

action (str) – Action to take with the aborted trial. Can be either of ‘random’, or ‘append’. The default action is ‘random’.

Notes

  • When using action=’random’, the RNG state for the trial handler is not used.

addData(result, intensity=None)[source]

Deprecated 1.79.00: It was ambiguous whether you were adding the response (0 or 1) or some other data concerning the trial so there is now a pair of explicit methods:

  • addResponse(corr, intensity)  # some data that alters the next trial value

  • addOtherData('RT', reactionTime)  # some other data that won't control the staircase

addOtherData(name, value)[source]

Add some data about the current trial that will not be used to control the staircase(s) such as reaction time data

addResponse(result, intensity=None)[source]

Add a 1 or 0 to signify a correct / detected or incorrect / missed trial

This is essential to advance the staircase to a new intensity level!

getExp()

Return the ExperimentHandler that this handler is attached to, if any. Returns None if not attached

getOriginPathAndFile(originPath=None)

Attempts to determine the path of the script that created this data file and returns both the path to that script and its contents. Useful to store the entire experiment with the data.

If originPath is provided (e.g. from Builder) then this is used otherwise the calling script is the originPath (fine from a standard python script).

property intensity

The intensity (level) of the current staircase

next()

Advances to next trial and returns it.

This can be handled with code such as:

staircase = data.MultiStairHandler(.......)
for eachTrial in staircase:  # automatically stops when done
    # do stuff here for the trial

or:

staircase = data.MultiStairHandler(.......)
while True:  # ie forever
    try:
        thisTrial = staircase.next()
    except StopIteration:  # we got a StopIteration error
        break  # break out of the forever loop
    # do stuff here for the trial
printAsText(delim='\t', matrixOnly=False)[source]

Write the data to the standard output stream

Parameters:
delim: a string

the delimiter to be used (e.g. '\t' for tab-delimited, ',' for csv files)

matrixOnly: True/False

If True, prevents the output of the extraInfo provided at initialisation.

saveAsExcel(fileName, matrixOnly=False, appendFile=False, fileCollisionMethod='rename')[source]

Save a summary data file in Excel OpenXML format workbook (xlsx) for processing in most spreadsheet packages. This format is compatible with versions of Excel (2007 or greater) and with OpenOffice (>=3.0).

It has the advantage over the simpler text files (see TrialHandler.saveAsText() ) that the data from each staircase will be saved in the same file, with the sheet name coming from the ‘label’ given in the dictionary of conditions during initialisation of the Handler.

The file extension .xlsx will be added if not given already.

The file will contain a set of values specifying the staircase level (‘intensity’) at each reversal, a list of reversal indices (trial numbers), the raw staircase/intensity level on every trial and the corresponding responses of the participant on every trial.

Parameters:
fileName: string

the name of the file to create or append. Can include relative or absolute path

matrixOnly: True or False

If set to True then only the data itself will be output (no additional info)

appendFile: True or False

If False any existing file with this name will be overwritten. If True then a new worksheet will be appended. If a worksheet already exists with that name a number will be added to make it unique.

fileCollisionMethod: string

Collision method passed to handleFileCollision(). This is ignored if appendFile is True.

saveAsJson(fileName=None, encoding='utf-8-sig', fileCollisionMethod='rename')

Serialize the object to the JSON format.

Parameters:
  • fileName (string, or None) – the name of the file to create or append. Can include a relative or absolute path. If None, will not write to a file, but return an in-memory JSON object.

  • encoding (string, optional) – The encoding to use when writing the file.

  • fileCollisionMethod (string) – Collision method passed to handleFileCollision(). Can be either of ‘rename’, ‘overwrite’, or ‘fail’.

Notes

Currently, a copy of the object is created, and the copy’s .origin attribute is set to an empty string before serializing because loading the created JSON file would sometimes fail otherwise.

saveAsPickle(fileName, fileCollisionMethod='rename')[source]

Saves a copy of self (with data) to a pickle file.

This can be reloaded later and further analyses carried out.

Parameters:

fileCollisionMethod: Collision method passed to handleFileCollision()

saveAsText(fileName, delim=None, matrixOnly=False, fileCollisionMethod='rename', encoding='utf-8-sig')[source]

Write out text files with the data.

For MultiStairHandler this will output one file for each staircase that was run, with _label added to the fileName that you specify above (label comes from the condition dictionary you specified when you created the Handler).

Parameters:
fileName: a string

The name of the file, including path if needed. The extension .tsv will be added if not included.

delim: a string

the delimiter to be used (e.g. '\t' for tab-delimited, ',' for csv files)

matrixOnly: True/False

If True, prevents the output of the extraInfo provided at initialisation.

fileCollisionMethod:

Collision method passed to handleFileCollision()

encoding:

The encoding to use when saving the file. Defaults to utf-8-sig.

setExp(exp)

Sets the ExperimentHandler that this handler is attached to

Do NOT attempt to set the experiment using:

trials._exp = myExperiment

because it needs to be performed using the weakref module.

FitWeibull

class psychopy.data.FitWeibull(xx, yy, sems=1.0, guess=None, display=1, expectedMin=0.5, optimize_kws=None)[source]

Fit a Weibull function (either 2AFC or YN) of the form:

y = chance + (1.0-chance)*(1-exp( -(xx/alpha)**(beta) ))

and with inverse:

x = alpha * (-log((1.0-y)/(1-chance)))**(1.0/beta)

After fitting the function you can evaluate an array of x-values with fit.eval(x), retrieve the inverse of the function with fit.inverse(y) or retrieve the parameters from fit.params (a list with [alpha, beta])
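The forward and inverse forms above can be checked against each other. A minimal stdlib sketch (the alpha, beta, and chance values are arbitrary illustrations, and these helper names are not part of the PsychoPy API):

```python
import math

def weibull(x, alpha, beta, chance=0.5):
    # forward form: y = chance + (1-chance)*(1 - exp(-(x/alpha)**beta))
    return chance + (1.0 - chance) * (1 - math.exp(-(x / alpha) ** beta))

def weibull_inverse(y, alpha, beta, chance=0.5):
    # inverse form: x = alpha * (-log((1-y)/(1-chance)))**(1/beta)
    return alpha * (-math.log((1.0 - y) / (1 - chance))) ** (1.0 / beta)

# round-trip check: inverse(forward(x)) recovers x
x = 0.3
y = weibull(x, alpha=0.5, beta=3.0)
assert abs(weibull_inverse(y, alpha=0.5, beta=3.0) - x) < 1e-9
```

With the fitted object itself, the equivalent calls would be fit.eval(x) and fit.inverse(y), with alpha and beta read from fit.params.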

_doFit()

The Fit class that derives this needs to specify its _evalFunction

eval(xx, params=None)

Evaluate xx for the current parameters of the model, or for arbitrary params if these are given.

inverse(yy, params=None)

Evaluate yy for the current parameters of the model, or for arbitrary params if these are given.

FitLogistic

class psychopy.data.FitLogistic(xx, yy, sems=1.0, guess=None, display=1, expectedMin=0.5, optimize_kws=None)[source]

Fit a Logistic function (either 2AFC or YN) of the form:

y = chance + (1-chance)/(1+exp((PSE-xx)*JND))

and with inverse:

x = PSE - log((1-chance)/(yy-chance) - 1)/JND

After fitting the function you can evaluate an array of x-values with fit.eval(x), retrieve the inverse of the function with fit.inverse(y) or retrieve the parameters from fit.params (a list with [PSE, JND])
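As with the Weibull fit, the logistic forward and inverse forms are mutual inverses, which a short stdlib sketch can verify (PSE, JND, and chance values here are arbitrary illustrations, not PsychoPy defaults):

```python
import math

def logistic(x, PSE, JND, chance=0.5):
    # forward form: y = chance + (1-chance)/(1 + exp((PSE-x)*JND))
    return chance + (1 - chance) / (1 + math.exp((PSE - x) * JND))

def logistic_inverse(y, PSE, JND, chance=0.5):
    # inverse form: x = PSE - log((1-chance)/(y-chance) - 1)/JND
    return PSE - math.log((1 - chance) / (y - chance) - 1) / JND

# round-trip check: inverse(forward(x)) recovers x
x = 0.7
y = logistic(x, PSE=0.5, JND=4.0)
assert abs(logistic_inverse(y, PSE=0.5, JND=4.0) - x) < 1e-9
```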

_doFit()

The Fit class that derives this needs to specify its _evalFunction

eval(xx, params=None)

Evaluate xx for the current parameters of the model, or for arbitrary params if these are given.

inverse(yy, params=None)

Evaluate yy for the current parameters of the model, or for arbitrary params if these are given.

FitNakaRushton

class psychopy.data.FitNakaRushton(xx, yy, sems=1.0, guess=None, display=1, expectedMin=0.5, optimize_kws=None)[source]

Fit a Naka-Rushton function of the form:

yy = rMin + (rMax-rMin) * xx**n/(xx**n+c50**n)

After fitting the function you can evaluate an array of x-values with fit.eval(x), retrieve the inverse of the function with fit.inverse(y) or retrieve the parameters from fit.params (a list with [rMin, rMax, c50, n])

Note that this differs from most of the other functions in not using a value for the expected minimum. Rather, it fits this as one of the parameters of the model.

_doFit()

The Fit class that derives this needs to specify its _evalFunction

eval(xx, params=None)

Evaluate xx for the current parameters of the model, or for arbitrary params if these are given.

inverse(yy, params=None)

Evaluate yy for the current parameters of the model, or for arbitrary params if these are given.

FitCumNormal

class psychopy.data.FitCumNormal(xx, yy, sems=1.0, guess=None, display=1, expectedMin=0.5, optimize_kws=None)[source]

Fit a Cumulative Normal function (aka error function or erf) of the form:

y = chance + (1-chance)*((special.erf((xx-xShift)/(sqrt(2)*sd))+1)*0.5)

and with inverse:

x = xShift+sqrt(2)*sd*(erfinv(((yy-chance)/(1-chance)-.5)*2))

After fitting the function you can evaluate an array of x-values with fit.eval(x), retrieve the inverse of the function with fit.inverse(y) or retrieve the parameters from fit.params (a list with [centre, sd] for the Gaussian distribution forming the cumulative)

NB: Prior to version 1.74 the parameters had a different meaning, relating to the xShift and slope of the function (similar to 1/sd). Although that is more in keeping with the parameters for the Weibull fit, for instance, it is less in keeping with standard expectations of a normal (Gaussian) distribution, so in version 1.74.00 the parameters became the [centre, sd] of the normal distribution.

_doFit()

The Fit class that derives this needs to specify its _evalFunction

eval(xx, params=None)

Evaluate xx for the current parameters of the model, or for arbitrary params if these are given.

inverse(yy, params=None)

Evaluate yy for the current parameters of the model, or for arbitrary params if these are given.

importConditions()

psychopy.data.importConditions(fileName, returnFieldNames=False, selection='')[source]

Imports a list of conditions from an .xlsx, .csv, or .pkl file

The output is suitable as an input to TrialHandler trialList or to MultiStairHandler as a conditions list.

If fileName ends with:

  • .csv: import as a comma-separated-value file

    (header + row x col)

  • .xlsx: import as Excel 2007 (xlsx) files.

    No support for older (.xls) is planned.

  • .pkl: import from a pickle file as list of lists

    (header + row x col)

The file should contain one row per type of trial needed and one column for each parameter that defines the trial type. The first row should give parameter names, which should:

  • be unique

  • begin with a letter (upper or lower case)

  • contain no spaces or other punctuation (underscores are permitted)

selection is used to select a subset of condition indices to be used. It can be a list/array of indices, a python slice object or a string to be parsed as either option. e.g.:

  • “1,2,4” or [1,2,4] or (1,2,4) are the same

  • “2:5” # 2, 3, 4 (doesn’t include last whole value)

  • “-10:2:” # tenth from last to the last in steps of 2

  • slice(-10, 2, None) # the same as above

  • random(5) * 8 # five random vals 0-7
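For a .csv file, the result is essentially each data row read into a dict keyed by the header names. A minimal stdlib sketch of that shape (the conditions file content is illustrative, and unlike importConditions this does not convert numeric-looking strings to numbers):

```python
import csv
import io

# illustrative conditions file: header row, then one row per trial type
csvText = """label,startVal,ori
low,0.1,45
high,0.8,45
low,0.1,90
high,0.8,90
"""

# roughly the list-of-dicts that importConditions() returns for a .csv
# (values stay as strings here; importConditions also converts numbers)
conditions = list(csv.DictReader(io.StringIO(csvText)))

# a selection string such as "2:5" corresponds to slice(2, 5):
# indices 2, 3, 4, with the end value not included
selected = conditions[slice(2, 5)]
```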

functionFromStaircase()

psychopy.data.functionFromStaircase(intensities, responses, bins=10)[source]

Create a psychometric function by binning data from a staircase procedure. Although the default is 10 bins, Jon now always uses ‘unique’ bins (fewer bins look pretty but lead to errors in slope estimation)

usage:

intensity, meanCorrect, n = functionFromStaircase(intensities,
                                                  responses, bins)
where:
intensities

are a list (or array) of intensities to be binned

responses

are a list of 0,1 each corresponding to the equivalent intensity value

bins

can be an integer (giving that number of bins) or ‘unique’ (each bin is made from all data for exactly one intensity value)

intensity

a numpy array of intensity values (where each is the center of an intensity bin)

meanCorrect

a numpy array of mean % correct in each bin

n

a numpy array of number of responses contributing to each mean
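The bins='unique' case can be sketched in plain Python: group the responses by intensity value, then take the mean and count per group. This is an illustrative sketch of the idea (returning plain lists rather than numpy arrays), not the PsychoPy implementation:

```python
from collections import defaultdict

def functionFromStaircaseUnique(intensities, responses):
    """Sketch of bins='unique': one bin per distinct intensity value."""
    byLevel = defaultdict(list)
    for intens, resp in zip(intensities, responses):
        byLevel[intens].append(resp)
    levels = sorted(byLevel)
    # mean % correct in each bin, and number of responses per bin
    meanCorrect = [100.0 * sum(byLevel[lv]) / len(byLevel[lv]) for lv in levels]
    n = [len(byLevel[lv]) for lv in levels]
    return levels, meanCorrect, n

# e.g. two trials at 0.1 (one correct) and three at 0.2 (two correct)
levels, meanCorrect, n = functionFromStaircaseUnique(
    [0.1, 0.1, 0.2, 0.2, 0.2], [0, 1, 1, 1, 0])
```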

bootStraps()

psychopy.data.bootStraps(dat, n=1)[source]

Create a list of n bootstrapped resamples of the data

SLOW IMPLEMENTATION (Python for-loop)

Usage:

out = bootStraps(dat, n=1)

Where:
dat

an NxM or 1xN array (each row is a different condition, each column is a different trial)

n

number of bootstrapped resamples to create

out
  • dim[0]=conditions

  • dim[1]=trials

  • dim[2]=resamples
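The resampling idea, drawing each condition's trials with replacement, can be sketched with the stdlib as follows. This is an illustrative sketch using nested lists indexed [condition][trial][resample] (matching the dims above), not the PsychoPy implementation, and the seed parameter is added here for reproducibility:

```python
import random

def bootStrapsSketch(dat, n=1, seed=None):
    """Sketch of bootstrapping: dat is a list of conditions, each a list
    of trials; returns out[condition][trial][resample]."""
    rng = random.Random(seed)
    out = []
    for row in dat:
        nTrials = len(row)
        # each resample draws nTrials values from this condition,
        # with replacement
        resamples = [[row[rng.randrange(nTrials)] for _ in range(nTrials)]
                     for _ in range(n)]
        # transpose so the trial index comes before the resample index
        out.append([[resamples[r][t] for r in range(n)]
                    for t in range(nTrials)])
    return out

# two conditions, three trials each, five resamples
out = bootStrapsSketch([[1, 2, 3], [4, 5, 6]], n=5, seed=42)
```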
