psychopy.tools.mathtools.concatenate

psychopy.tools.mathtools.concatenate(matrices, out=None, dtype=None)[source]

Concatenate matrix transformations.

Chain-multiply matrices describing transform operations into a single matrix product that, when applied, transforms points and vectors with each operation in the order they were specified.
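
For example, a minimal sketch of the ordering (using matrix helpers from this module, shown in full in the examples below): the returned product is equivalent to right-to-left matrix multiplication, so the first matrix listed is the first operation applied.

import numpy as np
from psychopy.tools.mathtools import concatenate, scaleMatrix, translationMatrix

A = scaleMatrix([2., 2., 2.])
B = translationMatrix([0., 0., -5.])

# points are scaled first, then translated, so the product equals B @ A
assert np.allclose(concatenate([A, B]), B @ A)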

Parameters:
  • matrices (list or tuple) – List of matrices to concatenate. All matrices must have the same dimensions, usually 4x4 or 3x3.

  • out (ndarray, optional) – Optional output array. Must have the same shape and dtype as the expected output if out was not specified.

  • dtype (dtype or str, optional) – Data type for computations, either ‘float32’ or ‘float64’. If out is specified, the data type of out is used and this argument is ignored. If out is not provided, ‘float64’ is used by default. See the sketch after this list.
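
As a brief illustration of out and dtype (a sketch using matrix helpers from this module):

import numpy as np
from psychopy.tools.mathtools import concatenate, rotationMatrix, translationMatrix

R = rotationMatrix(45., [0., 1., 0.])
T = translationMatrix([0., 0., -5.])

# the dtype of `out` ('float32' here) is used for the computation
prod = np.zeros((4, 4), dtype='float32')
concatenate([R, T], out=prod)

# equivalently, request single precision without preallocating
prod = concatenate([R, T], dtype='float32')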

Returns:

Matrix product.

Return type:

ndarray

See also

  • multMatrix : Chain multiplication of matrices.

Notes

  • This function should only be used for combining transformation matrices. Use multMatrix for general matrix chain multiplication.

Examples

Create an SRT (scale, rotate, and translate) matrix to convert model-space coordinates to world-space:

import numpy as np
from psychopy.tools.mathtools import (
    scaleMatrix, rotationMatrix, translationMatrix, concatenate, applyMatrix)

S = scaleMatrix([2.0, 2.0, 2.0])  # scale model 2x
R = rotationMatrix(-90., [0., 0., -1.])  # rotate -90 degrees about the -Z axis
T = translationMatrix([0., 0., -5.])  # translate point 5 units away

# the product, when applied to points, will scale, rotate, and translate
# them, in that order
SRT = concatenate([S, R, T])

# transform a point in model-space coordinates to world-space
pointModel = np.array([0., 1., 0., 1.])  # homogeneous point (w = 1)
pointWorld = np.matmul(SRT, pointModel)  # point in world coordinates
# ... or ...
pointWorld = applyMatrix(SRT, pointModel)
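
As a quick sanity check (values worked out by hand, assuming the right-handed rotation convention used by rotationMatrix): scaling moves the point to (0, 2, 0), the rotation to (-2, 0, 0), and the translation to (-2, 0, -5).

assert np.allclose(pointWorld[:3], [-2., 0., -5.])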

Create a model-view matrix from a world-space pose represented by an orientation (quaternion) and position (vector). The resulting matrix will transform model-space coordinates to eye-space:

from psychopy.tools.mathtools import (
    quatFromAxisAngle, quatToMatrix, translationMatrix, concatenate)
from psychopy.tools.viewtools import lookAt

# stimulus pose as a quaternion and position vector
stimOri = quatFromAxisAngle([0., 0., -1.], -45.0)
stimPos = [0., 1.5, -5.]

# create the model matrix
R = quatToMatrix(stimOri)
T = translationMatrix(stimPos)
M = concatenate([R, T])  # model matrix

# create a view matrix; the eye pose could equivalently be given as a
# position and orientation. Note that lookAt expects a point to fixate,
# not a forward direction
eyePos = [0., 1.5, 0.]
centerPos = [0., 1.5, -1.]  # fixate a point 1 unit straight ahead
eyeUp = [0., 1., 0.]
V = lookAt(eyePos, centerPos, eyeUp)

# modelview matrix
MV = concatenate([M, V])
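
As a quick check (a sketch; the expected value follows from the poses above), the stimulus origin should land 5 units straight ahead of the eye in eye-space:

pointModel = np.array([0., 0., 0., 1.])  # stimulus origin
pointEye = np.matmul(MV, pointModel)  # expect approximately [0., 0., -5., 1.]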

You can put the created matrix in the OpenGL matrix stack as shown below. Note that the matrix must have a 32-bit floating-point data type and needs to be loaded transposed since OpenGL takes matrices in column-major order:

GL.glMatrixMode(GL.GL_MODELVIEW)

# using pyglet (assumes `import ctypes` and `import pyglet.gl as GL`)
MV = np.asarray(MV, dtype='float32')  # must be 32-bit float!
ptrMV = MV.ctypes.data_as(ctypes.POINTER(ctypes.c_float))
GL.glLoadTransposeMatrixf(ptrMV)

# using PyOpenGL (assumes `import OpenGL.GL as GL`), which accepts
# ndarrays directly
MV = np.asarray(MV, dtype='float32')
GL.glLoadTransposeMatrixf(MV)

Furthermore, you can convert a point from model-space to homogeneous clip-space by concatenating the projection, view, and model matrices:

# compute a projection matrix; these functions are from
# psychopy.tools.viewtools
from psychopy.tools.viewtools import (
    computeFrustum, perspectiveProjectionMatrix)

screenWidth = 0.52  # physical width of the screen
screenAspect = 16. / 9.  # display aspect ratio (width / height)
scrDistance = 0.55  # viewing distance, same units as screenWidth
frustum = computeFrustum(screenWidth, screenAspect, scrDistance)
P = perspectiveProjectionMatrix(*frustum)

# multiply model-space points by MVP to convert them to clip-space
MVP = concatenate([M, V, P])
pointModel = np.array([0., 1., 0., 1.])
pointClipSpace = np.matmul(MVP, pointModel)
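
To take the point the rest of the way to normalized device coordinates (a standard perspective divide, not specific to PsychoPy), divide by the homogeneous w component:

# perspective divide: clip-space -> normalized device coordinates (NDC)
pointNDC = pointClipSpace[:3] / pointClipSpace[3]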
