Reach-dependent reorientation of rotational dynamics in motor cortex

Subjects, behavior and data pre-processing

All data used here have been analyzed in previous studies. Animal protocols were approved by the Stanford University Animal Care and Use Committee. Two adult male rhesus macaque monkeys (Macaca mulatta), J and N, performed the “maze” center-out delayed-reaching task for juice reward (ref. 19). Briefly, the monkey initially fixated a central point on a vertical screen with the eye and a cursor floating slightly above the fingertips, then a slightly jittering target was shown. After a brief delay (0–1000 ms), a Go cue was presented (cessation of target jitter, filling in of target, and disappearance of the fixation point), and the monkey was required to make a fast, accurate reach to the target. On some trials, the monkey was required to avoid virtual barriers presented at the same time as the target, eliciting curved reaches and producing 72 different reaching “conditions” (36 straight, 36 curved). For both monkeys, data were collected using two 96-electrode “Utah” arrays (Blackrock Microsystems, Salt Lake City, UT), one implanted in PMd and one in M1. Both single-units and stable multi-units were included for analysis. Spiking activity was binned at 1 ms, smoothed with a 20 ms Gaussian kernel, trial-averaged, and down-sampled to 10 ms resolution. Except where noted, all analyses were performed on each dataset (monkey and array) independently.

Fitting dynamics to single conditions

Condition-specific dynamics were examined by fitting, for each condition independently, a discrete-time linear dynamical system (LDS) to population activity from 150 ms prior to movement onset to 850 ms after movement onset. If an eigenvalue of a fit LDS had a half-life >10 s, it was reduced to 10 s. We fit the LDS using reduced-rank regression to find the optimal low-rank LDS that explained maximum variance in that condition’s neural activity. We determined the dimensionality of the LDS via cross-validation (see below; ref. 64). To quantify the fit of linear dynamics, for each condition we simulated the dynamics forward from the population state 150 ms prior to movement onset, then quantified the population variance explained in the observed neural activity for that condition. To perform the cross-validation mentioned above, for each condition independently we assigned trials to one of two partitions, then smoothed and trial-averaged each partition independently to produce independent estimates of firing rates. We denote these estimates X and Y. We estimated the singular vectors of population activity from X. We then, for different dimensionalities k, projected X onto its top k singular vectors and quantified the variance explained in Y. We repeated this procedure 100 times per condition, and selected the value of k that maximized the average variance explained across folds and conditions.

Controlling for the expected similarity of conditions’ eigenvalues

There are two reasons that eigenvalues for pairs of conditions may differ: true differences and estimation noise. To quantify the expected dissimilarity due to estimation noise, we simulated 2 sets of 20 trials from each condition’s firing rate via a Poisson process, then smoothed and trial-averaged each partition to estimate firing rates. We then fit linear dynamics independently to each partition. Finally, we quantified the mean distance between the eigenvalues of the two fits to the same condition’s firing rates.
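As a rough illustration of the per-condition dynamics fit and the eigenvalue-distance comparison, the sketch below fits a one-step linear dynamics matrix by least squares in the top principal components (a simplification of the reduced-rank regression used here) and matches eigenvalues greedily; all variable names and the matching scheme are assumptions rather than the exact implementation.

```python
import numpy as np

def fit_lds_eigenvalues(X, rank=None):
    """Fit x(t+1) = M x(t) to one condition by least squares and return the eigenvalues of M.

    X    : (n_neurons, n_timebins) smoothed, trial-averaged firing rates.
    rank : if given, fit in the top `rank` principal components, a simplification of
           the reduced-rank regression described in the text.
    """
    if rank is not None:
        U, _, _ = np.linalg.svd(X - X.mean(axis=1, keepdims=True), full_matrices=False)
        X = U[:, :rank].T @ X
    M = X[:, 1:] @ np.linalg.pinv(X[:, :-1])   # one-step least-squares dynamics matrix
    return np.linalg.eigvals(M)

def mean_eigenvalue_distance(eigs_a, eigs_b):
    """Greedily match eigenvalues across two fits and average the complex-plane distances."""
    remaining = list(eigs_b)
    dists = []
    for ev in eigs_a:
        j = int(np.argmin([abs(ev - other) for other in remaining]))
        dists.append(abs(ev - remaining.pop(j)))
    return float(np.mean(dists))

# X1, X2: independent (neurons x time) rate estimates of the same condition
# dist = mean_eigenvalue_distance(fit_lds_eigenvalues(X1, rank=8),
#                                 fit_lds_eigenvalues(X2, rank=8))
```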
This provides an estimate of the eigenvalue distance attributable to estimation noise under the Poisson assumption. We repeated this procedure 500 times per condition to estimate a distribution of the mean distance between eigenvalues. We then compared this null distribution to the true distribution of eigenvalue distances between pairs of conditions. We quantified differences between the two distributions using the Receiver Operating Characteristic Area Under the Curve (ROC-AUC), which measures their discriminability. As an additional control that did not assume Poisson firing-rate statistics, we repeated the above control, but generated separate estimates of each condition’s activity by randomly partitioning trials in half. We then identically fit linear dynamics to each partition, quantified the eigenvalue distance, and quantified the ROC-AUC between the generated null and empirical distributions.

Factorizing neural activity using temporal basis functions

To identify rotations conserved in motor cortex activity across conditions, without assuming preservation of rotational planes across conditions, we exploited the convenient fact that any diagonalizable LDS:

$$\mathbf{x}(t+1)=\mathbf{M}\mathbf{x}(t)$$
(1)
can be re-written on a single condition (defined with an initial state) as:

$$\mathbf{x}(t)=\mathbf{L}\mathbf{b}(t)$$
(2)
expressed as the product of a loading matrix L and a set of temporal basis functions b (see Supplement for derivation). Briefly, the loading matrix captures information about the eigenvectors and initial state of the LDS, while the temporal basis functions describe the temporal patterns produced by the eigenvalues. Importantly, two LDSs with the same eigenvalues will have the same temporal basis functions, even if their eigenvectors differ. Assuming neural activity on each condition has the same eigenvalues, but different eigenvectors, these temporal basis functions will be shared by neural activity across conditions, with (potentially) different loading matrices. Neural activity on each condition is therefore a (potentially) different linear transformation of the same temporal basis functions, so the temporal basis functions can be recovered via a Singular Value Decomposition of the neural activity across conditions, concatenated into a (neurons × conditions)-by-time matrix. The top k singular vectors over the time index are assigned as the temporal basis functions, while the top k singular vectors over neurons and conditions (weighted by the singular values) can be partitioned into condition-specific loading matrices. Activity on condition c, a neurons-by-time matrix X(c), is then expressed as:

$$\mathbf{X}^{(c)}\approx \mathbf{L}^{(c)}\mathbf{B}$$
(3)
where L(c) is a neurons-by-k loading matrix for condition c, and B is a k-by-time matrix where each row is a temporal basis function. This procedure is related to the Higher-Order Singular Value Decomposition. To quantify the approximation provided by this factorization, for each condition we multiplied that condition’s loading matrix by the shared temporal basis functions and computed the population variance explained in that condition’s neural activity.

De-mixing temporal basis functions

Like many matrix factorizations, the factorization in Eq. (3) is only unique up to an invertible linear transformation. To allow for later analyses that considered the different frequencies of rotations, we aimed to “de-mix” the temporal basis functions into frequencies as pure as possible, corresponding to functions attributable to single (pairs of) eigenvalue(s). To this end, we fit an LDS to the temporal basis functions from 150 ms prior to movement onset to 850 ms after movement onset and projected the temporal basis functions onto the eigenvectors of this LDS:

$$\mathbf{B}:=\mathbf{E}^{-1}\mathbf{B}$$
(4)
This operation does not affect the quality of the factorization, as we correspondingly multiplied the loading matrix of each condition by the eigenvectors:

$$\mathbf{L}^{(c)}:=\mathbf{L}^{(c)}\mathbf{E}$$
(5)
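The following sketch illustrates the factorization in Eq. (3) and the de-mixing in Eqs. (4) and (5); it is a minimal illustration rather than the exact implementation. It assumes activity is arranged as a conditions × neurons × time array, and it reduces the LDS fit to the basis functions to a one-step least-squares fit.

```python
import numpy as np

def factorize(X, k):
    """Eq. (3): shared temporal basis functions B and per-condition loading matrices L.

    X : (n_conditions, n_neurons, n_time) trial-averaged activity (an assumed layout).
    """
    n_cond, n_neur, n_time = X.shape
    stacked = X.reshape(n_cond * n_neur, n_time)          # (neurons x conditions)-by-time
    U, s, Vt = np.linalg.svd(stacked, full_matrices=False)
    B = Vt[:k]                                             # k x time temporal basis functions
    L = (U[:, :k] * s[:k]).reshape(n_cond, n_neur, k)      # condition-specific loadings
    return L, B

def demix(L, B):
    """Eqs. (4)-(5): rotate into the eigenbasis of an LDS fit to the basis functions."""
    # Simplified one-step least-squares fit of B(t+1) = M B(t); the paper fits the LDS
    # on a specific peri-movement window.
    M = B[:, 1:] @ np.linalg.pinv(B[:, :-1])
    evals, E = np.linalg.eig(M)
    B_demixed = np.linalg.inv(E) @ B    # Eq. (4); complex-conjugate pairs act as sine/cosine pairs
    L_demixed = L @ E                   # Eq. (5); the reconstruction L @ B is unchanged
    return L_demixed, B_demixed, evals
```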
The eigenvalues of the fit LDS can then be matched one-to-one with temporal basis functions.

Cross-validating the number of temporal basis functions

To cross-validate the number of temporal basis functions to use, we assigned trials to one of two partitions, then smoothed and trial-averaged each partition to produce independent estimates of each condition’s firing rates. We estimated loading matrices and temporal basis functions for one partition, varying the number of temporal basis functions used. We then reconstructed, for each condition, the neural activity as the product of that condition’s loading matrix and the temporal basis functions. We repeated this procedure over 100 folds, and selected the number of temporal basis functions that maximized the average variance explained in the held-out partition across conditions. After the fact, we visually screened the temporal basis functions, and discarded the last basis function if it was “unpaired”; that is, if it did not have a similar-frequency basis function with a phase offset.

Quantifying the recoverability of temporal basis functions

Basis function decompositions, such as the Fourier transform, have been applied widely to decompose complicated datasets into linear sums of more interpretable functions. We therefore needed a method to ensure that the factorization described above was identifying rotations in motor cortex activity, and not simply acting as an arbitrary basis function decomposition. To measure this, we exploited a key distinction between rotations and oscillations. Rotations require that each temporal basis function be embedded in a distinct dimension, so that each sine/cosine pair of temporal basis functions occupies a plane, and each different-frequency rotation occupies a different plane. In general, basis function expansions will not meet these requirements. We therefore attempted to “recover” the temporal basis functions from neural activity as:

$$\mathbf{B}'^{(c)}=\mathbf{L}^{(c)\dagger}\mathbf{X}^{(c)}$$
(6)
where B’(c) is the matrix of recovered temporal basis functions for condition c and † indicates pseudo-inversion. We quantified the “recoverability” of temporal basis functions on condition c as the variance explained in the temporal basis functions by the projected neural activity. If this metric is 1, it suggests that the condition’s loading matrix is invertible, and therefore fulfills the requirements of rotations. More specifically, we expect recoverability to fail when two (or more) temporal basis functions are assigned the same dimension in state space by the loading matrix. As the temporal basis functions dynamically support one another, such overlap would preclude neural activity from acting as a dynamical system.

Control for data smoothing

One potential concern is that the factorization described above works well on neural data simply because the data are smoothed before analysis, and smoothing limits the frequency spectrum and therefore allows neural activity to be reconstructed by a small number of temporal basis functions. To test this possibility, for each trial of each condition independently we shuffled time bins before smoothing and averaging, to disrupt the temporal structure of motor cortex activity while preserving inter-neuron spike correlations and total spike counts. We then extracted temporal basis functions, with the number of temporal basis functions matched to the original dataset, and quantified the population variance explained and the recoverability (defined above) of the temporal basis functions. We compared this null distribution to the data distribution using a Wilcoxon signed-rank test.

Control for stereotyped time course of behavior

One potential concern is that the factorization described above can find common frequencies between conditions simply because reaches take similar amounts of time, and motor cortex activity is likely modulated by movement onset, offset, and speed. If this hypothesis were correct, then time-warping reaches and the corresponding motor cortex activity to equalize reach duration should strengthen the recoverability of the temporal basis functions, as aspects of motor cortex activity are warped to become more aligned. To test this, we used linear time-warping to equalize all reaches to 600 ms duration, then time-warped motor cortex activity correspondingly. For the post-reach activity, in which motor cortex firing rates are still changing, we tested two methods: one warped the post-reach activity to uniformly last 250 ms, and the other truncated post-reach activity at 850 ms after movement onset. Both methods produced similar results; the latter variant is shown in Fig. 3i. For both methods we then extracted the same number of temporal basis functions as in the original dataset, and quantified the recoverability of the temporal basis functions as a measure of model fit. We then quantified the correlation between the recoverability and how much a reach condition’s duration differed from the mean. This was done for both the original and warped datasets.

Subspace excursion angles

A set of vectors may be high-dimensional because they vary slightly from an otherwise low-dimensional plane, or because they are truly spread out and ‘fill’ the high-dimensional space. For example, 3-dimensional vectors can be chosen from within a cube or from within a narrow cone; either way, the space spanned is 3-dimensional, but if the vectors are chosen from the cube they will sweep ‘farther’ into the three dimensions.
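As a toy illustration of this distinction (not part of the analysis pipeline; all names and parameters here are illustrative), principal angles can be used to compare how far vectors drawn from a narrow cone versus vectors drawn from the full space sweep away from a seed direction, even though both sets span three dimensions:

```python
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(0)
dim, n_vectors = 3, 50

# Vectors drawn from the full space ("cube") versus from a narrow cone around one axis.
cube = rng.standard_normal((dim, n_vectors))
cone = np.vstack([np.ones((1, n_vectors)),
                  0.1 * rng.standard_normal((dim - 1, n_vectors))])

for name, V in [("cube", cube), ("cone", cone)]:
    # Largest principal angle between the line spanned by a seed vector and any other vector.
    angles = [np.degrees(subspace_angles(V[:, [0]], V[:, [j]]).max())
              for j in range(1, n_vectors)]
    print(f"{name}: largest angle to the seed vector = {max(angles):.1f} deg")
```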
We attempted to distinguish these possibilities for the planes occupied by neural dynamics using a metric, subspace excursion angles. More generally, in this analysis we sought to understand whether, for a set of low-dimensional subspaces {S(1), …, S(c)} of a high-dimensional space, these subspaces varied only slightly relative to one another, forming small angles, or whether they truly “pivoted” far into many different dimensions and formed large angles with one another. Here, each S(i) was the dimension(s) for a given component on a given condition (taken from the column(s) of the loading matrices), embedded in high-dimensional neural state space. Existing methods of estimating dimensionality do not distinguish slight variations from more substantial variations, as long as the occupancy of each additional dimension is above the noise level. To better characterize the geometry of this set of subspaces, we took one of the observed subspaces S(i) as a seed and found the greatest subspace angle (the principal angle) to any other observed subspace S(j). We then replaced S(i) with the union of S(i) and the vector in S(j) that formed the greatest angle with S(i). We repeated this process to find the next greatest angle to the augmented S(i), and so forth. This search was optimized over all possible sequences to find the sequence with the largest angles.

Estimating the dimensionality of each rotation’s orientations

We supplemented our estimates of each rotation’s dimensionality in three ways. First, we used cross-validation to measure the dimensionality of each rotation’s orientation. For this measure, we defined the dimensionality as the number of dimensions in the loading matrices with an SNR > 1 (analogous to the cross-validation procedure in ref. 64). To estimate this dimensionality, we partitioned trials in half to extract two independent estimates of each condition’s neural activity. We then extracted temporal basis functions from partition 1 and used regression to fit loading matrices to both partitions separately. We then performed PCA on the dimensions occupied by these rotations (the loading matrices) in partition 1. Sweeping over the number of dimensions retained, we quantified the alignment index between the dimensionality-bottlenecked estimate of the loading matrices from partition 1 and the unmodified loading matrices from partition 2. We averaged alignment indices over 100 folds and all conditions, and chose the dimensionality that maximized the alignment index between estimates. Second, we performed PCA on each rotation across conditions independently, to estimate the number of dimensions in neural state space needed to capture 80% of the rotation’s variance across conditions. Finally, we used the previous PCA to calculate the participation ratio of that rotation across conditions.

Measuring the across-condition alignment of rotations

We found that neural activity always contained rotations at a small number of fixed frequencies, but on different conditions neural dynamics potentially occupied dissimilar rotational planes. To determine whether each of the rotations occupied different planes on different conditions, or whether only some of the rotations did, we probed each rotation separately. To do so, we first grouped the pairs of identified temporal basis functions that shared the same frequency into rotations (a sine and cosine pair). Empirically, this produced 3–4 pairs of same-frequency sinusoidal temporal basis functions, and a single activity mode with zero frequency.
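For reference, a simplified single-seed sketch of the excursion-angle search described above is given below; the full analysis optimizes over seed sequences, and the function and variable names are assumptions. Each subspace is assumed to be supplied as a matrix whose columns span it.

```python
import numpy as np
from scipy.linalg import orth

def largest_angle_direction(S, T):
    """Largest principal angle between subspaces S and T (orthonormal columns),
    and the direction in T that realizes it."""
    _, sing, Vt = np.linalg.svd(S.T @ T, full_matrices=False)
    theta = np.arccos(np.clip(sing[-1], 0.0, 1.0))   # smallest cosine -> largest angle
    return theta, T @ Vt[-1]

def subspace_excursion_angles(subspaces, seed_index=0):
    """Greedy single-seed sketch: repeatedly find the subspace forming the largest
    principal angle with the growing seed, then absorb the offending direction."""
    S = orth(subspaces[seed_index])
    others = [orth(T) for i, T in enumerate(subspaces) if i != seed_index]
    angles = []
    while others and S.shape[1] < S.shape[0]:
        results = [largest_angle_direction(S, T) for T in others]
        j = int(np.argmax([theta for theta, _ in results]))
        theta, direction = results[j]
        if theta < 1e-10:
            break                                     # remaining subspaces lie within S
        angles.append(np.degrees(theta))
        S = orth(np.column_stack([S, direction]))     # augment the seed subspace
    return angles
```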
We could then correspondingly group the columns of each condition’s loading matrix that corresponded to single rotations. We then computed the alignment index for pairs of conditions for each rotation. For the state space location, the analysis was identical but with a single dimension instead of a pair. To determine whether one rotation ever used the same dimensions as a different-frequency rotation on a different condition (Fig. 4b), we performed the same analysis but with the two planes coming from different rotations (different column pairs of L(c)). As before, the two planes came from different conditions. To put these alignment indices into context, it was necessary to understand what to expect due to estimation noise (gray distribution in Fig. 4b). To find the distribution expected from estimation noise, for each condition we randomly assigned trials to one of two partitions, then smoothed and trial-averaged each partition to estimate firing rates. We then used linear regression to identify the loading matrix for each partition independently. Finally, we quantified the alignment index for rotational planes of the same frequency between these independent estimates of the same condition. We repeated this procedure 36 times to estimate a distribution of alignment indices.

Explaining variability in jPCA-identified rotations

To link our findings with previous work, we applied jPCA to neural activity (ref. 27). As in the original work, we applied jPCA with mean subtraction, a soft-normalization constant of 5, and after reconstructing neural activity with temporal basis functions (which we note is a population-level version of the previously used “PC smoothing”). Based on visual inspection and the results in the original work, we kept the first two of the discovered jPCA planes for M1 as “rotational”, while for PMd we kept only the first plane. To quantify the fit of jPCA to the total activity, we computed the percentage of population variance contained in the plane(s). To quantify total rotational activity on condition c, we quantified the variance due to rotational temporal basis functions from the decomposition in our Eq. (3). To compare to jPCA, prior to quantifying the variance or extracting temporal basis functions we soft-normalized and mean-subtracted activity identically to jPCA. We then quantified the Pearson correlation of this variance measure with the variance in each jPCA plane across conditions, for each retained jPCA plane independently. To quantify the alignment between neural activity and the jPCA plane on condition c (Fig. 4f), we computed the alignment index between the total rotations in neural activity and the jPCA plane for each condition. We then quantified the Pearson correlation of this alignment index with the variance in each jPCA plane across conditions, for each retained jPCA plane independently.

Predicting rotations from other rotations or the state space location

On a single condition, that condition’s loading matrix describes the state space location and each rotation’s parameters: the rotational plane, along with the initial location of the population state within the plane. To demonstrate that the rotations’ parameters were not independent, we predicted the parameters of one rotation from another’s on the same condition. Each rotation’s parameters on a single condition are described by a pair of columns in the loading matrix.
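Both the alignment analysis above and the prediction analysis that follows operate on these column pairs. A minimal sketch of an alignment index between one rotation’s planes on two conditions is given below; the exact normalization used in the paper may differ, and the array names are assumptions.

```python
import numpy as np
from scipy.linalg import orth

def alignment_index(L_rot_a, L_rot_b, B_rot):
    """Fraction of condition a's rotational variance captured in condition b's plane.

    L_rot_a, L_rot_b : (n_neurons, 2) column pairs from the two conditions' loading matrices.
    B_rot            : (2, n_time) the corresponding pair of temporal basis functions.
    """
    X_a = L_rot_a @ B_rot              # condition a's activity due to this rotation
    Q_b = orth(L_rot_b)                # orthonormal basis for condition b's rotational plane
    var_total = np.sum(X_a ** 2)
    var_in_plane = np.sum((Q_b.T @ X_a) ** 2)
    return var_in_plane / var_total
```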
We therefore used linear regression to predict the pair of columns describing one rotation’s parameters from the pair of columns describing a different rotation’s parameters. This was done by vectorizing each pair of columns for the predictor rotation and the predicted rotation, with each condition acting as an observation, then using ridge regression (lambda = 0.1) to relate the two rotations. We cross-validated this model with 6-fold cross-validation: fitting temporal basis functions, loading matrices, and the regression model to 5/6 of the conditions, then testing on the remaining 1/6. To quantify model performance, we reshaped the output of the model back into a pair of column vectors, and then multiplied by the two associated temporal basis functions to produce the “predicted rotation”. We correspondingly multiplied the two empirical columns of the loading matrix by the same temporal basis functions to produce the “true rotation”. We then quantified the variance explained in the true rotation by the predicted rotation. We repeated this procedure over all conditions and pairs of rotations.

We similarly fit a nonlinear version of this model using kernel ridge regression (ref. 65) (lambda = 0.1, Gaussian kernel, length-scale = 400). Parameters for this and all other kernel regressions were optimized by grid search. As a null distribution, we independently shuffled predictor and predicted rotations between conditions, quantified the variance explained by model output, and tested for differences from the true distribution using a Wilcoxon signed-rank test.

To extend this procedure to the state space location, we used the column of the loading matrix associated with the state space location as a predictor, and predicted the (vectorized) remainder of the loading matrix using ridge regression (lambda = 0.1). We identically used 6-fold cross-validation for this model. To quantify model performance, we concatenated the observed state space location column with the (reshaped) output of the model, multiplied the resulting matrix by the temporal basis functions, then computed the variance explained in that condition’s neural activity. We similarly fit a nonlinear version of this model using kernel ridge regression (lambda = 0.1, Gaussian kernel, length-scale = 400). As a null distribution, we independently shuffled state space locations and rotations between conditions, quantified the variance explained by model output, and tested for differences from the true distribution using a Wilcoxon signed-rank test.

LDR encoding

On a single condition, the state space location and the rotations’ parameters are given by that condition’s loading matrix. By contrast, the temporal basis functions describing the frequencies of these rotations are common across conditions. To predict neural activity from kinematics, we therefore used linear regression to predict a condition’s loading matrix from that condition’s kinematics. To both regularize the model and allow for model interpretability, we reduced the dimensionality of the kinematics on each condition using linear dimensionality reduction. We regressed each condition’s hand position over time against two kinematic basis functions to extract four “reach parameters” that described target location and reach curvature along the x- and y-axes. We vectorized each condition’s loading matrix, and used ridge regression (lambda = 0.1) to fit a map from these four coefficients to a condition’s loading matrix.
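A minimal sketch of this encoding map is given below, with cross-validation omitted; the array names and shapes are assumptions, and the mapping of lambda onto scikit-learn’s alpha parameter is approximate.

```python
import numpy as np
from sklearn.linear_model import Ridge

def fit_ldr_encoder(reach_params, L, lam=0.1):
    """Map 4 reach parameters per condition to that condition's vectorized loading matrix.

    reach_params : (n_conditions, 4) reach parameters.
    L            : (n_conditions, n_neurons, k) loading matrices.
    """
    targets = L.reshape(L.shape[0], -1)            # vectorize each loading matrix
    return Ridge(alpha=lam).fit(reach_params, targets)

def predict_activity(model, reach_params_test, B, n_neurons, k):
    """Predict held-out loading matrices and reconstruct activity as L @ B (Eq. 3)."""
    L_hat = model.predict(reach_params_test).reshape(-1, n_neurons, k)
    return L_hat @ B                               # conditions x neurons x time
```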
We used 6-fold cross-validation, fitting temporal basis functions, loading matrices, and the regression model to 5/6 of the conditions, then quantifying model performance on the remaining 1/6 of the conditions. We quantified model performance by reshaping the model output, multiplying by the temporal basis functions, then quantifying the variance explained in neural activity. We similarly fit a nonlinear version of this model using kernel regression (Ornstein-Uhlenbeck kernel, length-scale = 103). As a null distribution, we shuffled neural activity and kinematics between conditions, quantified the variance explained by model output, and tested for differences from the true distribution using a Wilcoxon signed-rank test.

Extracting kinematic basis functions

To perform sequence-to-sequence mapping of an entire neural trajectory to an entire kinematic trajectory, we required a low-dimensional and meaningful summary representation of reaching kinematics over the entire trial. We therefore performed an SVD on the x- and y-coordinate velocity of the hand on all conditions in all our datasets (Fig. 6d). The top two velocity singular vectors were interpretable. The first velocity singular vector was a unimodal trace, with a rapid rise from zero beginning around movement onset, a peak around 250 ms after movement onset, and a slower decay toward zero. The second velocity singular vector was a biphasic trace, with alternating up and down modes, and approximately represented the derivative of the first vector. The first velocity singular vector therefore mostly contained information about the straight reaching velocity component, while the second velocity singular vector contained information about reach curvature or target overshoot. We fit these two kinematic basis functions as a skew-normal curve and the derivative of a skew-normal curve. Decoding results were essentially identical if the raw vectors from the SVD were used, but this fit ensures that the position and velocity variants of the model are identical. We optimized parameters using gradient descent. After fitting these components, we analytically integrated them to obtain the equivalent basis functions over position.

Controlling for nonlinear LDR encoding

One potential concern with nonlinear regression models is that the model will collapse points in the test set toward the nearest points in the training set, acting not as a continuous regression model but as a nearest-neighbor regression model. To control for this possibility, we divided target locations into sextants. We trained nonlinear LDR encoding models on 5/6 of the sextants and tested on the held-out sextant. For comparison, we trained a nonlinear LDR encoding model that used nearest-neighbor regression in place of kernel regression, with the same cross-validation. This explicitly compares nonlinear regression’s performance to memorization of the training set. We compared distributions of variance explained using either approach with a Wilcoxon signed-rank test.

Comparing the LDR encoding with standard instantaneous encoding models

To compare our encoding model based on LDR to standard, instantaneous tuning models, we fit several forms of encoding model to motor cortex activity. In particular, we compared (a sketch of the kernel-regression variant follows the list):

Linear regression to predict motor cortex activity as a linear combination of baseline firing rate, position, extent, velocity, speed, velocity angle, and acceleration.

Kernel regression to predict motor cortex activity from hand position.

Kernel regression to predict motor cortex activity from hand velocity.
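As a concrete illustration of the kernel-regression encoders above, the sketch below implements kernel ridge regression with an Ornstein-Uhlenbeck kernel directly in NumPy; all array names are assumptions, and the lag and length-scale details follow in the next paragraph.

```python
import numpy as np

def ou_kernel(A, B, length_scale):
    """Ornstein-Uhlenbeck kernel: exp(-||a - b|| / length_scale) for rows of A and B."""
    dists = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return np.exp(-dists / length_scale)

def kernel_ridge_fit_predict(X_train, Y_train, X_test, length_scale, lam=0.1):
    """Kernel ridge regression: weights = (K + lam*I)^-1 Y, predictions = K_test @ weights."""
    K = ou_kernel(X_train, X_train, length_scale)
    weights = np.linalg.solve(K + lam * np.eye(len(X_train)), Y_train)
    return ou_kernel(X_test, X_train, length_scale) @ weights

# Example: predict firing rates at each time bin from (lagged) hand kinematics.
# X: (n_samples, 2) hand position or velocity; Y: (n_samples, n_neurons) firing rates.
# rates_hat = kernel_ridge_fit_predict(X_train, Y_train, X_test, length_scale=..., lam=0.1)
```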

For all regressions, we included a 100 ms causal lag between kinematics and neural activity. For linear regression, we used ridge regression (lambda = 0.1). For kernel regression, we used the Ornstein-Uhlenbeck kernel with a length-scale of 1012 for position encoding and 1011 for velocity encoding. For each encoding method, we then quantified the variance explained in motor cortex activity, using each condition as a data point for the distributions. Finally, we compared these distributions to the distribution obtained from LDR encoding using a Wilcoxon signed-rank test.

LDR-based decoding

We fit an LDR-based linear decoder that directly used the state space location and rotation orientations to decode reach kinematics. The predictors for this decoder consisted of the vectorized loading matrices of each condition. The targets of the model consisted of the four reach parameters described earlier, which capture target position and reach curvature. We used 6-fold cross-validation: fitting temporal basis functions, loading matrices, and the decoder to 1/2 of the conditions (trial-averaged). For the remaining 1/2 of the conditions, we decoded from single trials rather than trial-averaged activity. To decode, we estimated the loading matrix for each trial by pseudo-inverting the temporal basis functions and multiplying with the trial’s spike counts (see Supplementary). We then vectorized this estimated loading matrix and used the model to predict the four coefficients. Finally, we multiplied these coefficients by the kinematic basis functions to produce an estimate of reach position over time. We quantified model fit as the variance explained in reach position over time. As a null distribution, we shuffled neural activity and kinematics between conditions, quantified the variance explained, and compared with the true distribution using a Wilcoxon signed-rank test.

Comparing LDR-based decoding to instantaneous decoding methods

We benchmarked our decoding methods against advanced versions of more traditional decoding models, which decode the value of kinematic parameters at an instant in time from motor cortex firing rates at an earlier instant in time. As exemplary instantaneous decoding methods, we used (the linear variant is sketched after the list):

Linear regression to decode hand position or hand velocity, with a lag of 100 ms, on spiking activity smoothed with a 20 or 100 millisecond Gaussian kernel.

Kernel regression to decode hand position or hand velocity, with a lag of 100 ms, on spiking activity smoothed with a 20 or 100 millisecond Gaussian kernel, trained on a subset of single-trial data.

Same as above, but trained on trial-averaged activity.

Linear or kernel regression to decode hand position or hand velocity, with a lag of 100 ms, on single-trial trajectories inferred by Gaussian Process Factor Analysis (GPFA).
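As a concrete illustration of the linear instantaneous decoders above, a minimal lagged ridge decoder is sketched below; array names are assumptions, and a 100 ms lag corresponds to 10 bins at the 10 ms resolution used here.

```python
import numpy as np
from sklearn.linear_model import Ridge

def lagged_linear_decoder(rates, kinematics, lag_bins=10, lam=1.0):
    """Decode the kinematic value at time t from firing rates at time t - lag.

    rates      : (n_timebins, n_neurons) smoothed firing rates.
    kinematics : (n_timebins, 2) hand position or velocity.
    """
    X = rates[:-lag_bins]            # neural activity leads the kinematics by `lag_bins`
    Y = kinematics[lag_bins:]
    return Ridge(alpha=lam).fit(X, Y)

# model = lagged_linear_decoder(rates_train, vel_train)
# vel_hat = model.predict(rates_test[:-10])   # paired with vel_test[10:]
```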

For linear regression, we used ridge regression (lambda = 1). For kernel regression, we used the Ornstein-Uhlenbeck kernel with a length-scale of 102 and an L2-penalty of 0.1. For each decoding method, we quantified the variance explained in hand position for each condition. We compared each of these distributions to the data distribution using a Wilcoxon signed-rank test.

Quantifying the trial-to-trial fidelity of decoding within condition

An ideal kinematic decoder would follow variations in individual movements, not categorically decode condition identity and output the kinematics corresponding to that condition. We wished to test whether the LDR-based decoder acted akin to a nearest-neighbor classifier, detecting which condition a trial was most similar to and returning the corresponding average reach. To test whether our decoder was continuous or categorical, we asked whether it could decode the residuals of the reach parameters within a condition. Specifically, we considered the third and fourth reach parameters, which describe curvature. We then quantified the Pearson correlations between the decoded and true reach parameters.

Reporting summary

Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.
