Reconstructing 3D hand movements from EEG. Bradberry et al.
Supplemental Materials and Methods
Experimental procedure. The Institutional Review Board of the University of Maryland at College Park
approved the experimental procedure. After giving informed consent, five healthy, right-handed subjects sat
upright in a chair and executed self-initiated center-out reaches to self-selected push button targets near eye-
level (Fig. 1). We instructed subjects to attempt to make uniformly distributed random selections of the eight
targets without counting. The elbow of the reaching arm was unsupported, and the non-reaching arm relaxed in
the lap. Subjects took approximately 4 s to reach to the peripheral target and then return to the center target. To
mitigate the influence of eye movements on reconstruction, subjects were instructed to fixate an LED on the
center target throughout data collection and to only blink when their hand was resting at the center target. To
ensure the minimization of eye movements, a researcher monitored the subjects' eyes during data collection,
and the correlation between electro-ocular activity and hand kinematics was analyzed off-line (supplemental
Fig. 1). For each subject, the experiment concluded after each target was acquired at least ten times. While the
required movements were familiar to the subjects, none of the subjects had previous experience with the task.
Data collection. A 64-sensor Electro-Cap was placed on the head according to the extended International 10-20
system with ear-linked reference and used to collect 58 channels of EEG activity. Continuous EEG signals were
sampled at 1000 Hz and amplified 500 times via a Synamps I acquisition system and Neuroscan v.4.2 software.
Additionally the EEG signals were band-pass filtered from 0.5 to 100 Hz and notch filtered at 60 Hz. Electro-
ocular activity was measured with a bipolar sensor montage with sensors attached superior and inferior to the
orbital fossa of the right eye for vertical eye movements and to the external canthi for horizontal eye
movements. Hand position was sampled at 100 Hz using an Optotrak motion sensing system (Northern Digital,
Inc) that tracked an infrared LED secured to the fingertip with double-sided adhesive tape. Event markers of
push button presses and releases were sent from the apparatus containing the push buttons to the Neuroscan and
Optotrak systems for off-line synchronization of EEG and kinematic data.
Signal pre-processing. For computational efficiency and to match the sampling rate of the kinematic data, the
EEG data were decimated from 1 kHz to 100 Hz by applying a low-pass anti-aliasing filter with a cutoff
frequency of 40 Hz and then downsampling by a factor of 10. A zero-phase, fourth-order, low-pass Butterworth
filter with a cutoff frequency of 1 Hz was then applied to the kinematic and EEG data. The cutoff frequency
was determined experimentally, informed by previous non-invasive and ECoG studies that demonstrated the importance of low frequencies for non-invasive decoding (Jerbi et al., 2007; Schalk et al., 2007; Waldert et al., 2008; Bradberry et al., 2008, 2009a). Next, the temporal difference of the EEG data was computed (i.e., $v_n[t] = u_n[t] - u_n[t-1]$, where $v_n[t]$ and $u_n[t]$ are respectively the backward-differenced and pre-differenced EEG voltage of sensor n at time t). In order to examine relative sensor contributions in the scalp map analysis
described in a section below, data from each EEG sensor were standardized according to Equation 1:

$$S_n[t] = \frac{v_n[t] - \mu_{v_n}}{\sigma_{v_n}} \quad \text{for all } n \text{ from 1 to } N \qquad (1)$$

where $S_n[t]$ and $v_n[t]$ are respectively the standardized and differenced voltage at sensor n at time t, $\mu_{v_n}$ and $\sigma_{v_n}$ are the mean and standard deviation of $v_n$, respectively, and N is the number of sensors.
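For illustration, the pre-processing chain described above can be sketched in a few lines of Python with NumPy/SciPy. This is an outline under assumed array shapes, not the pipeline actually used; in particular, scipy.signal.decimate applies its own anti-aliasing filter rather than the 40 Hz filter described above, and the variable names are invented here.

```python
# Illustrative sketch of the pre-processing chain (not the authors' code).
# Assumes `eeg` is an (n_samples, n_sensors) array sampled at 1 kHz.
import numpy as np
from scipy.signal import butter, filtfilt, decimate

def preprocess(eeg, fs_in=1000, fs_out=100, cutoff_hz=1.0):
    # Decimate 1 kHz -> 100 Hz; decimate applies an anti-aliasing filter internally
    # (not the 40 Hz filter described in the text).
    eeg = decimate(eeg, q=fs_in // fs_out, axis=0, ftype="fir")

    # Zero-phase low-pass Butterworth at 1 Hz (filtfilt gives zero phase).
    b, a = butter(4, cutoff_hz / (fs_out / 2), btype="low")
    eeg = filtfilt(b, a, eeg, axis=0)

    # Backward temporal difference: v_n[t] = u_n[t] - u_n[t-1].
    v = np.diff(eeg, axis=0)

    # Standardize each sensor (Eq. 1): S_n[t] = (v_n[t] - mean) / std.
    return (v - v.mean(axis=0)) / v.std(axis=0)
```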
Decoding method. To continuously decode hand velocity from the EEG signals, a linear decoding model was
employed similar to that described by Georgopoulos et al. (2005) for MEG signals. In general, the model finds a
linear combination of past and present time series data from multiple EEG sensors that reconstructs the current
kinematic sample of a dimension of hand velocity. In equation form:
$$x[t] - x[t-1] = a_x + \sum_{n=1}^{N} \sum_{k=0}^{L} b_{nkx}\, S_n[t-k] \qquad (2)$$
$$y[t] - y[t-1] = a_y + \sum_{n=1}^{N} \sum_{k=0}^{L} b_{nky}\, S_n[t-k] \qquad (3)$$

$$z[t] - z[t-1] = a_z + \sum_{n=1}^{N} \sum_{k=0}^{L} b_{nkz}\, S_n[t-k] \qquad (4)$$
where $x[t]-x[t-1]$, $y[t]-y[t-1]$, and $z[t]-z[t-1]$ are respectively the horizontal, vertical, and depth velocities of the hand at time sample t, N is the number of EEG sensors, L (= 10) is the number of time lags, $S_n[t-k]$ is the standardized difference in voltage measured at EEG sensor n at time lag k, and the a and b variables are weights obtained through multiple linear regression. The number of lags (L = 10, corresponding to 100 ms) was chosen based on a previous study that reconstructed hand kinematics from neural signals acquired with MEG (Bradberry et al., 2009a). The three most frontal sensors (FP1, FPZ, and FP2 of the International 10-20 system) were excluded from the analysis to further mitigate the influence of any eye movements on reconstruction, resulting in an N of 55 sensors.
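The lagged regression of Eqs. 2-4 amounts to an ordinary least-squares problem on a design matrix of present and lagged EEG samples. The sketch below is a hypothetical Python/NumPy illustration; the helper names, array shapes, and the use of np.linalg.lstsq are assumptions made here for clarity, not the authors' implementation.

```python
# Minimal sketch of the lagged linear decoder (Eqs. 2-4); names are illustrative.
# `S` is an (n_samples, N) array of standardized, differenced EEG (Eq. 1);
# `vel` is an (n_samples, 3) array of hand-velocity differences (x, y, z).
import numpy as np

def build_design_matrix(S, n_lags=10):
    """Stack present and lagged EEG: columns = intercept + N * (n_lags + 1)."""
    T, N = S.shape
    t0 = n_lags                       # first sample with a full set of lags
    cols = [np.ones(T - t0)]          # intercept (the a term)
    for k in range(n_lags + 1):       # lags k = 0 .. L
        cols.append(S[t0 - k:T - k, :])
    return np.column_stack(cols), t0

def fit_decoder(S, vel, n_lags=10):
    """Least-squares weights mapping lagged EEG to each velocity dimension."""
    X, t0 = build_design_matrix(S, n_lags)
    W, *_ = np.linalg.lstsq(X, vel[t0:], rcond=None)   # one column per dimension
    return W

def predict(S, W, n_lags=10):
    X, t0 = build_design_matrix(S, n_lags)
    return X @ W, t0
```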
For each subject, the collected continuous data contained approximately 80 trials. An 8x8-fold cross-validation
procedure was employed to assess the decoding accuracy. In this procedure, the entire continuous data were
divided into 8 parts, 7 parts were used for training, and the remaining part was used for testing. The velocity
data and EEG data were synchronized, so that if m samples of velocity were to be reconstructed then the aligned
m samples of EEG data from a single sensor were used along with 10 lagged versions of these m EEG samples
for a total of m(10 + 1) samples per sensor (plus one for the offset a). Based on the sampling rate of 100 Hz and a collection duration of approximately 5 minutes per subject, m was about 26,250 samples per training fold and 3,750 samples per testing fold. The cross-validation procedure was considered complete when all 8 combinations of training and testing data had been exhausted, and the mean Pearson correlation coefficient (r)
between measured and reconstructed kinematics was computed across folds. Prior to computing r, the kinematic
signals were smoothed with a zero-phase, fourth-order, low-pass Butterworth filter with a cutoff frequency of 1
Hz.
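A schematic version of the fold loop might look as follows, reusing the hypothetical fit_decoder/predict helpers from the previous sketch. The contiguous-segment split, the concatenation of the remaining folds for training (which ignores lag discontinuities at the removed segment), and the per-dimension Pearson r are simplifying assumptions about details the text leaves open.

```python
# Hypothetical sketch of the 8-fold cross-validation over contiguous segments.
import numpy as np
from scipy.stats import pearsonr

def cross_validate(S, vel, n_folds=8, n_lags=10):
    T = S.shape[0]
    edges = np.linspace(0, T, n_folds + 1).astype(int)   # contiguous fold boundaries
    rs = []
    for i in range(n_folds):
        test = np.arange(edges[i], edges[i + 1])
        train = np.setdiff1d(np.arange(T), test)
        W = fit_decoder(S[train], vel[train], n_lags)
        pred, t0 = predict(S[test], W, n_lags)
        # Pearson r per kinematic dimension for this fold.
        rs.append([pearsonr(pred[:, d], vel[test][t0:, d])[0] for d in range(3)])
    return np.mean(rs, axis=0)   # mean r across folds for x, y, z
```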
Sensor Sensitivity Curves. Curves depicting the relationship between decoding accuracy and the number of
sensors used in the decoding method were plotted for the x, y, and z dimensions of hand velocity. First, for each
subject, each of the 55 sensors was assigned a rank according to
$$R_n = \sum_{k=0}^{L} \sqrt{b_{nkx}^2 + b_{nky}^2 + b_{nkz}^2} \quad \text{for all } n \text{ from 1 to } N \qquad (5)$$

where $R_n$ is the rank of sensor n, and the b variables are the best regression weights. This ranking procedure is
similar to the one described by Sanchez et al. (2004). Next, the decoding method with cross-validation as
described above and ranking method were iteratively executed using backward elimination with a decrement
step of three (52 highest-ranked sensors, 49 highest-ranked sensors, 46 highest-ranked sensors, etc.). The mean
and standard error of the mean (SEM) of r values computed across subjects were plotted against the number of
sensors.
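The ranking of Eq. 5 and the backward-elimination loop can be sketched as below, reusing the cross_validate helper from the previous sketch. Here the ranks are computed once from a weight array B of assumed shape (sensors, lags + 1, dimensions), whereas the text describes re-running the ranking iteratively, so this is a simplified illustration rather than the procedure actually used.

```python
# Illustrative sketch of sensor ranking (Eq. 5) and backward elimination.
import numpy as np

def rank_sensors(B):
    """R_n = sum over lags of sqrt(b_nkx^2 + b_nky^2 + b_nkz^2)."""
    return np.sqrt((B ** 2).sum(axis=2)).sum(axis=1)

def backward_elimination(S, vel, B, step=3, n_lags=10):
    """Drop the lowest-ranked sensors in steps of three, re-running CV each time."""
    order = np.argsort(rank_sensors(B))[::-1]        # highest-ranked sensors first
    curve = []
    for n_keep in range(len(order), 0, -step):       # 55, 52, 49, ...
        keep = order[:n_keep]
        r_xyz = cross_validate(S[:, keep], vel, n_lags=n_lags)
        curve.append((n_keep, r_xyz))
    return curve
```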
Scalp maps of sensor contributions. To graphically assess the relative contributions of scalp regions to the
reconstruction of hand velocity, the across-subject mean of the magnitude of the best b vectors (from Eqs. 2-4)
was projected onto a time series (from -100 to 0 ms in increments of 10 ms) of scalp maps. These spatial renderings of
sensor contributions were produced by the topoplot function of EEGLAB, an open-source MATLAB toolbox
for electrophysiological data processing (Delorme and Makeig, 2004; http://sccn.ucsd.edu/eeglab/), that
performs biharmonic spline interpolation of the sensor values before plotting them (Sandwell, 1987). To
examine which time lags were the most important for decoding, for each scalp map, the percentage of
reconstruction contribution was defined as
$$\%T_i = 100\% \times \frac{\sum_{n=1}^{N} \sqrt{b_{nix}^2 + b_{niy}^2 + b_{niz}^2}}{\sum_{n=1}^{N}\sum_{k=0}^{L} \sqrt{b_{nkx}^2 + b_{nky}^2 + b_{nkz}^2}} \quad \text{for all } i \text{ from 0 to } L \qquad (6)$$

where $\%T_i$ is the percentage of reconstruction contribution for a scalp map at time lag i.
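Given the same (sensors x lags x dimensions) weight array assumed in the earlier sketches, Eq. 6 reduces to a couple of NumPy lines; again this is an illustrative reading of the formula rather than the authors' code.

```python
# Sketch of the per-lag contribution (Eq. 6) used to label the scalp maps.
import numpy as np

def lag_contributions(B):
    """Percentage of reconstruction contribution for each time lag."""
    # B is assumed shaped (n_sensors, n_lags + 1, 3) for the x, y, z weights.
    mag = np.sqrt((B ** 2).sum(axis=2))          # magnitude per sensor and lag
    return 100.0 * mag.sum(axis=0) / mag.sum()   # one percentage per lag, sums to 100
```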
Source estimation with sLORETA. To better estimate the sources of hand velocity encoding, we used
standardized low-resolution brain electromagnetic tomography (sLORETA) software version 20081104
(Pascual-Marqui, 2002; http://www.uzh.ch/keyinst/loreta.htm). Preprocessed EEG signals from all 55 channels
for each subject were fed to sLORETA to estimate current sources. These EEG signals had been pre-processed
in the same manner as for decoding: standardized, downsampled, and low-pass filtered. First, r values were
computed between the squared time series of each of the 55 sensors with the 6239 time series from the
sLORETA solution and then averaged across subjects. Second, the maximum r was assigned to each voxel after
being multiplied by the regression weight b (from Eqs. 2-4) of its associated sensor. The regression weights had
been pulled from the regression solution at time lag —60 ms, which had the highest percentage of reconstruction
contribution. Third, for visualization purposes, the highest 5% of the voxels (r values weighted by b) were set to
the value one, and the rest of the r values were set to zero. Finally these binary-thresholded r values were
plotted onto axial slices of the brain from the Colin27 volume (Holmes et al., 1998), the magnetic resonance
imaging (MRI) template that best illustrated our regions of interest. All reported coordinates of regions of
interest are in Montreal Neurological Institute (MNI) space.
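The sensor-to-voxel weighting and the 5% threshold described above could be sketched as follows. The array names, the export of the sLORETA source time series into a NumPy array, and the use of a plain correlation matrix are all assumptions made here for illustration; they are not part of the sLORETA software itself.

```python
# Hypothetical sketch of voxel weighting and binary thresholding.
# `sensor_sq` is a (T, 55) array of squared sensor time series, `sources` is a
# (T, n_voxels) array of sLORETA current-source time series, and `b_lag` holds
# each sensor's regression weight at the -60 ms lag.
import numpy as np

def threshold_voxels(sensor_sq, sources, b_lag, top_fraction=0.05):
    T = sensor_sq.shape[0]
    zs = (sensor_sq - sensor_sq.mean(axis=0)) / sensor_sq.std(axis=0)
    zv = (sources - sources.mean(axis=0)) / sources.std(axis=0)
    r = zs.T @ zv / T                        # (55, n_voxels) Pearson r matrix

    best = r.argmax(axis=0)                  # sensor with the largest r per voxel
    weighted = r.max(axis=0) * b_lag[best]   # weight by that sensor's b at -60 ms

    cutoff = np.quantile(weighted, 1.0 - top_fraction)
    return (weighted >= cutoff).astype(int)  # highest 5% -> 1, rest -> 0
```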
Movement variability. For each subject, three measures of movement variability were computed: the coefficient
of variation (CV) for movement time (MT), the CV for movement length (ML), and the kurtosis of movement.
MT and ML were computed on a trial basis with a trial defined as the release of a pushbutton to the press of a
pushbutton (center-to-target or target-to-center). The mean and SD of the measures were then computed, and
the SD was divided by the mean to produce the CV. Kurtosis was defined as
$$k = \frac{E\left[(h - \mu_h)^4\right]}{\sigma_h^4} - 3 \qquad (7)$$

where $k$ is the kurtosis, $E[\cdot]$ is the expected value operator, $h$ is the hand velocity, and $\mu_h$ and $\sigma_h$ are
respectively the mean and SD of the hand velocity. Single trials of velocity profiles for x, y, and z dimensions
were resampled to normalize for length and then concatenated before computing kurtosis. The relationship
between movement variability and decoding accuracy was examined by computing r between the quantities.
The sample sizes were small (n = 5) for decoding accuracy and each measure of movement variability, so
10,000 r values were bootstrapped for each comparison, and the median and confidence intervals of the
resultant non-Gaussian distributions were calculated using the bias-corrected and accelerated (BCa) percentile
method (Efron and Tibshirani, 1998).
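The three variability measures and the bootstrapped correlation could be computed along the following lines in Python with SciPy (version 1.9 or later for the bootstrap_distribution attribute). The helper names and the assumption of already length-normalized trials are hypothetical, and the BCa interval here comes from scipy.stats.bootstrap rather than the authors' own implementation.

```python
# Sketch of the variability measures and the bootstrapped correlation.
import numpy as np
from scipy.stats import kurtosis, pearsonr, bootstrap

def coefficient_of_variation(values):
    """CV = SD / mean, over per-trial movement times or movement lengths."""
    values = np.asarray(values, float)
    return values.std() / values.mean()

def velocity_kurtosis(trials):
    """Eq. 7: excess kurtosis of length-normalized, concatenated velocity profiles."""
    concatenated = np.concatenate(trials)       # trials assumed already resampled
    return kurtosis(concatenated, fisher=True)  # fisher=True already subtracts 3

def bootstrap_correlation(variability, accuracy, n_resamples=10000):
    """Median r and BCa confidence interval over bootstrap resamples (n = 5 pairs)."""
    data = (np.asarray(variability), np.asarray(accuracy))
    res = bootstrap(data, lambda x, y: pearsonr(x, y)[0],
                    paired=True, vectorized=False,
                    n_resamples=n_resamples, method="BCa", random_state=0)
    return np.median(res.bootstrap_distribution), res.confidence_interval
```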
Reference
Sandwell DT (1987) Biharmonic spline interpolation of GEOS-3 and SEASAT altimeter data. Geophys Res Lett 14:139-142.
Supplemental Figure 1
[Figure: two panels, A (vertical EOG) and B (horizontal EOG); x-axis: time lag (ms), -100 to 0; y-axis: correlation coefficient, approximately 0 to 0.2; traces for X, Y, and Z velocity.]
Figure 1. Cross-correlation between electrooculographic activity and hand velocity. When reconstructing a
behavioral variable from neural activity, it is important to ensure the minimization of co-occurring, correlated
behavioral variables that may simultaneously influence neural activity. To this end for our reaching task, in
addition to instructing subjects to fixate a center LED, we needed to confirm that electrooculographic (EOG)
activity only minimally correlated with hand velocity. We computed Pearson's correlation coefficient (r)
between EOG velocity and hand velocity across 10 time lags (-100 to 0 ms), with both signals low-pass filtered at 1 Hz as in the EEG decoding reported in the main portion of this paper. The across-subject mean (n = 5)
correlation coefficients exhibited low correlation of vertical (A) and horizontal (B) EOG velocities with x
(solid), y (dashed), and z (dotted) dimensions of hand velocity.
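A minimal sketch of this lagged correlation check, assuming 1-D EOG and hand-velocity signals sampled at 100 Hz and already low-pass filtered at 1 Hz, might look as follows; the function name and the choice of EOG leading the hand signal at each lag are illustrative assumptions.

```python
# Hypothetical sketch of the EOG vs. hand-velocity lagged correlation.
import numpy as np
from scipy.stats import pearsonr

def lagged_correlation(eog_vel, hand_vel, max_lag=10):
    """Pearson r between EOG velocity and hand velocity at lags 0 to -100 ms."""
    rs = []
    for k in range(max_lag + 1):                          # lag = k * 10 ms
        if k == 0:
            r, _ = pearsonr(eog_vel, hand_vel)
        else:
            r, _ = pearsonr(eog_vel[:-k], hand_vel[k:])   # EOG shifted earlier by k samples
        rs.append(r)
    return np.array(rs)
```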
Supplemental Table 1

Table S1. Comparison to most relevant off-line decoding studies of hand kinematics

Study | Subjects | Neural data | Reaching / drawing task | Cued? | r_x | r_y | r_z | Average r
Wessberg et al., 2000 | monkeys (n = 2) | single units | 3D; table → 1 of 4 food tray positions → mouth | Yes | 0.50* | 0.45* | 0.65* | 0.53
Kim et al., 2006 | monkey (n = 1) | single units | 3D; table → 1 of 4 food tray positions → mouth → table | Yes | — | — | — | 0.44†
Bradberry et al., 2009a | humans (n = 5) | MEG | 2D; center of DT → 1 of 4 peripheral targets of DT → center of DT | Yes | 0.48‡ | 0.32‡ | — | 0.40
Present study | humans (n = 5) | EEG | 3D; center PB → 1 of 8 peripheral PBs → center PB | No | 0.19 | 0.38 | 0.32 | 0.29
DT: drawing tablet, PB: push button
* Since Wessberg et al. (2000) provide the evolution of r over time, and the duration of our task is approximately 5 minutes, we used their reported r_x, r_y, and r_z values at 5 minutes into their task.
† For the Kim et al. (2006) study, we computed the average between their reported r during movement and r during rest for their best decoding method.
‡ For the Bradberry et al. (2009a) study, r_x and r_y were taken from only the pre-exposure phase (no novel visuomotor transformation imposed).