Theory of Task-Adapted Dynamics in Large Recurrent Neural Networks

CMSA MEMBER SEMINAR

When: February 20, 2026
12:00 pm - 1:00 pm
Where: CMSA, 20 Garden St, Common Room
Address: 20 Garden Street, Cambridge, MA 02138, United States
Speaker: Blake Bordelon (Harvard)

Recurrent neural networks (RNNs) encode expressive and flexible dynamical systems that can adapt to perform tasks by modifying the internal connections between neurons. In this work we analyze the structure of the dynamical systems encoded in RNNs after they are trained to perform a learning task. We derive a mean-field theory of the dynamics of RNNs before and after learning. Our theory predicts heterogeneous activity and tuning at the level of single neurons, but makes precise, deterministic predictions for the population-level autocorrelation and the outputs of the network. Further, our theory enables us to interpolate between different operating regimes for RNN learning, including (1) a reservoir computing regime, in which the internal connections do not adapt to the data while the model outputs are fit to it, and (2) a feature-learning regime, in which the internal dynamics of the network change significantly during task learning and come to reflect temporal properties of the task. These regimes exhibit different levels of chaotic activity, different oscillatory behaviors, and different length-generalization properties, as feature learning enables the network to maintain temporal patterns beyond the supervision period. We apply this theory to a biologically grounded motor-learning task in which a recurrent population is trained to output EMG signals recorded from macaque motor units during an oriented reaching task. We find that many levels of feature-learning strength give rise to high-quality fits of the EMG data, resulting in a family of solutions compatible with the neural data. Based on work with David Clark, Jacob Zavatone-Veth, and Cengiz Pehlevan.
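
To make the interpolation between the two regimes concrete, below is a minimal, hypothetical sketch in JAX (not the speaker's code, and not the mean-field parameterization from the talk): a leaky rate RNN is fit to a toy target signal, and a scalar `gamma` rescales the recurrent-weight updates. Taking `gamma` to zero recovers reservoir computing (only the readout is trained), while larger `gamma` lets the internal dynamics adapt to the task, a crude stand-in for the feature-learning regime described above. All names and hyperparameters here are illustrative assumptions.

```python
# Toy sketch, assuming a leaky tanh rate RNN and a scalar `gamma`
# (hypothetical name) controlling feature-learning strength.
import jax
import jax.numpy as jnp

N, T, dt = 200, 200, 0.1               # neurons, time steps, Euler step
k1, k2 = jax.random.split(jax.random.PRNGKey(0))

# Random recurrent matrix with variance g^2 / N (chaotic for g > 1).
g = 1.5
J0 = g * jax.random.normal(k1, (N, N)) / jnp.sqrt(N)
w0 = jax.random.normal(k2, (N,)) / jnp.sqrt(N)     # linear readout

# Toy stand-in for a target output signal (e.g., an EMG trace).
target = jnp.sin(2 * jnp.pi * jnp.arange(T) * dt / 3.0)

def unroll(params):
    J, w = params
    def step(x, _):
        x = x + dt * (-x + J @ jnp.tanh(x))        # leaky rate dynamics
        return x, w @ jnp.tanh(x)                  # readout at each step
    _, outputs = jax.lax.scan(step, jnp.zeros(N), None, length=T)
    return outputs

def loss(params):
    return jnp.mean((unroll(params) - target) ** 2)

@jax.jit
def train_step(params, gamma, lr=0.1):
    gJ, gw = jax.grad(loss)(params)
    J, w = params
    # gamma rescales the recurrent update: gamma -> 0 trains only the
    # readout (reservoir computing); gamma > 0 reshapes internal dynamics.
    return (J - lr * gamma * gJ, w - lr * gw)

params = (J0, w0)
for _ in range(500):
    params = train_step(params, gamma=1.0)
print("final MSE:", loss(params))
```

Sweeping `gamma` in this toy mimics the family of solutions mentioned in the abstract: many values can fit the output signal while producing very different internal dynamics. A faithful treatment would instead tie this knob to the network-width scaling of the mean-field theory.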