Mathematics Colloquia and Seminars

Two math neuro talks (see abstracts)

Mathematical Biology

Speakers: Anandita De and Avinash, UC Davis
Location: 2112 MSB
Start time: Mon, Apr 25 2022, 3:10PM

Speaker: Anandita De
Title: Common neural manifolds are highly nonlinear
Abstract: Neuroscientists can now record from tens of thousands of neurons in different brain regions over time. To make sense of such large amounts of data, the activity is often plotted in neural activity space, where each axis represents a neuron and each point represents the activity of all the neurons at a given time. It is often observed that these points lie on a low-dimensional manifold. The structure of these manifolds gives us insight into the information encoded by the neurons and the computations performed by the network. These manifolds can be linear (close to a Euclidean subspace) or highly irregular and nonlinear. Principal Component Analysis (PCA) is often used to uncover this low-dimensional structure, or as a first step in other nonlinear methods. In this work, we ask whether data arising from common tuning curve models of neurons lie on linear low-dimensional manifolds that can be discovered by PCA. We find that the manifolds for these tuning curves are highly nonlinear. We show that the number of linear dimensions needed to explain 95% of the variance in the data grows exponentially or supra-exponentially with the number of variables actually encoded by the neurons. In this talk, I will review these models and discuss how results from probability theory can be used to find the linear dimension of data arising from these models. This work is joint with Rishidev Chaudhuri.
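
The tuning curve models in question are not spelled out in the abstract, but the kind of analysis it describes can be sketched in a few lines. The Python example below is a minimal, hypothetical illustration (not the speaker's code), assuming Gaussian-bump tuning curves over a single circular variable and a 95% variance threshold for PCA; it estimates how many principal components are needed for data whose intrinsic manifold dimension is one.

```python
import numpy as np

# Illustrative sketch (not the speaker's code): a population of neurons with
# Gaussian-bump tuning curves over one circular variable. The intrinsic
# manifold dimension is 1, but the number of PCA components needed to explain
# 95% of the variance is much larger when tuning is narrow.
rng = np.random.default_rng(0)

n_neurons = 200
n_samples = 2000
sigma = 0.2  # tuning width (radians); narrower tuning -> higher linear dimension

prefs = np.linspace(0, 2 * np.pi, n_neurons, endpoint=False)  # preferred angles
theta = rng.uniform(0, 2 * np.pi, n_samples)                  # encoded variable

# Circular distance between each stimulus and each neuron's preferred angle
d = np.angle(np.exp(1j * (theta[:, None] - prefs[None, :])))
X = np.exp(-d**2 / (2 * sigma**2))        # (samples x neurons) firing rates

# PCA via the eigenvalue spectrum of the covariance matrix
Xc = X - X.mean(axis=0)
eigvals = np.linalg.eigvalsh(Xc.T @ Xc / n_samples)[::-1]
cum_var = np.cumsum(eigvals) / eigvals.sum()
dim_95 = int(np.searchsorted(cum_var, 0.95)) + 1

print(f"Intrinsic dimension: 1; linear (95% PCA) dimension: {dim_95}")
```

With narrow tuning widths, the number of components needed to reach 95% variance is far greater than one, which is the sense in which a linear method can drastically overstate the dimensionality of a nonlinear manifold.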

Speaker: Avinash
Title: Choice-selective sequences in cortical inputs to striatum provide a potential substrate for credit assignment
Abstract: A central question in reinforcement learning in the brain is how actions and outcomes separated in time become associated with each other. Current theories suggest that a key component of the brain's solution to such problems is to estimate the "value" of different choices (formally, the expected total future reward resulting from a choice) using reward prediction errors, and then to preferentially make choices with higher value. However, how this value is computed in the brain remains a mystery. Multiple lines of evidence implicate the nucleus accumbens, a part of the ventral striatum that receives cortical inputs as well as dopaminergic signals associated with reward prediction errors, in learning value. Here we demonstrate, through computational modeling, how recorded choice-selective sequences of neural firing in cortical inputs to the nucleus accumbens, which bridge the delay period between action (choice) and outcome (reward), can create the precisely timed reward prediction errors seen in dopamine neurons. This, in turn, can support the neural implementation of reinforcement learning algorithms for associating actions and outcomes separated in time, both in a circuit model based on dopamine-dependent changes in connection strength between neurons and in one based on dopamine-dependent changes in neural dynamics. We test and experimentally confirm the core predictions of our models by examining the effect of disrupting the cortical inputs on behavioral performance. Together, these results provide specific proposals for how reinforcement learning could be implemented by neural circuitry and suggest a computational role for choice-selective sequences of neural activity.
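
The models discussed in the talk are not specified in the abstract. As a minimal sketch of the general mechanism it describes, the Python example below assumes a standard temporal-difference (TD) learning rule and a hypothetical one-hot "sequence" representation bridging the choice-to-reward delay (neither is taken from the speaker's work); it shows how a reward prediction error can be used to learn value across that delay.

```python
import numpy as np

# Illustrative sketch (not the speaker's model): linear TD(0) learning with a
# choice-selective sequence as the input representation. Each time step of the
# delay activates one "sequence" unit; weights from these units onto a value
# estimate are updated by the TD reward prediction error (RPE), the signal
# dopamine neurons are thought to carry.
T = 10              # delay (time steps) between choice and reward
n_trials = 200
alpha, gamma = 0.1, 0.98

x = np.eye(T)       # x[t] is the one-hot sequence activity at time t
w = np.zeros(T)     # weights from sequence units onto the value estimate
rpe_history = []

for trial in range(n_trials):
    rpe_trial = np.zeros(T)
    for t in range(T):
        v_t = w @ x[t]
        v_next = w @ x[t + 1] if t + 1 < T else 0.0  # no value after the trial ends
        r = 1.0 if t == T - 1 else 0.0               # reward arrives at the end of the delay
        delta = r + gamma * v_next - v_t             # TD reward prediction error
        w += alpha * delta * x[t]                    # dopamine-gated weight update
        rpe_trial[t] = delta
    rpe_history.append(rpe_trial)

# Early in learning the RPE is concentrated at reward time; with training it
# shrinks there and appears at earlier points in the sequence.
print("RPE, trial 1:  ", np.round(rpe_history[0], 2))
print("RPE, trial 20: ", np.round(rpe_history[19], 2))
print("RPE, trial 200:", np.round(rpe_history[-1], 2))
```

Across trials, the prediction error shrinks at the time of reward and emerges at earlier points in the sequence, the backward shift that TD models are commonly compared against dopamine recordings for.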



The talks can also be joined remotely; contact the organizer for the Zoom link.