
WTI Integration Conference 2022

January 11, 2022 (online)

Yale researchers came together for virtual talks by WTI faculty about current and future interdisciplinary research to advance the understanding of human cognition. Under the theme of “Integration,” presentations illuminated and fostered collaborative research, bridging neuroscience data and theories across scales, techniques, species, and fields.


Integration Agenda

12:30 pm | Welcoming remarks

Nick Turk-Browne, Director of the Wu Tsai Institute and Professor of Psychology 

Daniel Colón-Ramos, Associate Director of the Wu Tsai Institute and Dorys McConnell Duberg Professor of Neuroscience and of Cell Biology 

John Lafferty, Associate Director of the Wu Tsai Institute and John C. Malone Professor of Statistics and Data Science 

Kelley Remole, Managing Director of the Wu Tsai Institute

Session 1: Parallel datasets

Talks describing multi-modal datasets, perhaps ripe for computational translation

Session chair: Phil Corlett

Speakers: Evelyn Lake, Damon Clark, Phil Corlett

Evelyn Lake: Assistant Professor, Radiology and Biomedical Imaging


We implement simultaneous whole-brain functional magnetic resonance imaging (fMRI) and ‘mesoscale’ cortex-wide one-photon calcium (Ca2+) imaging in murine models. Recent developments from our group include the longitudinal acquisition of these data from awake animals; models of injury and disease including hydrocephalus, traumatic brain injury, and Alzheimer’s disease; and data from across the murine lifespan (postnatal day 10 to 12 months of age). To close knowledge gaps across scales and species, we need simultaneous multi-modal imaging approaches that leverage the strengths of complementary modalities to deepen our understanding of the pathophysiological changes that manifest in clinically accessible biomedical imaging. In our hands, the blood-oxygen-level-dependent (BOLD) fMRI signal provides a direct link to the clinical setting. The simultaneously acquired mesoscale Ca2+ imaging data offer a high-spatiotemporal-resolution measure of brain activity with cell-type specificity. Acquired together, these modes offer a unique window into changes in spontaneous activity caused by altered neurovascular coupling, or by network and circuit restructuring, that may be predictive of injury, disease progression, prognosis, age, or treatment response. With hundreds of datasets already acquired, we are searching for unique collaborative opportunities to interrogate these data from multiple perspectives.

Damon Clark: Associate Professor, Molecular, Cellular and Developmental Biology


Illusory visual motion percepts can reveal the underlying architecture of motion computation. Here, I will present work connecting two visual illusions across animals and sensory modalities. In the first illusion, a static image can induce percepts of motion in humans. We found that flies are subject to the same illusion and, using the powerful genetic tools in flies, we determined its neural underpinnings. Remarkably, humans possess the same processing steps essential to the illusion in flies, and fly circuit-inspired human psychophysics shows that similar mechanisms may underlie the analogous human illusory motion percept. In the second illusion, negative correlations in luminous intensity over space and time generate counter-intuitive motion percepts in a wide variety of animals. In collaboration with the Emonet lab (Yale MCDB), we showed that fly olfaction is subject to the same illusion when odor motion is presented across the two antennae. In collaboration with the McDougal lab (Yale Psychology), we have shown that auditory rising and falling tone detection in humans is also subject to the same illusion. Stimuli with these negative correlations allowed us to identify new olfactory and auditory illusory percepts and revealed that these distinct olfactory and auditory computations are rooted in correlation detectors. Overall, these studies exemplify how algorithmic descriptions can connect different animals and modalities to the underlying mathematics of sensory perception.
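The correlation detectors mentioned above can be illustrated with the classic Hassenstein-Reichardt correlator; this is a textbook sketch, not the specific circuit model from this work. Inverting the sign of the spatiotemporal correlation flips the detector's output, which is the signature of these illusory reversed-motion percepts:

```python
import numpy as np

def hrc_response(left, right, delay=1):
    """Hassenstein-Reichardt correlator: each arm multiplies a delayed copy
    of one input with the undelayed other input; the two mirror-symmetric
    arms are subtracted, so a positive output signals rightward motion."""
    l_delayed = np.roll(left, delay)
    r_delayed = np.roll(right, delay)
    prod = l_delayed * right - r_delayed * left
    return float(np.mean(prod[delay:]))  # drop samples wrapped around by roll

rng = np.random.default_rng(0)
s = rng.standard_normal(10_000)

rightward = hrc_response(s, np.roll(s, 1))    # right input lags left: true rightward motion
reversed_ = hrc_response(s, -np.roll(s, 1))   # same timing, sign-inverted correlation

# the sign-inverted stimulus drives the detector in the opposite direction,
# even though nothing actually moved the other way
```

The same algorithmic description applies whether the two inputs are neighboring photoreceptors, the two antennae, or two frequency channels, which is the sense in which the abstract connects modalities through a shared computation.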

Phil Corlett: Associate Professor, Psychiatry


Paranoia is the belief that harm is intended by others. It may arise from selective pressures to infer and avoid social threats, particularly in ambiguous or changing circumstances. We propose that uncertainty may be sufficient to elicit learning differences in paranoid individuals, without social threat. We used reversal learning behavior and computational modeling to estimate belief updating across individuals with and without mental illness, online participants, and rats chronically exposed to methamphetamine, an elicitor of paranoia in humans. Paranoia is associated with a stronger prior on volatility, accompanied by elevated sensitivity to perceived changes in the task environment. Methamphetamine exposure in rats recapitulates this impaired uncertainty-driven belief updating and rigid anticipation of a volatile environment. Our work provides evidence of fundamental, domain-general learning differences in paranoid individuals. This paradigm enables further assessment of the interplay between uncertainty and belief updating across individuals and species. In future work, we will identify more granular neural mechanisms and the developmental and species-level constraints of volatility-guided belief change, with implications for how we understand human cognition in individuals, in groups, and through changing world circumstances.
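To illustrate what a "stronger prior on volatility" means computationally, here is a deliberately minimal toy, not the hierarchical model used in this work: a learner whose effective learning rate grows with its assumed environmental volatility. In a reversal task, the high-volatility learner abandons its old belief much faster after the contingency flips:

```python
def update_beliefs(outcomes, volatility, base_lr=0.1):
    """Rescorla-Wagner-style learner whose learning rate is scaled by an
    assumed environmental volatility (illustrative sketch only)."""
    lr = min(1.0, base_lr * (1.0 + volatility))
    belief, trace = 0.5, []
    for o in outcomes:
        belief += lr * (o - belief)   # prediction-error update
        trace.append(belief)
    return trace

# toy reversal-learning task: the rewarded option flips halfway through
outcomes = [1] * 20 + [0] * 20

low_vol = update_beliefs(outcomes, volatility=0.5)   # weak volatility prior
high_vol = update_beliefs(outcomes, volatility=5.0)  # strong volatility prior
# after the reversal, the high-volatility learner's belief collapses toward 0
# within a few trials, while the low-volatility learner lingers near 0.5
```

In the actual study the volatility belief is itself inferred from the data per participant, which is what lets a stronger volatility prior be read out as a signature of paranoia.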

1:45 pm | Session 2: Computational approaches

Talks highlighting integrative algorithms and models used in neuroscience and related fields

Session chair: Smita Krishnaswamy

Speakers: Michael Nitabach, Steve Zucker, John Murray, David van Dijk, Priya Panda, Abhishek Bhattacharjee

Michael Nitabach: Professor, Cellular and Molecular Physiology


Insects adapt their response to stimuli, such as odors, according to their pairing with positive or negative reinforcements, such as sugar or shock. Our recent electrophysiological and imaging findings in Drosophila melanogaster flies uncovered cholinergic and dopaminergic neural circuit mechanisms supporting the acquisition, forgetting, and replacement of memories. Now our computational modeling reveals the dopaminergic synaptic plasticity rules by which these circuits underlie rapid memory acquisition, transfer from short-term to long-term memory, and the exploration/exploitation trade-off. Because of similarities in dopaminergic reinforcement mechanisms in the fly and mammalian brain, this computational model could be applied to, and provide insights into, flexible memory encoding in mammalian models and human learning paradigms.
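As an illustration of the general family of rules such modeling explores (not the actual model from this work), here is a minimal three-factor plasticity sketch in which a Hebbian pre/post coincidence is gated by a dopaminergic reinforcement signal:

```python
def three_factor_update(w, pre, post, dopamine, lr=0.5):
    """Three-factor plasticity rule: the weight change is a Hebbian
    pre*post coincidence term gated by a dopamine signal, so the same
    coincidence can potentiate or depress depending on reinforcement."""
    return w + lr * dopamine * pre * post

# odor (pre) paired with a behavioral response (post) under punishment,
# modeled here as a negative dopamine signal
w = 1.0
for _ in range(5):
    w = three_factor_update(w, pre=1.0, post=1.0, dopamine=-0.2)
# repeated aversive pairing depresses the odor-to-response weight
```

The sign and timing of the dopamine factor is what distinguishes acquisition from forgetting and replacement in rules of this kind, which is the knob the abstract's circuit-constrained model pins down.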

Steve Zucker: Professor, Computer Science


Fundamental in visual neuroscience is organizing neurons into functional circuits based on their responses to an ensemble of stimuli. We characterize this organization with a novel manifold of neurons. It is kind of an inverse to the usual approach – embedding trials in ‘neural coordinates’ – because each point on our manifold is a neuron. Nearby neurons on the manifold respond similarly to similar stimuli. Approximating “coordinates” on this manifold reveals how stimulus/response is distributed across the neuronal population.

Our manifolds are mathematically related to functional networks. If the underlying network consists of separate, or largely decomposable components, the manifold would be disconnected; this approximates the retina. If the circuit consists of many related components, the manifold would be higher-dimensional and much more connected, as is the case in cortex (V1). I leave it to the reader to predict what type of manifold characterizes deep artificial neural networks, the current foundation of AI and the preferred model of the visual system.


This project addresses the three themes. It involves parallel datasets (from mouse retina and cortex; and from CNNs); it is highly computational (manifold learning); and it is a collaboration (with the Field lab at Duke and the Stryker lab at UCSF).
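The construction above can be sketched in a few lines: embed neurons, not trials, so that neurons with similar stimulus responses land nearby. This illustrative version uses classical multidimensional scaling on correlation distances as a stand-in for whatever manifold-learning method the lab actually uses, with synthetic data in place of recordings:

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic responses: rows are neurons, columns are stimuli.
# two "functional types" share a response profile plus private noise.
profile_a, profile_b = rng.standard_normal(50), rng.standard_normal(50)
neurons = np.vstack([profile_a + 0.3 * rng.standard_normal((20, 50)),
                     profile_b + 0.3 * rng.standard_normal((20, 50))])

# distance between two neurons = 1 - correlation of their response vectors
dist = 1.0 - np.corrcoef(neurons)

# classical MDS: each point in the embedding is a neuron
n = dist.shape[0]
j = np.eye(n) - np.ones((n, n)) / n            # centering matrix
b = -0.5 * j @ (dist ** 2) @ j                 # double-centered Gram matrix
vals, vecs = np.linalg.eigh(b)                 # eigenvalues in ascending order
coords = vecs[:, -2:] * np.sqrt(np.maximum(vals[-2:], 0))  # top-2 "coordinates"

# neurons of the same functional type cluster together on the manifold;
# a disconnected embedding would indicate decomposable functional circuits
```

If the two profiles were drawn from a continuum rather than two discrete types, the same procedure would recover a connected, higher-dimensional manifold, matching the retina-versus-V1 contrast described above.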

John Murray: Associate Professor, Psychiatry


Human cognitive behavior requires the ability to learn and flexibly perform a diversity of tasks without detrimental interference, yet the computational and neural bases of multi-task cognition remain unknown. Multi-task learning is also an active area of research in artificial intelligence and machine learning, exploring how models can develop internal representations that support generalization and efficient learning while avoiding catastrophic forgetting. The challenge of understanding multi-task function and learning therefore lies at the intersection of neuroscience, psychology, and artificial intelligence, and raises many questions. What kinds of neural representations support multi-task function? How do multi-task representations shape learning through inductive bias? How are multi-task representations organized in the human brain? In this talk I will present research which addresses these questions using multiple computational and empirical approaches including artificial neural network modeling, human task behavior, and human neuroimaging.

David van Dijk: Assistant Professor, Internal Medicine


Understanding how dynamic brain function underlies cognition remains one of the great unanswered questions in science. The complexity of brain circuits, and the view of the brain as a collection of discrete functional territories (parcellation), have limited our ability to match broad signatures of neural activity to animal behavior. Recent advances in imaging technology allow recording of whole-brain activity with cell-type specificity and high spatial and temporal resolution. However, the lack of advanced tools that can analyze the vast troves of resulting data is a major barrier to deriving biologically relevant signatures. We propose a novel machine learning approach, based on ideas from natural language processing, that models spatiotemporal sequences of mouse brain activity and relates these to fundamental cognitive processes. Systems that facilitate natural language processing have recently taken huge steps forward. At their heart lie models that use so-called ‘self-attention’ mechanisms to learn patterns in sentence contexts. We show that these models can be used to derive meaning from temporal measurements of brain activity instead of text. Our model is able to accurately predict future brain states and can infer spatiotemporal patterns in neocortical imaging data. We believe that this approach has the potential to revolutionize the ability to analyze and model brain function based on previously impenetrable complex, multi-modal spatiotemporal datasets.
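The ‘self-attention’ mechanism referred to above can be sketched minimally; this is an illustrative toy, not the authors' model. Each time point's brain-state vector is re-expressed as a softmax-weighted combination of the whole sequence, exactly as a language model treats words in a sentence:

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Scaled dot-product self-attention over a sequence of state vectors.
    x: (T, d) array, e.g. T time points of a d-dimensional brain state."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[1])          # (T, T) pairwise similarities
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)   # softmax over context positions
    return weights @ v                              # context-weighted summary per time point

rng = np.random.default_rng(0)
T, d = 16, 8   # 16 time points; 8 regions/parcels per state vector (toy sizes)
x = rng.standard_normal((T, d))
wq, wk, wv = (rng.standard_normal((d, d)) for _ in range(3))
out = self_attention(x, wq, wk, wv)   # (16, 8): each state in terms of its context
```

Substituting imaging frames for word embeddings is the core idea; everything else (training the projection matrices to predict the next brain state) is standard sequence modeling.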

Abhishek Bhattacharjee: Associate Professor, Computer Science


In this talk, I will present an overview of multiple ongoing research directions in my group that focus on building next-generation computer architectures, compilers, and runtimes for the brain sciences. Topics will range from the design of implantable brain-computer interfaces to software systems for cognitive modeling, and more.

Priya Panda: Assistant Professor, Electrical Engineering


Neuromorphic computing has emerged as an energy-efficient alternative to deep learning that draws on the event-driven and compute-in-memory principles of the brain to enable efficient and secure machine intelligence. In this presentation, I will shed light on recent algorithmic techniques for using bio-inspired spiking neural networks for vision and related applications, which bring large benefits in terms of latency, accuracy, interpretability, and robustness. I will also briefly talk about in-memory computing-based hardware techniques that can be used to improve the adversarial security of deep neural networks. In particular, I will delve into memristive crossbars and how we can use their inherent non-idealities (such as interconnect parasitics and device variations) and their current signatures to improve the adversarial robustness and adversary-detection capability of deep neural networks mapped onto them.
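For readers unfamiliar with spiking neural networks, the leaky integrate-and-fire neuron is their basic unit; this deliberately minimal sketch shows the event-driven principle, converting continuous input into sparse binary spikes:

```python
def lif_spikes(inputs, tau=0.9, threshold=1.0):
    """Leaky integrate-and-fire neuron: the membrane potential leaks by a
    factor `tau` each step, integrates the input, and emits a binary spike
    (then resets) whenever it crosses `threshold`."""
    v, spikes = 0.0, []
    for i in inputs:
        v = tau * v + i
        if v >= threshold:
            spikes.append(1)
            v = 0.0          # reset after the spike
        else:
            spikes.append(0)
    return spikes

# a constant weak input yields sparse, regular spikes: most time steps carry
# no event, which is where the energy savings of event-driven hardware arise
train = lif_spikes([0.3] * 20)
```

Because downstream computation is triggered only on spikes, activity (and hence energy) scales with the number of events rather than the number of time steps, which is the efficiency argument in the abstract.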

3:35 pm | Session 3: Collaborative opportunities

Talks showcasing an existing study at one level of analysis that could be extended to another level, or studies using one technique that could be performed with another technique.

Session chair: Kristen Brennand

Speakers: Jaime Grutzendler, Amy Arnsten, Steve Chang, George Dragoi, Kristen Brennand

Note: Session 3 will not have a live question and answer period after the individual talks. Instead, some speakers have provided individual Zoom links (included below the abstracts) for interested attendees to join immediately following the conference.

Jaime Grutzendler: Professor, Neurology


Correlated neural activity between different brain regions and between hemispheres is dependent on proper axonal connectivity and is critical for normal cognition. We have evidence that amyloid plaques in Alzheimer’s disease cause profound disruption of axonal structure and function. Intravital two-photon calcium imaging of single axons demonstrates that the presence of a few spheroids can markedly disrupt the propagation of action potentials, causing conduction blocks. The size of spheroids is critical, as these structures act as capacitors/electric current sinks in a size-dependent manner. We provide evidence that therapeutic approaches aimed at reducing spheroid size could significantly improve axonal conduction. We have also developed novel methodologies for in vivo imaging of myelination at single-axon resolution in a label-free manner using multispectral confocal reflectance microscopy. With this technique we have performed lifelong imaging of axons and described developmental and degenerative changes with relevance to aging and Alzheimer’s disease. Our data and methods are ripe for potential collaborations with laboratories interested in: 1) computational modeling of axonal conduction in health and disease; 2) developing methods for assessing axonal function in humans to correlate with our single-axon findings; 3) imaging of myelination at various resolutions with multimodal methodologies that could leverage our optical myelin imaging strategies; and 4) modeling of the role of CNS myelin abnormalities in neurodegeneration-associated conduction deficits.

Amy Arnsten: Professor, Neuroscience


Three Yale labs are exploring how calcium signaling orchestrates brain state in humans, macaques, and rodents using computational and experimental approaches. The Murray lab has shown hierarchical gradients across cortex in humans, macaques, and mice, with increased time scales needed for increasing sensory and reward integration, including increasing gradients of two calcium-related proteins, NMDAR-GluN2B and calbindin, that the Arnsten lab finds essential to macaque dorsolateral prefrontal cortex (dlPFC) working memory physiology. The current focus is on the Cav1.2 L-type calcium channel (CACNA1C), where gain-of-function mutations increase the risk of cognitive impairment and mental disorders. Cav1.2 levels have an inverted-U influence on dlPFC neuronal firing: they are essential for persistent firing, but high levels reduce firing via K+ channel opening, as occurs during stress. The Addy lab has found that L-type calcium channels reduce dopamine reward signals in rats, suggesting that Cav1.2 may influence changes in brain state, switching from thoughtful, top-down regulation and reward integration to a more habitual, anhedonic state. We hope for a three-way collaboration on how Cav1.2 signaling alters brain state in monkey (Arnsten) and rodent (Addy) brains, with computational modeling (Murray) of CACNA1C expression across the human brain, examining how this channel influences neuronal firing.

Steve Chang: Associate Professor, Psychology


Social gaze interaction powerfully shapes interpersonal communication. However, compared to social perception, very little is known about the neuronal underpinnings of real-life social gaze interaction. Here, we studied a large number of neurons spanning four regions in primate prefrontal-amygdala networks and demonstrated robust single-cell foundations of interactive social gaze in the orbitofrontal, dorsomedial prefrontal, and anterior cingulate cortices, in addition to the amygdala. Many neurons in these areas exhibited temporally diverse social discriminability, with a selectivity bias for looking at a conspecific compared to an object. Notably, a large proportion of neurons in each brain region parametrically tracked the gaze of oneself, of the other, or their relative gaze positions, providing substrates for social gaze monitoring. Furthermore, several neurons displayed selective encoding of mutual eye contact in an agent-specific manner. These findings provide evidence of widespread implementation of interactive social gaze neurons in the primate prefrontal-amygdala networks during social gaze interaction.

George Dragoi: Associate Professor, Psychiatry


We learn and remember multiple new experiences throughout the day. The neural principles enabling continuous rapid learning and formation of distinct representations of numerous sequential experiences without major interference are not understood. To understand this process, here we interrogated ensembles of hippocampal place cells as rats explored 15 novel linear environments interleaved with sleep sessions over continuous 16-hour periods. Remarkably, we found that a population of place cells were selective to environment orientation. This orientation selectivity property biased the network-level discrimination and re/mapping between multiple environments. Generalization of prior experience with different environments consequently improved network predictability of future novel environmental representations via strengthened generative predictive codes. These coding schemes reveal a high-capacity, high-efficiency neuronal framework for rapid representation of numerous sequential experiences with optimal discrimination-generalization balance and reduced interference.

Kristen Brennand: Professor, Psychiatry


Each person’s distinct genetic, epigenetic, and environmental risk profile predisposes them to some phenotypes and confers resilience to others. My laboratory seeks to decode highly complex genetic insights into medically actionable information, better connecting the expanding list of genetic loci associated with human disease to pathophysiology. Our goal is to improve diagnostics, predict clinical trajectories, and identify pre-symptomatic points of therapeutic intervention. Towards this, we employ a functional genomics approach that integrates stem cell models and genome engineering to resolve the impact of patient-specific variants across cell types, genetic backgrounds, and environmental conditions. Individually small risk effects combine to yield much larger impacts in aggregate, but the interactions between the myriad variants remain undetermined. We seek to uncover disease-associated interactions within and between the cell types of the brain, querying the impacts of complex genetic risk within increasingly sophisticated neuronal circuits. Thus, we strive to translate risk “variants to genes”, “genes to pathways”, and “pathways to circuits”, revealing the convergent, additive, and synergistic relationships between risk factors within and between the cell types of the brain.

4:40 pm | Funding announcement and closing remarks

Nick Turk-Browne

Daniel Colón-Ramos

John Lafferty

Kelley Remole