Computational Neuroscience Workshop
Hosted by RuCCS/Psychology/Computer Science
"Computation, Cognition and the Brain"
Wed., May 30 - Fri., June 1, 2018
Room 2400, Academic Bldg, College Ave Campus, 15 Seminary Pl, New Brunswick, NJ

This 2½-day workshop focuses on how the brain codes different aspects of experience. The workshop is jointly sponsored by the Center for Cognitive Science and the Psychology and Computer Science Departments at Rutgers University. It will have five sessions, each featuring three eminent experimental or computational neuroscientists. We will address big questions surrounding coding, representation, and decision-making from both neurobiological and computational perspectives. Discussion and audience participation will be encouraged. Attendees will include faculty and students from over 20 institutions, representing nearly 40 different departments.

Conference Organizers: Drs. Randy Gallistel, Brian McLaughlin, Dimitris Metaxas, Sara Pixley, and David Vicario.

 

To RSVP, please click HERE; you will be taken to the Google RSVP form. Registration is not required, but seating is limited, and registering will help us plan for the event. If you need to cancel your RSVP, please email the conference organizers.




Program

(click here for printable version)

Wednesday, May 30th

08:30 – 09:00  Coffee and Light Breakfast (Provided)
09:00 – 09:15 Opening Remarks
   
Processing Levels: What mechanisms for encoding experiential statistics operate at the circuit versus cellular levels?
Moderator: David Vicario (Rutgers)
09:15 – 09:55 Daniel Margoliash (U of Chicago): "Birdsong from ion channels through nonlinear dynamics to behavior: Some answers, many questions."
09:55 – 10:35 Michael Long (NYU, School of Medicine): "Uncovering circuit principles that enable robust behavioral sequences"
10:35 – 10:50 Coffee Break
10:50 – 11:30 TBD
11:30 – 12:20 Open Discussion
   
12:20 – 02:00 Lunch Break (Lunch NOT Provided. Click Here for Food Options)
   
Decision Making: How is accumulating evidence represented and how are decisions based on those representations?
Moderator: TBD
02:00 – 02:40 Jonathan Pillow (Princeton): "Latent models of stepping and ramping: an update on the debate over single-trial dynamics in LIP"
02:40 – 03:20 Konrad Kording (UPenn): "Rate fluctuations not steps dominate LIP activity during decision-making"
03:20 – 03:30 Coffee Break
03:30 – 04:10 Michael Shadlen (Columbia): "I haven't decided yet"
04:10 – 04:40 Open Discussion
   
04:40 – 05:00 Daily wrap up

 

Thursday, May 31st

08:30 – 09:00  Coffee and Light Breakfast (Provided)
09:00 – 09:15 Opening Remarks
   
Space and Time: How are simple quantities coded in neural firing and stored in memory for subsequent access?
Moderator: John McGann (Rutgers)
09:15 – 09:55 Randy Gallistel (Rutgers): "Finding numbers in the brain"
09:55 – 10:35 Russell Epstein (UPenn): "Anchoring the cognitive map: Neural mechanisms for landmark-based navigation"
10:35 – 10:50 Coffee Break
10:50 – 11:30 David Huber (UMass): "A memory retrieval model of grid cells: The function of grid cells may be something other than spatial position"
11:30 – 12:20 Open Discussion
   
12:20 – 02:00 Lunch Break (Lunch NOT Provided. Click Here for Food Options)
   
Higher Visual Perception: How do visual statistics subserve object encoding in the brain?
Moderator: Dimitris Metaxas (Rutgers)
02:00 – 02:40 Anitha Pasupathy (U of Washington): "Encoding things and stuff: multiplexed form and texture signals in primate V4"
02:40 – 03:20 Jack Gallant (Berkeley): "A deep convolutional energy model of ventral stream areas V1, V2 and V4"
03:20 – 03:30 Coffee Break
03:30 – 04:10 James DiCarlo (MIT): "Reverse engineering human visual intelligence"
04:10 – 05:00 Open Discussion/Daily Wrap Up
   
05:00 – 07:00 Wine Reception (Provided)

 

Friday, June 1st

08:30 – 09:00 Coffee and Light Breakfast (Provided)
09:00 – 09:15 Opening Remarks
   
Encoding Theory: What are computational mechanisms for information processing in the brain?
Moderator: Pernille Hemmer (Rutgers)
09:15 – 09:55  Aurel Lazar (Columbia): "Representation and processing mechanisms in the early olfactory and visual systems of the fruit fly"
09:55 – 10:35 Eero Simoncelli (NYU): "Efficient distribution of resources in neural populations provides an embedding of environmental statistics"
10:35 – 10:50 Coffee Break
10:50 – 11:30 Tatyana Sharpee (Salk): "Cortical representation of natural stimuli"
11:30 – 12:20 Open Discussion
   
12:20 – 12:30 Closing Remarks
12:30 – 02:00 Concluding Lunch Reception (Provided)

 


Parking & Directions

The venue's address is 15 Seminary Pl, New Brunswick, NJ 08901. Visitor parking is free and will be available in Lot 11, Lot 16, Lot 26, Lot 30, and the Parking Deck (click each parking lot name for directions).

Click here for a PDF version of the map below.

 


Local Restaurants

Lunch will not be provided. For a list of New Brunswick restaurants, please visit the New Brunswick City Market website.  

To view a detailed map of the closest restaurants and cafes, please click here.


Abstracts

James DiCarlo (MIT): Reverse engineering human visual intelligence

Neuroscience is hard at work on one of our last great scientific quests — to reverse engineer the human mind and its intelligent behavior. Yet neuroscience is still in its infancy, and forward engineering approaches that aim to emulate human intelligence in artificial systems (AI) are also still in their infancy. The challenge of reverse engineering the human mind can only be solved by tightly coupling the efforts of brain and cognitive scientists (hypothesis generation and data acquisition) with forward engineering efforts using neurally mechanistic computational models (hypothesis instantiation and data prediction). As this approach discovers the correct neural network models, those models will not only encapsulate our understanding of complex brain systems; they will be the basis of next-generation computing and novel brain interfaces (chemical, genetic, optical, electronic, etc.) for therapeutic and augmentation goals (e.g., brain disorders). To make this vision concrete, I will discuss one aspect of perceptual intelligence — object categorization and detection — and describe how work in brain science, cognitive science, and computer science has converged to create deep neural networks that have recently made dramatic leaps: not only are these neural network models reaching human performance for many images and tasks, but we have found that their internal workings largely emulate the previously mysterious internal workings of the primate ventral visual stream. Yet our recent results show that the primate ventral visual stream still outperforms current-generation artificial deep neural networks, and they point to what is importantly missing from current deep neural network models. More broadly, we believe that the community is poised to embrace a powerful new paradigm for systems neuroscience research.

 

Russell Epstein (UPenn): Anchoring the cognitive map: Neural mechanisms for landmark-based navigation

Seventy years of research, stretching back to Tolman's classic 1948 paper, suggests that humans and animals use internal representations of space ("cognitive maps") to guide navigation from place to place. To use a cognitive map, however, a navigator must be able to anchor it to the perceptible world; that is, they must be able to use fixed features of the environment ("landmarks") to determine their current location and heading. What are the neural mechanisms that mediate this ability? In this talk, I will present work from rodents and humans that delineates a neural circuit for landmark-based navigation that encompasses the hippocampus, retrosplenial/medial parietal region, and a scene-selective part of the visual system known as the occipital place area (OPA). First, I will show that hippocampal place fields in mice are controlled by environmental geometry during spatial reorientation, paralleling classic behavioral results. Second, I will show that the retrosplenial/medial parietal region in humans supports a mechanism that allows location and heading codes to be indexed to local geometry. Third, I will show that the OPA supports the perceptual analysis of local geometry, including (unexpectedly) a representation of the pathway structure of the visible environment. Beyond illuminating the neural mechanisms for landmark-based navigation, these results suggest ways in which the brain might code cognitive maps of complex, segmented, and hierarchical real-world environments.

 

Jack Gallant (Berkeley): A deep convolutional energy model of ventral stream areas V1, V2 and V4

The ventral stream areas V1, V2 and V4 are crucial for visual object recognition. Good computational models of V1 neurons already exist, but current models of V2 and V4 neurons are poor. To build better models we recorded from neurons while awake animals viewed clips of large, full-color natural movies. Because neurons could be recorded for several days, we collected responses to hundreds of thousands (up to over 1 million) distinct movie frames, for hundreds of different V1, V2 and V4 neurons. We fit these data using a new deep convolutional energy model. A two-stage version of the model is used to model V1 and V2, and a three-stage version is used for V4. Deep convolutional energy models fit to V1 and V2 neurons approach the noise ceiling of prediction performance. Predictions of V4 neuron responses are somewhat lower, but they are as good as the classical model fit to V1 neurons. Furthermore, the model predicts V4 responses to the various types of synthetic curvature stimuli used in previous studies of V4. Finally, these models can be used to visualize and help interpret the response properties of each neuron. The deep convolutional energy model thus presents a unified framework for modeling and understanding neurons in the early and intermediate ventral stream.

 

Randy Gallistel (Rutgers): Finding numbers in the brain

Numbers are symbols that refer to quantities and enter into computational operations. Numbers in the brain must represent both discrete and continuous quantities, because computations with discrete quantities (numerosities) often involve or yield continuous quantities (e.g., rates and probabilities). Most of the quantities that enter into the brain’s computations reside in memory. The most basic question then is the coding question: How does the brain encode a quantity in a memory medium so that it is accessible to computation? If neuroscientific theorizing has suggested an answer, I have not heard it. To find the answer, we must form a clear idea of what to look for. Numbers in the brain must: 1) be generable; 2) cover an indefinitely large range; 3) be signed; 4) be noiseless; 5) include the identities; 6) represent precision as well as magnitude; 7) be closed under the arithmetic operations; 8) be writable from spike trains to the memory medium; 9) be readable from the memory medium into spike trains; 10) be thermodynamically stable; 11) be compact; 12) satisfy Weber’s law; 13) deal gracefully with both circular and linear addition. Is there any hope that plastic synapses can be made to satisfy these properties? If not, what medium could?

 

David Huber (UMass): A memory retrieval model of grid cells: The function of grid cells may be something other than spatial position

Entorhinal grid cells exhibit a precise hexagonal firing pattern as a function of spatial position during navigation within an enclosure. Path integration models assume that grid cells provide the spatial information necessary for hippocampal place cells. However, during development, place cells exist prior to grid cells, and inactivation of the hippocampus eliminates the grid firing pattern. Developmental models explain the grid firing pattern through hippocampal feedback, but these models do not assign a function to grid cells. We present a memory retrieval model of grid cells in which the feedforward function of grid cells is to represent something other than spatial position (e.g., surface texture). If memories (aka place cells) are equally spaced within the multidimensional cognitive map representing a known enclosure, the triggering of memories by spatial position can produce a grid firing pattern for cells that represent attributes found throughout the enclosure (e.g., the surface is smooth here, and there, and everywhere), revealing a grid firing pattern as an artifact of memory retrieval. We developed a model implementation of this theory in which border cells and head direction cells serve as the spatial determinants of place cells, with place cells gradually consolidating to achieve equal spacing within a multidimensional space. This explains the experience-dependent nature of grid cells and more generally reconciles a memory account of the medial temporal lobe with the ubiquity of grid cells.
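The core idea above can be illustrated with a toy simulation (an invented sketch, not the authors' implementation; all numbers, names, and units below are made up): if stored memories are equally spaced on a hexagonal lattice within an enclosure, a cell coding an attribute present at every memory site (e.g. a smooth surface) fires in a hexagonal grid pattern purely as a byproduct of retrieval.

```python
import numpy as np

def hex_lattice(spacing, extent):
    """Centers of equally spaced memories (place fields) on a hexagonal lattice."""
    pts = []
    n_rows = int(extent / (spacing * np.sqrt(3) / 2)) + 2
    n_cols = int(extent / spacing) + 2
    for r in range(n_rows):
        for c in range(n_cols):
            x = c * spacing + (r % 2) * spacing / 2
            y = r * spacing * np.sqrt(3) / 2
            if x <= extent and y <= extent:
                pts.append((x, y))
    return np.array(pts)

def attribute_cell_rate(pos, centers, width):
    """Firing of a cell whose attribute (e.g. surface texture) is retrieved
    whenever the animal is near any stored memory location: a sum of
    Gaussian bumps, one per memory."""
    d2 = ((pos[None, :] - centers) ** 2).sum(axis=1)
    return np.exp(-d2 / (2.0 * width ** 2)).sum()

# Memories tile a 100 x 100 enclosure with 30-unit spacing (invented units).
centers = hex_lattice(spacing=30.0, extent=100.0)

# The cell fires strongly at a memory site and weakly midway between sites,
# yielding hexagonally arranged firing fields without any explicit spatial code.
on_peak = attribute_cell_rate(np.array([0.0, 0.0]), centers, width=6.0)
off_peak = attribute_cell_rate(np.array([15.0, 0.0]), centers, width=6.0)
```

Here the grid pattern falls out of the geometry of the stored memories, not out of any property of the attribute cell itself, which is the point of the memory retrieval account.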

 

Konrad Kording (UPenn): Rate fluctuations not steps dominate LIP activity during decision-making

The idea that the lateral intraparietal cortex (LIP) integrates information for and against a decision is one of the most popular models in neuroscience. However, a recent statistical analysis has suggested that LIP does not integrate information, but that individual neurons' activities jump. The result was based on a model comparison, which is often hard to interpret. Two worries can render such comparisons problematic: (1) important aspects of variance are contained in neither model; (2) the analysis is complicated, making it hard to verify. We thus followed up with a simple approach to model comparison: cross-validation. We find evidence that baseline fluctuations, which are properly modeled by neither the original paper's drift-diffusion model nor simple ramp or step models, describe much of the variance. Moreover, we find that our straightforward analysis strategy prefers ramping models, both with and without trial-by-trial baseline fluctuations. We find that the specific choice of model selection method has a huge influence on the results.
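The cross-validation strategy described above can be sketched in a few lines (a hypothetical toy version with invented numbers, not the authors' analysis or data): simulate Poisson spike counts from a ramping rate with trial-to-trial baseline fluctuations, fit simple ramp and step rate profiles on training trials, and score both on held-out trials.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate Poisson spike counts whose true rate ramps linearly within each
# trial, with trial-to-trial baseline fluctuations (all numbers invented).
n_trials, n_bins = 200, 50
t = np.linspace(0.0, 1.0, n_bins)
baselines = np.clip(rng.normal(5.0, 1.0, n_trials), 0.5, None)  # Hz
rates = baselines[:, None] + 10.0 * t[None, :]                  # within-trial ramp
counts = rng.poisson(rates * 0.02)                              # 20 ms bins

train, heldout = counts[:100], counts[100:]

def poisson_ll(rate, data):
    """Mean Poisson log-likelihood per bin, up to a constant shared by models."""
    rate = np.clip(rate, 1e-9, None)
    return np.mean(data * np.log(rate) - rate)

# Ramp model: expected count = a + b * t, fit to the training-trial mean.
mean_count = train.mean(axis=0)
A = np.c_[np.ones(n_bins), t]
a, b = np.linalg.lstsq(A, mean_count, rcond=None)[0]
ramp_rate = A @ np.array([a, b])

# Step model: expected count jumps once; grid-search the step time.
best_ll = -np.inf
for k in range(1, n_bins - 1):
    prof = np.where(t < t[k], mean_count[:k].mean(), mean_count[k:].mean())
    ll = poisson_ll(prof, train)
    if ll > best_ll:
        best_ll, step_rate = ll, prof

# Cross-validation: score both fitted profiles on held-out trials. On data
# generated from a ramp, the ramp profile typically scores higher.
cv_ramp = poisson_ll(ramp_rate, heldout)
cv_step = poisson_ll(step_rate, heldout)
```

Because both models are scored on the same held-out trials, the comparison does not depend on penalized complexity terms, which is the appeal of cross-validation over harder-to-interpret model comparison statistics.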

 

Michael Long (NYU School of Medicine): Uncovering circuit principles that enable robust behavioral sequences

For us to interact with the outside world, our brains must plan and dictate our actions and behaviors. In many cases, we learn to reproducibly execute a well-defined series of muscle movements to perform impressive feats, such as hitting a golf ball or playing the violin. How does the brain step through a reliable sequence of premotor commands for behavior? To address this issue, we study the cellular and circuit mechanisms that enable the production of the zebra finch song, a highly stable behavior executed with a high degree of precision. We use a range of behaviorally relevant variables to test two categorically distinct models of the population dynamics underlying this behavior, namely a synfire chain (synchronous presynaptic neurons) and a polychronous architecture (coordinated delays enable activation of postsynaptic neurons). From this work, we can begin to understand the large-scale circuit motifs that underlie sequence generation across a variety of brain regions.

 

Aurel Lazar (Columbia): Representation and Processing Mechanisms in the Early Olfactory and Visual Systems of the Fruit Fly

Abstract to come.

 

Daniel Margoliash (U of Chicago): Birdsong from ion channels through nonlinear dynamics to behavior: Some answers, many questions

Birdsong production provides a rich substrate for exploring questions of brain and behavior. We have recently described how the distribution of magnitudes of ion currents in the basal-ganglia-projecting neurons of the song system of a given individual is related to features of that individual bird's own song. This arises through sensorimotor-learning-related active processes, during development and in adulthood. Challenging adult birds with delayed auditory feedback rapidly induces abnormalities in song initiation behavior in concert with changes in the ion current magnitudes. Modeling from the perspective of nonlinear dynamics provides an opportunity to extend such results and place them in a systems context. Work to date suggests the importance of activity throughout the song motor control system, including ascending input from the brainstem, in song initiation and potentially throughout episodic singing behavior. Several sets of results indicate tight coordination between forebrain and peripheral syringeal activity, and we have identified neurons in the forebrain that encode features of syringeal vocal gestural movements. Interpretation of the latter result is one of several points of active discussion between labs comparing "gestural" and "clock" models of vocal motor control (see Mike Long's talk). The totality of these studies introduces new concepts in neuronal representations of learning and memory, and in the regulation of ongoing behavior, that can help to address many of the conceptual issues this workshop will explore. I will attempt to highlight a number of these during my talk.

 

Jonathan Pillow (Princeton): Latent models of stepping and ramping: An update on the debate over single-trial dynamics in LIP

Trial-averaged firing rates in the macaque lateral intraparietal (LIP) cortex exhibit gradual "ramping" that mirrors the time-course of evidence accumulation during sensory decision-making. However, ramping that appears in trial-averaged responses does not necessarily indicate that a neuron's spike rate ramps on single trials; a ramping average could also arise from instantaneous steps that occur at different times on each trial. In recent work we have sought to address this problem by developing explicit latent variable models of stepping and ramping dynamics with spike train observations, and using them to perform statistical model comparison.  Specifically, we analyzed LIP spike responses using spike train models with: (1) ramping "accumulation-to-bound" dynamics; and (2) discrete "stepping" dynamics, in which the spike rate jumps instantaneously to one of two rates at a random time on each trial. In this talk, I will present several extensions to the original models, which include incorporating spike history and a lower bound on firing rate, and using alternate methods for model comparison. More generally, I will discuss the power of latent variable modeling approaches, and insights into underlying single-trial firing rates that latent variable models can provide.
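The identifiability problem motivating this work is easy to reproduce in simulation (a toy illustration with invented numbers, not the paper's latent variable models): averaging over trials whose rates step at random times produces a smooth ramp, so trial averages alone cannot distinguish the two hypotheses.

```python
import numpy as np

rng = np.random.default_rng(1)

# Single-trial rates that step instantaneously from a low rate to a high
# rate at a random time on each trial (all numbers invented).
n_trials, n_bins = 500, 100
t = np.arange(n_bins)
step_times = rng.integers(10, 90, n_trials)                     # step bin per trial
rates = np.where(t[None, :] >= step_times[:, None], 40.0, 5.0)  # Hz

# The trial average rises gradually from 5 Hz to 40 Hz even though every
# single trial is an instantaneous step: averaging traces out the
# distribution of step times, mimicking a ramp.
avg = rates.mean(axis=0)
```

Distinguishing the two regimes therefore requires single-trial inference, which is what the explicit latent variable models of stepping and ramping described above provide.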

 

Anitha Pasupathy (U of Washington): Encoding things and stuff: Multiplexed form and texture signals in primate V4

I am interested in understanding how midlevel processing stages of the primate ventral visual pathway encode visual stimuli and how these representations might underlie our ability to segment visual scenes and recognize objects. Our primary focus is area V4. In my talk, I will present results from two recent experiments that demonstrate that many V4 neurons jointly encode both the shape and surface texture of visual stimuli. I will describe our efforts to develop image-computable models to explain how these properties might arise and discuss why this coding strategy may be advantageous for segmentation in natural scenes.

 

Michael Shadlen (Columbia): I have not decided yet

Abstract to come.

 

Tatyana Sharpee (Salk): Cortical representation of natural stimuli

In this talk I will describe our recent findings of several organizing principles for how natural visual and auditory stimuli are represented across stages of cortical processing. For visual processing, I will describe how signals in the secondary cortical visual area build on the outputs provided by the first cortical visual area, and how they relate to representations found in subsequent visual areas, such as area V4. I will also discuss differences in how the auditory and visual systems achieve invariance. We find that auditory neurons gain invariance primarily along suppressive dimensions, whereas visual neurons gain invariance by integrating positive responses.

 

Eero Simoncelli (NYU): Efficient distribution of resources in neural populations provides an embedding of environmental statistics

The mammalian brain is a metabolically expensive device, and evolutionary pressures have presumably driven it to make productive use of its resources. For early stages of sensory processing, this concept can be expressed more formally as an optimality principle: the brain maximizes the information that is encoded about relevant sensory variables, given available resources. I'll describe a specific instantiation of this hypothesis that predicts a direct relationship between the distribution of sensory attributes encountered in the environment and the selectivity and response levels of neurons within a population that encodes those attributes. This allocation of neural resources, in turn, imposes direct limitations on the ability of the organism to discriminate different values of the encoded attribute. I'll show that these physiological and perceptual predictions are borne out for a variety of visual and auditory attributes. Finally, I'll show that this encoding of sensory information provides a natural substrate for subsequent computation (in particular, Bayesian estimation), which can make use of the knowledge of environmental (prior) distributions that is embedded in the population structure.
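One way to picture the predicted allocation (a schematic sketch under an assumed Gaussian prior, not the talk's actual model): placing neurons' preferred stimuli at quantiles of the environmental distribution tiles probable stimuli densely, so the spacing between neighboring preferences (a proxy for discrimination threshold) grows where stimuli are rare.

```python
import numpy as np

rng = np.random.default_rng(2)

# Environmental (prior) distribution over some stimulus attribute s. A
# Gaussian stands in here for measured natural statistics (an assumption).
samples = rng.normal(0.0, 1.0, 100_000)

# Efficient allocation: put each neuron's preferred stimulus at a quantile
# of the prior, i.e. warp a uniform grid through the inverse CDF, so that
# probable stimuli are tiled densely and rare ones sparsely.
n_neurons = 50
quantiles = (np.arange(n_neurons) + 0.5) / n_neurons
preferred = np.quantile(samples, quantiles)

# Spacing between neighboring preferred stimuli scales like
# 1 / (n_neurons * prior density): the implied discrimination threshold
# grows where stimuli are rare.
spacing = np.diff(preferred)
central = spacing[n_neurons // 2]   # near s = 0, where the prior is high
tail = spacing[1]                   # in the low-probability tail
```

The same warped population also carries the prior implicitly, which is the sense in which environmental statistics are "embedded" in the population structure and available to downstream Bayesian computation.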

 


Flier

Click here for PDF Version

 

Contact RuCCS

Psychology Building Addition
152 Frelinghuysen Road
Piscataway, NJ 08854-8020


Phone:

  • 848-445-1625
  • 848-445-6660
  • 848-445-0635


Fax:

  • 732-445-6715