Joel T. Kaardal, Ph.D.
Neural computation and machine intelligence

Publications

My research lies at the intersection of computational neuroscience, machine learning, and physics. In computational neuroscience, the nervous system is studied from the perspective of its role as an information processing system; more intuitively, the field focuses on the "programming" of neural structures such as individual neurons, neural circuits and networks, and so on. To learn about these computations, it is often necessary to construct parametric models that can replicate the behavior of neural programs using data recorded from the nervous system. This modeling bears substantial similarity to the models and procedures being developed in current machine learning and, more generally, artificial intelligence research. The following publications present novel methods that may be used to reverse-engineer the computations of individual neurons and may be extended to the study of other analogous information processing systems.
Decoding the Computations of Sensory Neurons. [full text]

Brief summary: The nervous system is composed of vast networks of neurons that encode sensory information about the outside world (what we see, hear, smell, etc.). From the view of computational neuroscience, the sensory systems are information processing systems whose fundamental unit of computation is the neuron. Computationally, the neuron is a rather simple node that either produces an "action potential" (or "spike") or remains silent given its electrochemical inputs. While the responses of a single neuron have limited capability to represent information about the outside world, the formation of neural networks by connecting outputs to inputs between neurons leads to the sophisticated computations that make it possible for us to identify objects and events and respond to them (or not). This volume is my doctoral dissertation in physics from UC San Diego and covers much the same material as the low-rank method and functional basis publications below, but with occasional enhancements, greater detail, and some new material. In particular, Chapter 2, which focuses on the functional basis method, features improved analyses motivated by what I learned over my tenure as a graduate student, and Chapter 4 includes more details about the Bayesian optimization approach to finding reasonable settings for the regularization parameters in the low-rank method.

Citation: Joel T. Kaardal. Decoding the computations of sensory neurons. PhD diss., UC San Diego, 2017.

A Low-Rank Method for Characterizing High-Level Neural Computations. [full text]

Brief summary: Until the publication of this paper, recovery of the receptive fields of auditory neurons had been largely limited to descriptions with a single component or dimension. In this paper, the goal was not only to find multicomponent receptive fields in the auditory system, but to do so for high-level auditory neurons recorded from the regions field L and the caudal mesopallium (CM) of the zebra finch. Reconstructing the receptive fields of high-level sensory neurons is exceptionally challenging since these neurons require structured stimuli to elicit a response. The finite length of experiments makes drawing stimuli from uncorrelated noise distributions impractical, so stimuli are instead drawn from correlated statistical distributions. Unfortunately, these correlations can lead to poor sampling of the stimulus space and cause the second-order dimensionality reduction techniques used to recover multicomponent receptive fields to overfit.

To address these challenges, a new dimensionality reduction technique, the low-rank maximum noise entropy (MNE) method, was developed and shown to be capable of recovering multicomponent receptive fields of high-level auditory neurons. The recovered multicomponent receptive fields were statistically significant improvements over single-component methods, as measured by predictive power on cross-validation sets. This was particularly notable because other standard dimensionality reduction techniques failed to improve over the single-component model, casting doubt on the significance of the multicomponent receptive fields those approaches recovered. Finally, in contrast to neurons recorded from low-level visual regions, the population of auditory neurons from field L and CM was found to be overwhelmingly better described by logical AND operations than by logical OR operations, suggesting a potential difference in how information is processed in auditory versus visual systems (or in high-level versus low-level sensory processes).
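The core idea behind the low-rank constraint can be sketched in a few lines. The following is a minimal illustration under my own assumptions (the function name, parameter scales, and randomly drawn parameters are invented for the example; the published method fits the parameters by maximizing noise entropy under data constraints with regularization, not by drawing them at random): the spike probability is a logistic function of a linear term plus a quadratic term whose matrix is factorized as a product of two thin matrices, so the number of quadratic parameters grows linearly rather than quadratically with stimulus dimension.

```python
import numpy as np

def mne_spike_probability(s, a, h, U, V):
    """Second-order MNE-style response model with a low-rank quadratic term:

        P(spike | s) = 1 / (1 + exp(a + h.s + s' J s)),  J = U @ V.T,

    where U and V are (d x r) with r << d, so the quadratic term has
    2*r*d parameters instead of d*d.
    """
    quadratic = (s @ U) @ (V.T @ s)  # s' (U V') s without forming J explicitly
    return 1.0 / (1.0 + np.exp(a + h @ s + quadratic))

rng = np.random.default_rng(0)
d, r = 20, 2                          # stimulus dimension, rank of J
s = rng.standard_normal(d)            # one stimulus frame
a = 0.1                               # bias (threshold) term
h = 0.1 * rng.standard_normal(d)      # linear receptive field
U = 0.1 * rng.standard_normal((d, r)) # low-rank factors of J
V = 0.1 * rng.standard_normal((d, r))
p = mne_spike_probability(s, a, h, U, V)
print(p)  # a probability strictly between 0 and 1
```

The eigenvectors of the fitted quadratic term are what furnish the multiple receptive field components; the factorization above is what keeps the fit tractable when stimuli are correlated and data are limited.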

Citation: Joel T. Kaardal, Frédéric E. Theunissen, and Tatyana O. Sharpee. A low-rank method for characterizing high-level neural computations. Frontiers in Computational Neuroscience, 11:68, 2017.

Identifying Functional Bases for Multidimensional Neural Computations. [full text]

Brief summary: While dimensionality reduction techniques provide insight into the subspace of stimulus space relevant to a neural response, the recovery of receptive fields alone yields an incomplete description of the underlying functional neural circuitry. Here, functional neural circuitry is defined by the input-output functions that describe neural computations rather than by physiological connectivity. A functional basis was proposed that gives an interpretable view of the neural computation by finding linear combinations of the receptive field components that capture the functional inputs of the computation. As an intuitive example, suppose a visual neuron elicits a spike when presented with either the color blue or purple in a red-green-blue stimulus space. A dimensionality reduction technique may then recover a receptive field with the two orthogonal components red and blue (rejecting green, since it is irrelevant to the neural response). A functional basis, on the other hand, identifies that the neuron is responsive specifically to blue and purple inputs.
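The blue/purple intuition can be made concrete with a toy sketch (my own illustration, not code from the paper; the basis vectors and thresholds are invented for the example): each functional basis component acts as a thresholded feature detector, and the neuron's response is a Boolean combination of those binary functional inputs.

```python
import numpy as np

def boolean_response(s, basis, thresholds, mode="OR"):
    """Toy functional-basis neuron: project the stimulus onto each
    functional basis vector, threshold each projection, and combine
    the resulting binary functional inputs with a Boolean rule."""
    active = (basis @ s) > thresholds  # one bit per functional input
    return bool(active.any() if mode == "OR" else active.all())

# Hypothetical RGB example: a neuron that fires for blue OR purple.
blue = np.array([0.0, 0.0, 1.0])
purple = np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)
basis = np.vstack([blue, purple])
thresholds = np.array([0.9, 0.9])

print(boolean_response(blue, basis, thresholds))                      # True
print(boolean_response(purple, basis, thresholds))                    # True
print(boolean_response(np.array([0.0, 1.0, 0.0]), basis, thresholds)) # False (green)
```

Note that the blue and purple detectors are not orthogonal; this is precisely what distinguishes a functional basis from the orthogonal components returned by standard dimensionality reduction.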

This paper focuses on two particular hypotheses for the input-output function: the Boolean operations logical OR and logical AND. After several demonstrations of recovering the receptive fields of synthetic neurons using Boolean operations, the operations are used to identify the functional inputs to individual retinal ganglion cells (RGCs). The results showed that logical OR was the overwhelmingly favored description of the neural computations performed by RGCs. Furthermore, it was found that, despite the modulation apparent in the receptive field components, the functional basis components were Gaussian, centered at distinct but overlapping positions in stimulus space. These Gaussian components bore a resemblance to the receptive fields of bipolar cells, which are known to be simple Gaussians.

Citation: Joel Kaardal, Jeffrey D. Fitzgerald, Michael J. Berry, and Tatyana O. Sharpee. Identifying functional bases for multidimensional neural computations. Neural Computation, 25(7):1870-1890, 2013.