Joel T. Kaardal, Ph.D.
Neural computation and machine intelligence
Below is a list of projects I have worked on with links to brief tutorials and the source code. Feel free to contact me with any questions, comments, or suggestions!
• General Solvers
Interior-point method (python):
an interior-point method for nonlinear programming.
A generic solver that may be used to find local optima of linear and nonlinear optimization problems with either convex or nonconvex objective functions. The objective function may be subject to equality or inequality constraints. The solver can flexibly change backends to find solutions using either CPUs or GPUs and includes automatic differentiation functionality. A limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm is implemented for large-scale problems.
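To illustrate the core idea, here is a minimal log-barrier sketch of how an interior-point method handles an inequality constraint (illustrative only; the package's actual API and algorithms are more elaborate). The constrained problem min f(x) s.t. c(x) >= 0 is replaced by a sequence of unconstrained problems f(x) - mu*log(c(x)) with shrinking mu:

```python
import math

# Minimal log-barrier sketch (hypothetical helper names, not the package's API).
# Minimize f(x) subject to c(x) >= 0 by minimizing the barrier objective
# f(x) - mu * log(c(x)) for a decreasing sequence of barrier weights mu.

def barrier_obj(f, c, x, mu):
    return f(x) - mu * math.log(c(x))

def barrier_solve(f, f_grad, c, c_grad, x, mu=1.0, shrink=0.5, outer=25, inner=100):
    for _ in range(outer):
        for _ in range(inner):
            # gradient of the barrier objective
            g = f_grad(x) - mu * c_grad(x) / c(x)
            step = 1.0
            # backtrack: stay strictly feasible and decrease the objective
            while step > 1e-16 and (c(x - step * g) <= 0 or
                    barrier_obj(f, c, x - step * g, mu) > barrier_obj(f, c, x, mu)):
                step *= 0.5
            if step > 1e-16:
                x -= step * g
        mu *= shrink  # tighten the barrier toward the constrained optimum
    return x

# example: minimize x^2 subject to x >= 1 (optimum at x = 1)
x_star = barrier_solve(lambda x: x * x, lambda x: 2 * x,
                       lambda x: x - 1.0, lambda x: 1.0, x=2.0)
```

As mu shrinks, the barrier's repulsion from the constraint boundary weakens and the iterates approach the true constrained optimum from the strict interior.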
Augmented Lagrangian (MATLAB/Octave):
an augmented Lagrangian method for solving nonlinear programs.
Serves a similar purpose to the interior-point method but searches for local optima of nonlinear programs using a first-order, projected gradient descent algorithm. The use of first-order optimization algorithms can give the augmented Lagrangian an advantage over interior-point methods on large-scale problems, but it may be much more sensitive to the user-defined hyperparameter settings.
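A bare-bones sketch of the augmented Lagrangian outer loop, using a plain gradient descent inner solver for brevity (the package itself uses projected gradient descent with more careful safeguards). It solves min x^2 + y^2 subject to x + y = 1, whose optimum is (0.5, 0.5) with multiplier -1:

```python
# Augmented Lagrangian sketch for an equality-constrained problem
# (hypothetical helper, not the package's API):
#   L_A(x, y) = f(x, y) + lam * h(x, y) + (rho / 2) * h(x, y)^2
# where h(x, y) = x + y - 1 is the constraint residual.

def aug_lagrangian(x, y, lam=0.0, rho=10.0, outer=20, inner=500, step=1e-2):
    for _ in range(outer):
        # inner loop: approximately minimize L_A at the current multiplier
        for _ in range(inner):
            h = x + y - 1.0
            gx = 2.0 * x + lam + rho * h
            gy = 2.0 * y + lam + rho * h
            x, y = x - step * gx, y - step * gy
        lam += rho * (x + y - 1.0)  # first-order multiplier update
    return x, y, lam

x_opt, y_opt, lam_opt = aug_lagrangian(0.0, 0.0)
```

The multiplier update is what distinguishes the method from a pure quadratic penalty: it lets the constraint be satisfied without driving rho to infinity.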
Convolutional autoencoder (MATLAB/Octave):
two-dimensional convolutional autoencoder (neural network) for data compression.
Implements the least-squares objective function, backpropagation gradient, and network model for a convolutional autoencoder with an arbitrary number of hidden layers and hidden units. Solutions can be found using gradient-based algorithms such as stochastic gradient descent, conjugate gradient descent, and L-BFGS.
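Two of the building blocks can be sketched in a few lines (hypothetical helpers, not the package's own functions): a 2D "valid" convolution of the kind the encoder applies, and the least-squares reconstruction objective the network minimizes.

```python
# 2D "valid" convolution: slide the kernel over the image without padding.
def conv2d_valid(img, kernel):
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(img), len(img[0])
    return [[sum(img[i + u][j + v] * kernel[u][v]
                 for u in range(kh) for v in range(kw))
             for j in range(w - kw + 1)]
            for i in range(h - kh + 1)]

# Least-squares reconstruction objective: 0.5 * sum of squared errors
# between the input x and the autoencoder's reconstruction x_hat.
def least_squares(x, x_hat):
    return 0.5 * sum((a - b) ** 2
                     for ra, rb in zip(x, x_hat)
                     for a, b in zip(ra, rb))
```

In the full model the convolution outputs pass through nonlinearities and further layers before a decoder maps them back to the input dimensions, and backpropagation supplies the gradient of this objective with respect to the kernels.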
Reinforcement learning of tic-tac-toe:
tabular Q-learning and policy gradient function approximation implementations to teach a computer to play tic-tac-toe.
This project demonstrates two approaches to reinforcement learning of the game tic-tac-toe: tabular Q-learning, and policy gradient techniques (REINFORCE and actor-critic) implemented in python with tensorflow via an arbitrarily deep dense feedforward neural network with residual connections.
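The tabular Q-learning update at the heart of the first approach can be shown on a toy problem (a 5-state chain rather than tic-tac-toe, so the sketch stays self-contained): each visited state-action pair is nudged toward the reward plus the discounted value of the best next action.

```python
import random
from collections import defaultdict

# Tabular Q-learning on a 5-state chain: actions 0 (left) and 1 (right),
# reward 1 for reaching the right end. Illustrative sketch only; the
# project's tic-tac-toe code tracks board states instead of chain states.

def q_learning(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.2):
    random.seed(0)
    Q = defaultdict(lambda: [0.0, 0.0])
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            if random.random() < eps:
                a = random.randrange(2)  # explore
            else:
                best = max(Q[s])         # exploit, breaking ties at random
                a = random.choice([i for i in (0, 1) if Q[s][i] == best])
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # the tabular Q-learning update
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learning()
```

After training, the greedy policy (argmax over actions) moves right from every state, i.e. the learned table encodes the optimal policy for the chain.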
Maximum noise entropy (C):
first and second-order maximum noise entropy modeling of receptive fields.
Computes the weights of a linear or quadratic classifier to reconstruct multicomponent receptive fields of sensory neurons, with regularization options including LASSO, ridge, elastic net, and early stopping. The theoretical background of this model may be found in
Fitzgerald, Rowekamp, Sincich, & Sharpee, 2011
. The code uses BLAS and OpenMP.
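The second-order MNE response model takes a logistic form in the stimulus: P(spike | s) = 1 / (1 + exp(a + h·s + sᵀJs)), with linear weights h and quadratic weights J. A minimal evaluation of that form (the package fits a, h, and J to data; this snippet only evaluates the model):

```python
import math

# Evaluate the second-order MNE spike probability for a stimulus s,
# given fitted parameters a (scalar), h (vector), and J (matrix).
def mne_prob(s, a, h, J):
    lin = sum(hi * si for hi, si in zip(h, s))
    quad = sum(J[i][j] * s[i] * s[j]
               for i in range(len(s)) for j in range(len(s)))
    return 1.0 / (1.0 + math.exp(a + lin + quad))
```

The eigenvectors of the fitted J are what give the multicomponent receptive field: each significant eigenvalue contributes one excitatory or suppressive stimulus dimension.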
Low-rank maximum noise entropy (python):
low-rank second-order maximum noise entropy modeling of receptive fields.
Reconstructs the linear and quadratic feature spaces of a classifier when the feature space is poorly sampled. The package reduces the number of weights that need to be optimized, relative to full-rank second-order maximum noise entropy, through explicit bilinear factorization of the quadratic weights and nuclear-norm regularization, and is related to the model described in
Kaardal, Theunissen, & Sharpee, 2017
. The software is written modularly for easy customization.
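The parameter savings from the bilinear factorization are easy to see: with J ≈ UVᵀ for U, V of shape (d, r) and r ≪ d, only 2dr quadratic weights are optimized instead of d², and the quadratic term sᵀJs can be evaluated as the dot product of the two projections Uᵀs and Vᵀs (hypothetical helper, not the package's API):

```python
# Evaluate s^T (U V^T) s without ever forming the d-by-d matrix J,
# using the identity s^T U V^T s = (U^T s) . (V^T s).
def factored_quadratic(s, U, V):
    d, r = len(U), len(U[0])
    us = [sum(U[i][k] * s[i] for i in range(d)) for k in range(r)]  # U^T s
    vs = [sum(V[i][k] * s[i] for i in range(d)) for k in range(r)]  # V^T s
    return sum(u * v for u, v in zip(us, vs))
```

Avoiding the explicit d-by-d matrix is what makes the low-rank model tractable when the stimulus dimension is large and the data are scarce.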
Functional bases (MATLAB/Octave):
compute functional inputs of neurons and other classifiers using Boolean operations.
Identifies functional bases that describe the functional inputs of neurons by taking linear combinations of the receptive field components. The software assumes that the input activations are modeled by logistic functions and that the overall neural response computes Boolean operations on these input activations, as described in the publication
Kaardal, Fitzgerald, Berry, & Sharpee
. This code is limited to logical OR and logical AND operations but can be easily extended to other operations.
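The two supported operations can be sketched as soft Boolean combinations of logistic input activations (a schematic of the model form, not the package's MATLAB/Octave code): AND multiplies the activations, while OR is the complement of the product of complements.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# projections are the basis-vector projections of the stimulus;
# each sigmoid(z) is one input activation.
def soft_and(projections):
    p = 1.0
    for z in projections:
        p *= sigmoid(z)      # response requires every input to be active
    return p

def soft_or(projections):
    q = 1.0
    for z in projections:
        q *= 1.0 - sigmoid(z)
    return 1.0 - q           # response requires at least one active input
```

Other Boolean operations can be composed from these two in the same spirit, which is why the code is straightforward to extend.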
Biophysical neural network (C):
simulate the electrical activity of a biophysical neural network.
The electrical activity of each neuron follows the classic Hodgkin-Huxley model, where current is injected from external stimuli and/or other neurons in the network. The coupled differential equations representing the network as a whole are solved numerically using leapfrog integration.
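The leapfrog (kick-drift-kick) scheme is easiest to see on a unit harmonic oscillator; the package applies the same scheme to the coupled Hodgkin-Huxley equations (this sketch is illustrative and uses hypothetical function names, not the C code's API):

```python
# Leapfrog integration of x'' = accel(x): interleave half-step velocity
# "kicks" with full-step position "drifts".
def leapfrog(accel, x, v, dt, steps):
    v += 0.5 * dt * accel(x)      # initial half kick
    for _ in range(steps):
        x += dt * v               # drift
        v += dt * accel(x)        # full kick
    v -= 0.5 * dt * accel(x)      # roll back the extra half kick
    return x, v

# one period of the unit oscillator x'' = -x starting from (x, v) = (1, 0)
x1, v1 = leapfrog(lambda x: -x, 1.0, 0.0, dt=0.001, steps=6283)
```

Leapfrog is second-order accurate and, for conservative systems like this one, conserves energy over long runs far better than forward Euler at the same step size.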
Galactic collisions (python):
simulating the collision of two disk galaxies.
Note: I wrote this for a class I taught and will keep the solution hidden from future students. It can be made available by request.
Simulates the collision of two galaxies, where each galaxy is modeled as a disk (like the Milky Way) composed of "particles" that approximate stellar bodies orbiting a large central mass (a "black hole"). The script automatically computes the Keplerian orbits of the galaxies given a user-defined eccentricity.
CSV/Text navigator (python):
a memory-efficient CSV or delimited text file reader.
CSVNAV provides a class that can be used to read and navigate through large CSV or text files (with line delimiters) while consuming only minimal memory. This is accomplished by keeping track of pointers to the beginnings of lines in the file, adding pointers only as lines are accessed. The class also provides a method to group data by column values, which is likewise performed with low memory use by mapping line pointers. One example use case is machine learning on large CSV files, where the class makes it easy to construct randomly shuffled training batches without holding the full dataset in memory. For more detail, see the README.md through the source link.
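The line-pointer idea can be sketched in a few lines (a hypothetical minimal class, not the CSVNAV API): remember only the byte offsets of line starts, lazily extend the index as lines are requested, and seek back to re-read any line on demand.

```python
import io

class LineIndex:
    """Lazily index line-start offsets so any line can be re-read by seeking."""

    def __init__(self, fileobj):
        self.f = fileobj
        self.offsets = [0]       # offset of each known line start
        self.complete = False    # True once EOF has been reached

    def line(self, n):
        # extend the offset index until line n's start is known
        while len(self.offsets) <= n and not self.complete:
            self.f.seek(self.offsets[-1])
            text = self.f.readline()
            if not text:
                self.complete = True
                break
            self.offsets.append(self.f.tell())
        # jump straight to the requested line and read it
        self.f.seek(self.offsets[n])
        return self.f.readline().rstrip("\n")

nav = LineIndex(io.StringIO("a,1\nb,2\nc,3\n"))
```

Memory use grows with the number of distinct lines touched (one offset each), not with file size, which is the property that makes shuffled batch construction cheap.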
Simple task scheduler (python/GNU screen):
schedule a list of tasks to be run in parallel over a network.
a simple task scheduler that can run through a list of terminal commands that may be run in parallel on a single host or multiple hosts on a network. The user defines the number of slots on each host and the scheduler will wait until a new slot opens before submitting the next task. This has only been tested on UNIX/Linux type systems and requires an SSH server and GNU Screen be installed on the host(s).