Joel T. Kaardal, Ph.D.
Neural computation and machine intelligence
General Solvers

Interior-point method (python): an interior-point method for nonlinear programming. [tutorial] [source]

Description: generic solver that may be used to find local optima of linear and nonlinear optimization problems with either convex or nonconvex objective functions. The objective function may be subject to equality or inequality constraints. The solver can flexibly change backends to find solutions using either CPUs or GPUs and includes automatic differentiation functionality. A limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm is implemented for large-scale problems.
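
For intuition about what the solver is doing under the hood, here is a hypothetical, self-contained Python sketch of the barrier idea behind interior-point methods, applied to a one-dimensional toy problem; it is not the package's interface or implementation.

# Toy problem: minimize f(x) = (x + 1)^2 subject to x >= 0 (optimum at x = 0).
# A log-barrier method replaces the constraint with a penalty -mu*log(x) and
# shrinks the barrier weight mu toward zero; damped Newton steps handle the
# stiffness of the barrier near the constraint boundary.
x, mu = 1.0, 1.0
for _ in range(12):                   # outer loop: shrink the barrier
    for _ in range(50):               # inner loop: damped Newton iterations
        g = 2.0 * (x + 1.0) - mu / x  # gradient of (x + 1)^2 - mu*log(x)
        H = 2.0 + mu / x**2           # curvature (positive for x > 0)
        step = g / H
        while x - step <= 0.0:        # backtrack to stay strictly feasible
            step *= 0.5
        x -= step
    mu *= 0.1
print(x)                              # approaches the constrained optimum, 0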

Augmented Lagrangian (MATLAB/Octave): an augmented Lagrangian method for solving nonlinear programs. [tutorial] [source]

Description: similar in purpose to the interior-point method but searches for local optima of nonlinear programs using a first-order, projected gradient descent algorithm. The use of first-order optimization can give the augmented Lagrangian method an advantage over interior-point methods on large-scale problems, but it may be much more sensitive to user-defined hyperparameter settings.
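
To make the mechanics concrete, the sketch below (hypothetical, and in Python rather than the package's MATLAB/Octave) applies the augmented Lagrangian idea to an equality-constrained toy problem; the inner loop uses plain gradient descent since the toy problem needs no projection.

import numpy as np

# Toy problem: minimize x^2 + y^2 subject to x + y - 1 = 0; solution (0.5, 0.5).
def c(v):                                  # equality constraint value
    return v[0] + v[1] - 1.0

def grad_L(v, lam, rho):
    # gradient of L(v, lam; rho) = f(v) + lam*c(v) + (rho/2)*c(v)^2, f(v) = |v|^2
    return 2.0 * v + (lam + rho * c(v)) * np.ones(2)

v, lam, rho = np.zeros(2), 0.0, 10.0
for _ in range(30):                        # outer loop: multiplier updates
    for _ in range(500):                   # inner loop: first-order descent
        v = v - 1e-2 * grad_L(v, lam, rho)
    lam += rho * c(v)                      # classic multiplier update
print(v)                                   # -> approximately [0.5, 0.5]
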
Modeling

Convolutional autoencoder (MATLAB/Octave): two-dimensional convolutional autoencoder (neural network) for data compression. [tutorial] [source]

Description: least-squares objective function, backpropagation gradient, and network model for a convolutional autoencoder with an arbitrary number of hidden layers and hidden units. Solutions can be found using gradient-based algorithms such as stochastic gradient descent, conjugate gradient descent, and L-BFGS.
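
As a toy illustration of the model structure (in Python/SciPy rather than the package's MATLAB/Octave), the sketch below runs a single-filter convolutional encoder and decoder and evaluates the least-squares reconstruction objective; gradients for the filters would follow by backpropagating through the two convolutions.

import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 16))               # input "image"
w_enc = 0.1 * rng.standard_normal((3, 3))       # encoder filter
w_dec = 0.1 * rng.standard_normal((3, 3))       # decoder filter

h = np.tanh(convolve2d(x, w_enc, mode='same'))  # hidden feature map
x_hat = convolve2d(h, w_dec, mode='same')       # reconstruction of the input
loss = 0.5 * np.sum((x_hat - x) ** 2)           # least-squares objective
print(loss)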

Deep reinforcement learning of tic-tac-toe (python): builds and optimizes a simple residual network to teach a computer to play tic-tac-toe. [tutorial] [source]

Description: constructs a basic feed-forward residual neural network using the Theano package. The script allows an arbitrary number of hidden units and hidden layers to be defined and automatically computes the gradient for backpropagation. The optimization can then be performed on either CPUs or GPUs.
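
A hypothetical Theano fragment along these lines is shown below: a single residual (skip-connection) layer, a stand-in loss, and a compiled training function, with T.grad supplying the backpropagation gradient automatically. The repository's actual network and objective are more elaborate.

import numpy as np
import theano
import theano.tensor as T

n = 9  # e.g., a flattened 3x3 tic-tac-toe board
x = T.matrix('x')
W = theano.shared(0.1 * np.random.randn(n, n).astype(theano.config.floatX))
b = theano.shared(np.zeros(n, dtype=theano.config.floatX))
h = T.maximum(0.0, T.dot(x, W) + b) + x   # residual (skip) connection
loss = T.mean(h ** 2)                     # stand-in objective for illustration
gW, gb = T.grad(loss, [W, b])             # automatic backpropagation gradient
train = theano.function([x], loss,
                        updates=[(W, W - 0.01 * gW), (b, b - 0.01 * gb)])
print(train(np.random.randn(4, n).astype(theano.config.floatX)))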

Maximum noise entropy (C): first- and second-order maximum noise entropy modeling of receptive fields. [tutorial] [source]

Description: computes the weights of a linear or quadratic classifier to reconstruct multicomponent receptive fields of sensory neurons, with regularization options including LASSO, ridge, elastic net, and early stopping. The theoretical background of this model may be found in Fitzgerald, Rowekamp, Sincich, & Sharpee, 2011. The code uses BLAS and OpenMP.
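
For readers unfamiliar with the model class, the second-order maximum noise entropy response takes a logistic form in the stimulus. The Python snippet below evaluates it for random placeholder weights; sign conventions here follow common usage and may differ from the C code.

import numpy as np

# P(spike | s) = 1 / (1 + exp(a + h.s + s^T J s)), with bias a, linear weights
# h, and symmetric quadratic weights J (the second-order MNE form).
rng = np.random.default_rng(0)
d = 20                              # stimulus dimensionality
s = rng.standard_normal(d)          # one stimulus frame
a = 0.0
h = 0.1 * rng.standard_normal(d)
J = 0.01 * rng.standard_normal((d, d))
J = 0.5 * (J + J.T)                 # symmetrize the quadratic weights
p_spike = 1.0 / (1.0 + np.exp(a + h @ s + s @ J @ s))
print(p_spike)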

Low-rank maximum noise entropy (python): low-rank second-order maximum noise entropy modeling of receptive fields. [tutorial] [source]

Description: reconstructs the linear and quadratic feature spaces of a classifier when the feature space is poorly sampled. The mner package reduces the number of weights that need to be optimized, relative to full-rank second-order maximum noise entropy, through bilinear factorization of the quadratic weights and nuclear-norm regularization; it is related to the model described in Kaardal, Theunissen, & Sharpee, 2017. The software is written modularly for easy customization.
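
The parameter saving from the bilinear factorization is easy to see in a toy Python calculation (the names below are illustrative, not the mner API): the d-by-d quadratic weight matrix J is never formed, only its factors.

import numpy as np

d, r = 100, 4                                # dimensionality, assumed rank
rng = np.random.default_rng(0)
U = 0.1 * rng.standard_normal((d, r))        # left factor of J = U V^T
V = 0.1 * rng.standard_normal((d, r))        # right factor
s = rng.standard_normal(d)

quad_full = s @ (U @ V.T) @ s                # forms J explicitly: d*d numbers
quad_factored = (s @ U) @ (V.T @ s)          # same value without forming J
print(np.isclose(quad_full, quad_factored))  # True
print(d * d, 2 * d * r)                      # 10000 vs. 800 weights to optimize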

Functional bases (MATLAB/Octave): compute functional inputs of neurons and other classifiers using Boolean operations. [tutorial] [source]

Description: identifies functional bases that describe the functional inputs of neurons by taking linear combinations of the receptive field components. The software assumes that the input activations are modeled by logistic functions and that the overall neural response computes Boolean operations on these input activations, as described in the publication Kaardal, Fitzgerald, Berry, & Sharpee. This code is limited to logical OR and logical AND operations but can easily be extended to other operations.
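
A minimal Python sketch of this kind of composition is below (random placeholder basis vectors, and a soft OR/AND in the form the description suggests): each functional input is a logistic activation of a basis vector's projection onto the stimulus, and the response combines the activations.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
d, k = 30, 2                       # stimulus dimension, number of inputs
B = rng.standard_normal((k, d))    # functional basis vectors (placeholders)
s = rng.standard_normal(d)
p = sigmoid(B @ s)                 # logistic input activations

p_or = 1.0 - np.prod(1.0 - p)      # spike if ANY functional input is active
p_and = np.prod(p)                 # spike if ALL functional inputs are active
print(p_or, p_and)
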
Simulations

Biophysical neural network (C): simulate the electrical activity of a biophysical neural network. [tutorial] [source]

Description: the electrical activity of each neuron follows the classic Hodgkin-Huxley model, with current injected by external stimuli and/or other neurons in the network. The coupled differential equations representing the network as a whole are solved numerically using leapfrog integration.
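
For flavor, here is a self-contained Python sketch of a single Hodgkin-Huxley neuron under constant injected current, using standard textbook parameters; it integrates with plain forward Euler for brevity, whereas the C code couples many such neurons and uses leapfrog integration.

import numpy as np

C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3     # membrane capacitance, conductances
ENa, EK, EL, I = 50.0, -77.0, -54.4, 10.0  # reversal potentials (mV), input current

def rates(V):  # standard Hodgkin-Huxley opening/closing rate functions
    an = 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
    bn = 0.125 * np.exp(-(V + 65.0) / 80.0)
    am = 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
    bm = 4.0 * np.exp(-(V + 65.0) / 18.0)
    ah = 0.07 * np.exp(-(V + 65.0) / 20.0)
    bh = 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
    return an, bn, am, bm, ah, bh

V, n, m, h, dt = -65.0, 0.32, 0.05, 0.6, 0.01  # resting state, step in ms
for _ in range(int(50.0 / dt)):                # 50 ms of activity
    an, bn, am, bm, ah, bh = rates(V)
    INa = gNa * m**3 * h * (V - ENa)           # sodium current
    IK = gK * n**4 * (V - EK)                  # potassium current
    IL = gL * (V - EL)                         # leak current
    V += dt * (I - INa - IK - IL) / C
    n += dt * (an * (1.0 - n) - bn * n)        # gating variable kinetics
    m += dt * (am * (1.0 - m) - bm * m)
    h += dt * (ah * (1.0 - h) - bh * h)
print(V)  # membrane potential (mV) after 50 ms of spiking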

Galactic collisions (python): simulate the collision of two disk galaxies. [tutorial]
Note: I wrote this for a class I taught and will keep the solution hidden from future students. It can be made available by request.

Description: simulates the collision of two galaxies, where each galaxy is modeled as a disk (like the Milky Way) composed of "particles" approximating stellar bodies that orbit a large central mass (a "black hole"). The script automatically computes the Keplerian orbits of the galaxies given a user-defined eccentricity.
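
Without giving away the assignment, the orbit geometry itself is textbook material. The Python sketch below evaluates the conic-section orbit equation and the vis-viva speed for arbitrary, illustrative parameters; it is not the hidden solution.

import numpy as np

G_M = 1.0                        # gravitational parameter of the central mass
a, e = 1.0, 0.6                  # semi-major axis and a chosen eccentricity
theta = np.linspace(0.0, 2.0 * np.pi, 200)
r = a * (1.0 - e**2) / (1.0 + e * np.cos(theta))  # conic-section orbit equation
x, y = r * np.cos(theta), r * np.sin(theta)       # orbit in the plane
v = np.sqrt(G_M * (2.0 / r - 1.0 / a))            # speed from the vis-viva equation
print(r.min(), r.max())          # perihelion a(1 - e), aphelion a(1 + e)
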
Utilities

Simple task scheduler (python/GNU screen): schedule a list of tasks to be run in parallel over a network. [source]

Description: a simple task scheduler that runs through a list of terminal commands that may be executed in parallel on a single host or on multiple hosts across a network. The user defines the number of slots on each host, and the scheduler waits until a slot opens before submitting the next task. This has only been tested on UNIX/Linux-type systems and requires that an SSH server and GNU Screen be installed on the host(s).
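
The gist of the approach can be sketched in a few lines of Python; the hostname, slot count, and session tag below are placeholders rather than the script's actual configuration. Each task runs in a detached GNU Screen session launched over SSH, and a new task is submitted only when the number of live sessions drops below the slot count.

import shlex
import subprocess
import time

HOST, SLOTS, TAG = "localhost", 2, "task"   # placeholder configuration
commands = ["sleep 5", "sleep 5", "sleep 5", "sleep 5"]

def active_sessions():
    # `screen -ls` exits nonzero when no sessions exist, so the status is ignored.
    out = subprocess.run(["ssh", HOST, "screen", "-ls"],
                         capture_output=True, text=True).stdout
    return sum(1 for line in out.splitlines() if "." + TAG in line)

for i, cmd in enumerate(commands):
    while active_sessions() >= SLOTS:       # wait for a slot to open
        time.sleep(1.0)
    subprocess.run(["ssh", HOST, "screen", "-dmS", "%s%d" % (TAG, i),
                    "bash", "-c", shlex.quote(cmd)])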