**Convolutional autoencoder (MATLAB/Octave):** two-dimensional convolutional autoencoder (neural network) for data compression.

[tutorial] [source]
**Description:** least squares objective function, backpropagation gradient, and network model for a convolutional autoencoder with an arbitrary number of hidden layers and hidden units. Solutions can be found using gradient-based algorithms such as stochastic gradient descent, conjugate gradient, and L-BFGS.
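The repository itself is MATLAB/Octave; purely as an illustration of the objective, here is a minimal NumPy sketch of a one-layer convolutional autoencoder trained by gradient descent. Finite differences stand in for the backpropagation gradient here, and all names and hyperparameters are hypothetical rather than taken from the source.

```python
import numpy as np

def conv2d_valid(x, k):
    """Valid-mode 2D correlation of image x with kernel k."""
    kh, kw = k.shape
    H, W = x.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def conv2d_full(x, k):
    """Full-mode 2D correlation (zero padding), used by the decoder."""
    kh, kw = k.shape
    xp = np.pad(x, ((kh - 1, kh - 1), (kw - 1, kw - 1)))
    return conv2d_valid(xp, k)

def loss(x, enc_k, dec_k):
    """Least squares reconstruction objective 0.5 * ||decode(encode(x)) - x||^2."""
    h = np.tanh(conv2d_valid(x, enc_k))   # hidden feature map (the code)
    xhat = conv2d_full(h, dec_k)          # reconstruction, same shape as x
    return 0.5 * np.sum((xhat - x) ** 2)

def numgrad(f, w, eps=1e-6):
    """Central-difference gradient; a real implementation uses backpropagation."""
    g = np.zeros_like(w)
    for idx in np.ndindex(w.shape):
        w[idx] += eps
        fp = f()
        w[idx] -= 2 * eps
        fm = f()
        w[idx] += eps
        g[idx] = (fp - fm) / (2 * eps)
    return g

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))          # toy 8x8 "image"
enc_k = 0.1 * rng.standard_normal((3, 3))
dec_k = 0.1 * rng.standard_normal((3, 3))
initial = loss(x, enc_k, dec_k)
for _ in range(100):                     # plain gradient descent steps
    for w in (enc_k, dec_k):
        w -= 0.002 * numgrad(lambda: loss(x, enc_k, dec_k), w)
final = loss(x, enc_k, dec_k)
```

The same objective and gradient could be handed to any of the optimizers mentioned above (SGD, conjugate gradient, L-BFGS); only the update rule changes.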

**Reinforcement learning of tic-tac-toe:** tabular Q-learning and policy gradient function approximation implementations to teach a computer to play tic-tac-toe.

[tutorial]
[tabular demo]
[tabular source]
[approximate source]
**Description:** this project demonstrates two different approaches to reinforcement learning of the game tic-tac-toe:

- tabular Q-learning written in JavaScript that you can teach yourself (see the demo link above), and
- policy gradient techniques, including REINFORCE and actor-critic, implemented in Python with TensorFlow using an arbitrarily deep dense feedforward neural network with residual connections.
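To give a feel for the tabular approach, here is a minimal Python sketch of Q-learning on tic-tac-toe, with the agent playing X against a uniformly random opponent. It is a hypothetical illustration, not the repository's JavaScript implementation (which, per the demo, supports interactive teaching).

```python
import random

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(b):
    """Return 'X' or 'O' for a win, 'draw' for a full board, else None."""
    for i, j, k in LINES:
        if b[i] != ' ' and b[i] == b[j] == b[k]:
            return b[i]
    return 'draw' if ' ' not in b else None

def moves(b):
    return [i for i, c in enumerate(b) if c == ' ']

Q = {}  # (state string, action index) -> value estimate

def q(s, a):
    return Q.get((s, a), 0.0)

def episode(alpha=0.5, gamma=0.9, eps=0.1):
    """One game: agent plays X with epsilon-greedy Q-learning; O is random."""
    b = [' '] * 9
    while True:
        s = ''.join(b)
        acts = moves(b)
        a = (random.choice(acts) if random.random() < eps
             else max(acts, key=lambda m: q(s, m)))
        b[a] = 'X'
        w = winner(b)
        if w is None:
            b[random.choice(moves(b))] = 'O'   # random opponent reply
            w = winner(b)
        if w == 'X':
            r, target = 1.0, 0.0
        elif w == 'O':
            r, target = -1.0, 0.0
        elif w == 'draw':
            r, target = 0.0, 0.0
        else:  # game continues: bootstrap from the best next action
            s2 = ''.join(b)
            r, target = 0.0, max(q(s2, m) for m in moves(b))
        Q[(s, a)] = q(s, a) + alpha * (r + gamma * target - q(s, a))
        if w is not None:
            return w

random.seed(0)
results = [episode() for _ in range(20000)]
win_rate = results[-2000:].count('X') / 2000   # win rate late in training
```

The policy gradient variants replace the Q-table with a neural network and update its parameters along the gradient of expected return instead of via the temporal-difference rule above.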

**Maximum noise entropy (C):** first- and second-order maximum noise entropy modeling of receptive fields.

[tutorial] [source]
**Description:** computes the weights of a linear or quadratic classifier to reconstruct multicomponent receptive fields of sensory neurons, with regularization options including LASSO, ridge, elastic net, and early stopping. The theoretical background of this model may be found in Fitzgerald, Rowekamp, Sincich, & Sharpee, 2011. The code uses BLAS and OpenMP.
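The repository is C, but the model itself is compact: the second-order maximum noise entropy model is logistic in linear and quadratic stimulus features. As a hypothetical sketch (NumPy, ridge regularization only, plain gradient descent in place of the optimized BLAS/OpenMP routines):

```python
import numpy as np

def mne_nll(X, y, a, h, J, lam=0.0):
    """Average negative log-likelihood of the second-order MNE model
    P(spike|s) = 1 / (1 + exp(a + h.s + s'Js)), plus a ridge penalty."""
    z = a + X @ h + np.einsum('ni,ij,nj->n', X, J, X)
    p = 1.0 / (1.0 + np.exp(z))
    eps = 1e-12
    data = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    return data + lam * (np.sum(h ** 2) + np.sum(J ** 2))

def mne_grads(X, y, a, h, J, lam=0.0):
    """Analytic gradients; per-sample dNLL/dz = (y - p), averaged over N."""
    z = a + X @ h + np.einsum('ni,ij,nj->n', X, J, X)
    p = 1.0 / (1.0 + np.exp(z))
    d = (y - p) / len(y)
    ga = d.sum()
    gh = X.T @ d + 2 * lam * h
    gJ = X.T @ (X * d[:, None]) + 2 * lam * J   # sum_n d_n * s_n s_n'
    return ga, gh, gJ

# Synthetic data drawn from a known second-order model
rng = np.random.default_rng(0)
D, N = 5, 2000
X = rng.standard_normal((N, D))
h_true = rng.standard_normal(D)
J_true = 0.2 * rng.standard_normal((D, D))
z = X @ h_true + np.einsum('ni,ij,nj->n', X, J_true, X)
y = (rng.random(N) < 1.0 / (1.0 + np.exp(z))).astype(float)

# Gradient descent from zero weights
a, h, J = 0.0, np.zeros(D), np.zeros((D, D))
initial = mne_nll(X, y, a, h, J, lam=1e-4)
for _ in range(400):
    ga, gh, gJ = mne_grads(X, y, a, h, J, lam=1e-4)
    a -= 0.2 * ga
    h -= 0.2 * gh
    J -= 0.2 * gJ
final = mne_nll(X, y, a, h, J, lam=1e-4)
```

The first-order model is the special case J = 0; LASSO and elastic net would swap the squared penalty for an L1 or mixed term.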

**Low-rank maximum noise entropy (Python):** low-rank second-order maximum noise entropy modeling of receptive fields.

[tutorial] [source]
**Description:** reconstruct the linear and quadratic feature spaces of a classifier when the feature space is poorly sampled. The *mner* package reduces the number of weights that need to be optimized, relative to full-rank second-order maximum noise entropy, through explicit bilinear factorization of the quadratic weights and nuclear-norm regularization, and is related to the model described in Kaardal, Theunissen, & Sharpee, 2017. The software is written modularly for easy customization.
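The savings from the bilinear factorization are easy to quantify: writing the quadratic weights as J = U Vᵀ with rank r replaces the O(D²) entries of J with 2Dr factor entries, and the Frobenius norms of the factors upper-bound the nuclear norm of J, which is how the factorization acts as nuclear-norm regularization. A small NumPy illustration (dimensions are hypothetical, and this is not the *mner* API):

```python
import numpy as np

# Parameter counts: full-rank symmetric J versus rank-r factors with J = U @ V.T
D, r = 400, 4                        # stimulus dimension and assumed rank
full_weights = D * (D + 1) // 2      # independent entries of a symmetric D x D matrix
lowrank_weights = 2 * D * r          # entries of U and V

# For any factorization J = U V', the penalty (||U||_F^2 + ||V||_F^2) / 2
# is an upper bound on the nuclear norm ||J||_* (sum of singular values).
rng = np.random.default_rng(1)
U = rng.standard_normal((D, r))
V = rng.standard_normal((D, r))
J = U @ V.T
nuclear_norm = np.linalg.svd(J, compute_uv=False).sum()
surrogate = 0.5 * (np.sum(U ** 2) + np.sum(V ** 2))
```

With D = 400 and r = 4 this is 3,200 factor weights in place of 80,200 full-rank weights, and minimizing the surrogate over all factorizations recovers the nuclear norm exactly.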

**Functional bases (MATLAB/Octave):** compute functional inputs of neurons and other classifiers using Boolean operations.

[tutorial] [source]
**Description:** identify functional bases that describe the functional inputs of neurons by taking linear combinations of the receptive field components. The software assumes that the input activations are modeled by logistic functions and that the overall neural response computes Boolean operations on these input activations, as described in Kaardal, Fitzgerald, Berry, & Sharpee. This code is limited to logical OR and logical AND operations but can easily be extended to other operations.
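One common soft realization of such Boolean models (a Python sketch under that assumption, not the MATLAB/Octave code itself, whose parameterization may differ) takes each functional input as a logistic function of a projection of the stimulus and combines the activations as products: AND fires only when all inputs are active, OR when at least one is.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def response_and(s, W):
    """AND-like response: fire only when every functional input is active.
    Rows of W are the functional basis vectors (projections of the stimulus)."""
    return float(np.prod(sigmoid(W @ s)))

def response_or(s, W):
    """OR-like response: fire when at least one functional input is active."""
    return float(1.0 - np.prod(1.0 - sigmoid(W @ s)))

# Two orthogonal functional inputs; stimuli aligned with both or only one
W = np.eye(2)
both = np.array([5.0, 5.0])    # strongly activates both inputs
one = np.array([5.0, -5.0])    # activates only the first input
```

With these stimuli, `response_and` is near 1 only for `both`, while `response_or` is near 1 for either stimulus, which is the qualitative difference the functional-basis fit discriminates.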