Connectionists: open-source for massively parallel neural encoding/decoding of visual stimuli/scenes

Aurel A. Lazar aurel at ee.columbia.edu
Fri Jun 14 10:27:49 EDT 2013


Source code for encoding and decoding natural and synthetic visual scenes (videos) with
Time Encoding Machines, consisting of Gabor or center-surround receptive fields in cascade
with Integrate-and-Fire neurons, is available at http://www.bionet.ee.columbia.edu/code/vtem
The code is written in Python/PyCUDA and runs on a single GPU.
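To illustrate the second stage of the cascade, here is a minimal sketch of time encoding with an ideal Integrate-and-Fire neuron: the (receptive-field-filtered) stimulus plus a bias is integrated until a threshold is reached, a spike time is recorded, and the integrator resets. This is not code from the released package; the function name and the bias/threshold values are illustrative.

```python
import numpy as np

def iaf_encode(v, dt, bias=1.0, delta=0.02):
    """Encode signal samples v (sample spacing dt) into spike times
    with an ideal integrate-and-fire neuron (illustrative parameters)."""
    spikes, acc = [], 0.0
    for k, vk in enumerate(v):
        acc += (bias + vk) * dt       # integrate bias + input
        if acc >= delta:              # threshold crossing
            spikes.append((k + 1) * dt)
            acc -= delta              # subtractive reset
    return np.array(spikes)

# Example: encode a bandlimited test signal
dt = 1e-4
t = np.arange(0.0, 0.2, dt)
v = 0.5 * np.sin(2 * np.pi * 30 * t)
s = iaf_encode(v, dt)
```

With the bias dominating the input, the spike density tracks the signal amplitude: larger input values shorten the inter-spike intervals.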

The current release supports grayscale videos. Stay tuned for color and multi-GPU implementations.

A visual demonstration of decoding a short video stimulus encoded with a Video Time Encoding Machine
consisting of 100,000 Hodgkin-Huxley neurons is available at: http://www.bionet.ee.columbia.edu/research/nce
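For readers unfamiliar with how decoding from spike times works, the following is a self-contained sketch of the classic recovery procedure for a single ideal Integrate-and-Fire encoder of a bandlimited signal: each inter-spike interval yields a measurement of the signal's integral, and the signal is reconstructed as a combination of sinc kernels whose coefficients solve a linear system. This is an assumed, simplified illustration (one temporal channel, no receptive fields, illustrative parameters), not the released multi-neuron implementation.

```python
import numpy as np
from scipy.special import sici  # Si(x), the sine integral

def iaf_encode(u, dt, b=1.0, delta=0.005):
    # Ideal IAF: integrate (b + u) until threshold delta, spike, subtract delta.
    spikes, acc = [], 0.0
    for k, uk in enumerate(u):
        acc += (b + uk) * dt
        if acc >= delta:
            spikes.append((k + 1) * dt)
            acc -= delta
    return np.array(spikes)

def iaf_decode(tk, t, b=1.0, delta=0.005, Omega=2 * np.pi * 40):
    # Each inter-spike interval gives q_l = int_{t_l}^{t_{l+1}} u dt = delta - b*(t_{l+1}-t_l).
    q = delta - b * np.diff(tk)
    s = 0.5 * (tk[:-1] + tk[1:])          # reconstruction centers (interval midpoints)
    # G[l, k] = int_{t_l}^{t_{l+1}} g(t - s_k) dt with kernel g(t) = sin(Omega t)/(pi t),
    # evaluated in closed form via the sine integral Si.
    Si = lambda x: sici(x)[0]
    G = (Si(Omega * (tk[1:, None] - s[None, :]))
         - Si(Omega * (tk[:-1, None] - s[None, :]))) / np.pi
    c = np.linalg.lstsq(G, q, rcond=None)[0]
    # u_hat(t) = sum_k c_k g(t - s_k); note sin(Omega t)/(pi t) = (Omega/pi) sinc(Omega t/pi).
    return (Omega / np.pi) * np.sinc(Omega * (t[:, None] - s[None, :]) / np.pi) @ c

# Encode then decode a bandlimited test signal.
dt = 1e-5
t = np.arange(0.0, 0.2, dt)
u = 0.5 * np.sin(2 * np.pi * 30 * t)
tk = iaf_encode(u, dt)
u_hat = iaf_decode(tk, t)
```

Recovery is faithful when the spike density exceeds the Nyquist rate of the bandwidth Omega (here the average rate b/delta = 200 Hz vs. Omega/pi = 80 Hz); edge effects degrade the reconstruction near the window boundaries.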

Aurel
http://www.bionet.ee.columbia.edu/



