<html><body><div>Dear colleagues,<br><br>We would like to draw your attention to recent work by our R&amp;D group on the hard problem of perception for robots. Real-world perception benefits significantly from spatial and temporal context. However, this kind of information cannot be readily captured by primarily feedforward architectures such as contemporary deep-learning ConvNets. We demonstrate that a highly recurrent, self-supervised network exposed to continuous video can learn enough about the visual world to serve as the basis for a highly accurate visual object tracker.<br><br>These exciting results show how ideas from cognitive neuroscience, namely recurrent feedback and prediction, can be scalably implemented with conventional machine learning methods and applied to an important problem in robotics. They also underscore the importance of neuromorphic chip development and the high degree of flexibility possible in neuromorphic implementation details.<br></div><div><br></div><div>"Unsupervised Learning from Continuous Video in a Scalable Predictive Recurrent Network"<br>Link: https://arxiv.org/abs/1607.06854<br><br>Best regards,<br>Patryk Laurent and Filip Piękniewski</div><div>--</div><div>Patryk Laurent, Ph.D.</div><div>Brain Corporation</div></body></html>