June 6, Wednesday 14:15, Room 303, Jacobs Building

Title: Stability and topology in reservoir computing

Lecturer: Hananel Hazan

Lecturer homepage: http://hananel.wordpress.com/

Affiliation: The University of Haifa


Jaeger, Maass and others have put forth the paradigm of "reservoir computing" and the Liquid State Machine (LSM) framework as a way of computing with highly recurrent neural networks. In both frameworks the reservoir is a collection of neurons randomly connected to each other with fixed weights. Among other things, it has been shown to be effective in temporal pattern recognition, and it has been held up as a model appropriate for explaining how certain aspects of the brain work. In this work we show that, although both models are known to generalize and are therefore robust to errors in the input, they are NOT resistant to errors in the model itself: small malfunctions or distortions render previous training ineffective. Thus the model as currently presented cannot be regarded as an appropriate biological model, and this also suggests limitations on its applicability to pattern recognition. However, we show that when topological constraints are enforced on the reservoir, in particular a small-world topology, the model is indeed fault tolerant. This implies that "natural" computational systems must have specific topologies, and that uniform random connectivity is not appropriate. If time permits, we will discuss a new application, with our colleagues Paolo Avesani and Diego Sona, that uses the Liquid State Machine as an fMRI function approximator.
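
For readers unfamiliar with the setup, the sketch below illustrates the kind of reservoir the abstract refers to: a fixed, randomly connected recurrent network driven by an input stream, with only a linear readout trained. It is a minimal echo-state-style toy, not the speaker's implementation; the reservoir size, connection density, Watts-Strogatz parameters for the small-world variant, and the one-step prediction task are all illustrative assumptions.

```python
# Minimal reservoir sketch (illustrative assumptions throughout).
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
N = 200                  # reservoir size (assumed)
spectral_radius = 0.9    # scaling of recurrent weights (assumed)

def reservoir_weights(topology="small_world"):
    """Fixed recurrent weight matrix with either small-world or uniform random connectivity."""
    if topology == "small_world":
        g = nx.watts_strogatz_graph(N, k=6, p=0.1, seed=0)   # small-world mask
        mask = nx.to_numpy_array(g)
    else:
        mask = (rng.random((N, N)) < 0.03).astype(float)     # uniform random mask
    W = mask * rng.normal(0.0, 1.0, (N, N))
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))    # rescale largest eigenvalue
    return W

W = reservoir_weights()                    # recurrent weights stay fixed
W_in = rng.normal(0.0, 0.5, (N, 1))        # input weights stay fixed

def run(u):
    """Drive the reservoir with a 1-D input sequence and collect its states."""
    x = np.zeros(N)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in[:, 0] * u_t)
        states.append(x.copy())
    return np.array(states)

# Only the linear readout is trained (ridge regression) on a toy
# one-step-ahead prediction task; the reservoir itself is never adapted.
u = np.sin(np.linspace(0, 20, 500))
X = run(u[:-1])
y = u[1:]
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ y)
```

The point of the `topology` switch is the contrast discussed in the talk: the same training procedure can be run on a uniformly random reservoir or on a small-world one, and the claim is that only the latter remains usable when the reservoir itself is perturbed.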