|Department:||Ingenieurwissenschaften und Informatik|
|Full text PDF:||http://vts.uni-ulm.de/docs/2015/9482/vts_9482_14315.pdf|
The human brain has the capability to organize its neurons, through experience-adapted connections, to perform specific tasks faster and far more efficiently than any digital computer in existence today; pattern recognition and image processing are well-known examples. This is because the computing concept of the brain differs completely from that of conventional digital computers: its key feature is the massively parallel, nonlinear, collective processing of a large number of signals that are continuous in time and amplitude. Artificial neural networks imitate this computing concept in order to solve tasks arising in many scientific disciplines as efficiently as possible. Neural networks with feedback form a special class known as recurrent neural networks. Very-large-scale integration (VLSI) technology has been shown to be a well-suited implementation medium for neural networks: compared with digital technology, certain computations consume less area and/or power when performed in analog, and analog signal-processing systems can outperform digital systems by several orders of magnitude in terms of speed and/or power consumption. In this thesis we focus on continuous-time and discrete-time recurrent neural networks as dynamical solvers for equalization and channel-decoding tasks, and we perform a thorough analysis of their rich dynamics. With this approach there is no need for learning (training) strategies. First steps towards continuous-time joint equalization and decoding are presented as well.
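The idea of a recurrent network that needs no training can be illustrated with a classical discrete-time Hopfield network: its weights are computed analytically (here via a simple Hebbian rule) rather than learned, and the network's dynamics are iterated until they settle in a fixed point. The sketch below is an assumption-laden toy example, not code from the thesis; it recovers a stored bipolar pattern from a corrupted input, loosely analogous to how such fixed-weight dynamics can be used to equalize or decode a received signal.

```python
import numpy as np

# Two bipolar (+1/-1) reference patterns the network should "remember".
patterns = np.array([[ 1, -1,  1, -1,  1, -1],
                     [ 1,  1,  1, -1, -1, -1]], dtype=float)
n = patterns.shape[1]

# Hebbian weight construction: no iterative training, weights follow
# directly from the patterns. Self-connections are set to zero.
W = (patterns.T @ patterns) / n
np.fill_diagonal(W, 0.0)

def recall(state, steps=20):
    """Iterate the recurrent dynamics s <- sign(W s) to a fixed point."""
    s = state.astype(float).copy()
    for _ in range(steps):
        s = np.sign(W @ s)   # synchronous update with sign nonlinearity
        s[s == 0] = 1.0      # break ties deterministically
    return s

# Corrupt the first pattern by flipping one component, then recover it.
noisy = patterns[0].copy()
noisy[0] *= -1
print(recall(noisy))  # converges back to patterns[0]
```

In an equalization setting, the weights would instead be derived from the channel's impulse response, but the principle is the same: the solution is encoded in the network's attractor structure, not in trained parameters.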