Learning spike time codes through supervised and unsupervised structural plasticity
Date of Issue: 2016
School of Electrical and Electronic Engineering
Large-scale spiking neural networks (SNN) are typically implemented on chip using mixed analog-digital circuits. While the models of the network components (neurons, synapses, and dendrites) are implemented with analog VLSI techniques, the connectivity information of the network is stored in an on-chip digital memory. Since the connectivity information is virtual, the user has full flexibility in reconfiguring the network. When a new task is encountered, a software model of the network is trained on a computer and the trained weights are downloaded to the digital memory. Hence, the analog part works in conjunction with the digital part to form a properly weighted SNN suitable for the task at hand. However, very few hardware systems emulating SNNs have been reported to solve real-world pattern recognition tasks with performance comparable to their software counterparts. A major challenge in matching the classification accuracy of a software implementation running on a computer is that statistical variations in VLSI devices diminish the accuracy of synaptic weights. Most current neuromorphic systems require high-resolution synaptic weights and hence are affected by this problem. In this thesis, for enhanced robustness to mismatch and efficient hardware implementation, we have considered neurons with binary synapses for the recognition of spatiotemporal spike trains. To compensate for the reduced computational power of binary synapses, we look to more biophysical models of neurons. Inspired by the nonlinear properties of dendrites in biological neurons, the proposed networks incorporate neurons having multiple dendrites with a lumped nonlinearity (a two-compartment model). We have shown that such a neuron with nonlinear dendrites (NNLD) has higher memory capacity than its linear counterpart and that networks employing it provide state-of-the-art performance.
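As a minimal sketch, the two-compartment NNLD computation described above can be written as follows. The squaring nonlinearity, the branch count, and the input dimensions are illustrative assumptions for this sketch, not the exact parameters used in the thesis; the `nnld_output` name is likewise hypothetical.

```python
import numpy as np

def nnld_output(x, branch_masks, nonlinearity=lambda z: z ** 2):
    """Sketch of a neuron with nonlinear dendrites (NNLD).

    x            : input activity vector (e.g. synaptic drive per afferent)
    branch_masks : binary connection matrix, one row per dendritic branch;
                   entries are 0/1 because synapses are binary
    Each branch linearly sums its connected inputs, a lumped nonlinearity
    is applied per branch, and the soma sums the branch outputs.
    """
    branch_sums = branch_masks @ x          # linear integration per branch
    return nonlinearity(branch_sums).sum()  # lumped nonlinearity, then soma sum

rng = np.random.default_rng(0)
x = rng.random(20)                                  # 20 afferent inputs
masks = (rng.random((5, 20)) < 0.2).astype(float)   # 5 branches, sparse binary wiring
y = nnld_output(x, masks)
```

Because the branch nonlinearity is applied before the somatic sum, which inputs share a branch matters; this is what gives the NNLD more capacity than a single linear summation over the same binary synapses.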
Since binary synapses are used, learning happens through the formation and elimination of connections between the inputs and the dendritic branches, modifying the structure or morphology of the network. Hence, learning involves network rewiring (NRW) of the spiking neural network, similar to the structural plasticity observed in its biological counterparts. However, structural-plasticity-based learning rules require high-dimensional sparse inputs. A popular way to map an input pattern into a high-dimensional space while enforcing sparse coding is to use non-overlapping binary-valued receptive fields. This method, though suitable for rate-coded inputs, cannot be applied to spike-coded inputs. To overcome this issue, taking inspiration from the Liquid State Machine (LSM), a popular model for reservoir computing, we have used a spiking neural network reservoir to convert the input spike trains into a high-dimensional sparse spatiotemporal spike encoding. Subsequently, we have used this high-dimensional spike data to train our NNLD network with binary synapses through the proposed learning rule. We have shown that, compared to a single perceptron using analog weights, this readout architecture can attain, even using the same number of binary-valued synapses, up to 3.3 times less error on a two-class spike train classification problem and 2.4 times less error on an input rate approximation task. Even with 60 times more synapses, a group of 60 parallel perceptrons cannot match the performance of the proposed dendritically enhanced architecture. Furthermore, we have shown that, due to the use of binary synapses, the proposed method is more robust against statistical variations. We have also looked into the on-chip implementation of the NNLD and the NRW learning rule for online training of the proposed readout. This is the first contribution of this thesis.
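The rewiring idea above can be illustrated with a simplified greedy step: eliminate the least useful existing connection and form the most promising new one. This is only a sketch under assumptions; the `fitness` matrix stands in for the connection-usefulness measure the thesis derives (e.g. from correlations with the output error), and `nrw_step` is a hypothetical name.

```python
import numpy as np

def nrw_step(branch_masks, fitness):
    """One network-rewiring (NRW) step: structural plasticity with binary synapses.

    Instead of updating weight values, learning forms and eliminates
    connections: find the least fit active connection, and on that same
    branch swap it for the most promising unconnected input.

    fitness : matrix of the same shape as branch_masks scoring how useful
              each (branch, input) connection would be (placeholder here).
    """
    masks = branch_masks.copy()
    # Least fit currently-active connection across all branches.
    active = np.where(masks == 1, fitness, np.inf)
    b, i_old = np.unravel_index(np.argmin(active), masks.shape)
    # Most fit currently-silent input on that same branch.
    silent = np.where(masks[b] == 0, fitness[b], -np.inf)
    i_new = int(np.argmax(silent))
    if fitness[b, i_new] > fitness[b, i_old]:   # rewire only if it helps
        masks[b, i_old], masks[b, i_new] = 0.0, 1.0
    return masks
```

Note that every intermediate state is a valid binary connection matrix, which is what makes this style of learning compatible with connectivity stored as an on-chip digital memory.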
The second contribution arises from increasing the memory capacity of NNLD-based networks in recognizing high-dimensional spike trains. A morphological learning algorithm inspired by the Tempotron, a recently proposed temporal learning rule, is presented in this thesis. Unlike the Tempotron, the proposed learning rule uses a technique to automatically adapt the neuronal firing threshold during training. Experimental results indicate that our NNLD with binary or 1-bit synapses can obtain accuracy similar to that of a traditional Tempotron with 4-bit synapses in classifying single-spike random-latency and pairwise-synchrony patterns. We have also presented results of applying this rule to real-life spike classification problems from the field of tactile sensing. The two contributions above are supervised learning rules for training an NNLD with binary synapses. The third contribution of this thesis is an unsupervised learning rule for training NNLDs. We have proposed a novel Winner-Take-All (WTA) architecture employing an array of NNLDs with binary synapses, together with an online unsupervised structural plasticity rule for training it. The proposed unsupervised learning rule is inspired by spike-timing-dependent plasticity (STDP) but differs for each dendrite based on its activation level. It trains the WTA network through the formation and elimination of connections between inputs and dendritic branches. To demonstrate the performance of the proposed network and learning rule, we have employed it to solve two-, four-, and six-class classification of random Poisson spike-time inputs. The results indicate that by proper tuning of the inhibitory time constant of the WTA, a trade-off between the specificity and sensitivity of the network can be achieved. We use the inhibitory time constant to set the number of subpatterns per pattern we want to detect.
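One unsupervised step of the WTA scheme above can be sketched as: the most strongly responding NNLD wins, and only the winner rewires, on a dendrite chosen by its activation level. This is a hedged illustration, not the thesis's exact rule: the per-input `fitness` vector is a placeholder for the STDP-inspired measure, inhibition dynamics are omitted, and the squaring nonlinearity and `wta_update` name are assumptions.

```python
import numpy as np

def wta_update(neuron_masks, x, fitness, nonlinearity=lambda z: z ** 2):
    """One unsupervised structural-plasticity step for a WTA array of NNLDs.

    neuron_masks : list of binary connection matrices, one per NNLD
    x            : current input activity vector
    fitness      : per-input score guiding which connections to form
                   (placeholder for the STDP-inspired measure in the thesis)
    """
    # Winner-take-all: pick the most strongly responding neuron.
    responses = [nonlinearity(m @ x).sum() for m in neuron_masks]
    w = int(np.argmax(responses))
    masks = neuron_masks[w].copy()
    # Rewire the winner on its least activated dendritic branch.
    b = int(np.argmin(masks @ x))
    connected = masks[b] == 1
    i_old = int(np.argmin(np.where(connected, fitness, np.inf)))
    i_new = int(np.argmax(np.where(~connected, fitness, -np.inf)))
    if fitness[i_new] > fitness[i_old]:   # swap only if the new input is fitter
        masks[b, i_old], masks[b, i_new] = 0.0, 1.0
    updated = list(neuron_masks)
    updated[w] = masks
    return w, updated
```

Restricting plasticity to the winner is what lets different NNLDs in the array specialize to different input patterns without any labels.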
We show that while the percentage of successful trials is 92%, 88%, and 82% for two-, four-, and six-class classification when no pattern subdivisions are made, it increases to 100% when each pattern is subdivided into 5 or 10 subpatterns. However, the former scenario of no pattern subdivision is more jitter-resilient than the latter ones. Apart from bio-realism and performance, an additional advantage of this method for hardware implementations is that the 'choice' of connectivity can be easily implemented by exploiting the address event representation (AER) protocols commonly used in current neuromorphic systems, where the connection matrix is stored in memory. Also, due to the use of binary synapses, the proposed algorithms are less affected by VLSI mismatch. The algorithms proposed in this thesis can find direct application in classifying spatiotemporal spike patterns arriving from the domains of Brain Machine Interfaces (BMI), tactile sensing, sensory prosthesis, etc. However, the main contribution of this thesis is that it challenges the long-standing bias towards neural networks with high-resolution weights and weight-update-based learning rules. We hope this thesis will encourage the use of neural networks with binary synapses and connection-based learning rules for solving pattern recognition tasks in hardware. While traditional networks had to incorporate sparsity by using fewer neurons, our learning rules inherently form sparse networks by making sparse connections between inputs and dendrites.