Category Archives: Publication

LCA Paper Published

Our LCA paper is published!

W. Woods and C. Teuscher, “Fast and Accurate Sparse Coding of Visual Stimuli With a Simple, Ultralow-Energy Spiking Architecture,” in IEEE Transactions on Neural Networks and Learning Systems, vol. 30, no. 7, pp. 2173-2187, July 2019. DOI: https://doi.org/10.1109/TNNLS.2018.2878002


New Chaos Paper

Our new paper has been accepted by the journal Chaos: P. Banda, J. Caughman, M. Cenek, C. Teuscher, “Shift-Symmetric Configurations in Two-Dimensional Cellular Automata: Irreversibility, Insolvability, and Enumeration.”

Abstract: The search for symmetry, as an unusual yet profoundly appealing phenomenon, and the origin of regular, repeating configuration patterns have long been a central focus of complexity science and physics. Here, we introduce group-theoretic concepts to identify and enumerate the symmetric inputs, which result in irreversible system behaviors with undesired effects on many computational tasks. The concept of so-called configuration shift-symmetry is applied to two-dimensional cellular automata as an ideal model of computation. The results show the universal insolvability of “non-symmetric” tasks regardless of the transition function. By using a compact enumeration formula and bounding the number of shift-symmetric configurations for a given lattice size, we efficiently calculate how likely a configuration randomly generated from a uniform or density-uniform distribution is to be shift-symmetric. Further, we devise an algorithm that detects the presence of shift-symmetry in a configuration. The enumeration and probability formulas can directly help to lower the minimal expected error for many crucial (non-symmetric) distributed problems, such as leader election, edge detection, pattern recognition, convex hull/minimum bounding rectangle, and encryption. Besides cellular automata, the shift-symmetry analysis can be used to study non-linear behavior in various synchronous rule-based systems, including inference engines, Boolean networks, neural networks, and systolic arrays.
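For intuition, here is a minimal Python sketch (not the paper's algorithm) that tests configuration shift-symmetry by brute force: a configuration on a toroidal lattice is shift-symmetric if some nonzero translation maps it onto itself. The function name `is_shift_symmetric`, the 4x4 lattice size, and the Monte Carlo estimate are illustrative choices only; the paper derives exact enumeration and probability formulas and a more efficient detection algorithm.

```python
# Hedged sketch: brute-force check for configuration shift-symmetry on a
# 2-D toroidal lattice. This is NOT the paper's detection algorithm; it
# simply tests every nonzero translation directly, which is enough to
# illustrate the concept for small lattices.
import numpy as np

def is_shift_symmetric(config: np.ndarray) -> bool:
    """Return True if some nonzero periodic shift maps `config` onto itself."""
    h, w = config.shape
    for dy in range(h):
        for dx in range(w):
            if dy == 0 and dx == 0:
                continue  # the identity shift does not count
            if np.array_equal(np.roll(config, (dy, dx), axis=(0, 1)), config):
                return True
    return False

# Estimate how often a uniformly random binary configuration is shift-symmetric
# (the quantity the paper's enumeration formula gives exactly).
rng = np.random.default_rng(0)
n, trials = 4, 10_000
hits = sum(is_shift_symmetric(rng.integers(0, 2, size=(n, n))) for _ in range(trials))
print(f"empirical P(shift-symmetric) on a {n}x{n} lattice ~ {hits / trials:.4f}")
```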

New Asynchronous RBN paper accepted

Our latest paper, “On the Sparse Percolation of Damage in Finite Non-Synchronous Random Boolean Networks” (M. Ishii, C. Teuscher, J. Gores), was accepted by Physica D. It is co-authored by a tlab undergraduate and a high school student. Way to go!

Paper: https://doi.org/10.1016/j.physd.2019.05.011

The paper presents an inventory of non-synchronous updating schemes and their effect on the sparse percolation (SP) of damage in finite random Boolean networks (RBNs). The results contribute to better understanding the robustness and information processing capabilities of complex systems with more biologically-plausible updating schemes.
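As a rough illustration of the setting (not the paper's inventory of schemes or its sparse-percolation measure), the following Python sketch builds a small RBN, updates it asynchronously one randomly chosen node at a time, flips a single node in a copy, and tracks how far the damage spreads. The network size, the connectivity K=2, and the choice to reuse the same update sequence for both copies are assumptions made for this toy.

```python
# Hedged sketch: damage spread in a finite random Boolean network (RBN) under
# one simple non-synchronous scheme (random asynchronous, one node per step).
# This illustrates the general setup only; the paper studies a whole inventory
# of updating schemes, and this toy is not that protocol.
import numpy as np

rng = np.random.default_rng(42)
N, K, STEPS = 100, 2, 2000

inputs = rng.integers(0, N, size=(N, K))       # K random inputs per node
tables = rng.integers(0, 2, size=(N, 2 ** K))  # random Boolean functions

def step(state, node):
    """Update a single node in place from its lookup table."""
    idx = 0
    for bit in state[inputs[node]]:
        idx = (idx << 1) | int(bit)
    state[node] = tables[node, idx]

base = rng.integers(0, 2, size=N)
damaged = base.copy()
damaged[0] ^= 1                                # flip one node: the initial damage

# Reuse the SAME random update sequence for both copies so the comparison is
# well defined under asynchronous updating (a modelling choice of this sketch).
order = rng.integers(0, N, size=STEPS)
for node in order:
    step(base, node)
    step(damaged, node)

print("Hamming distance after", STEPS, "updates:", int(np.sum(base != damaged)))
```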

Fast and Accurate Sparse Coding of Visual Stimuli With a Simple, Ultralow-Energy Spiking Architecture

Walt Woods, Ph.D. candidate, and Christof Teuscher, Electrical and Computer Engineering faculty, co-authored “Fast and Accurate Sparse Coding of Visual Stimuli With a Simple, Ultralow-Energy Spiking Architecture,” published in IEEE Transactions on Neural Networks and Learning Systems.

The V1 visual layer of mammalian brains has been identified as performing sparse coding (SC) to help the rest of the brain process imagery received from the eyes. Sparse coding compresses an input stimulus in a way that retains key details while saving energy by not transmitting irrelevant ones.

In this work, Woods et al. proposed a new architecture named the Simple Spiking Locally Competitive Algorithm (SSLCA). The SSLCA uses spiking signals, inspired by those in biological brains, to take visual information and re-encode it in a sparse format for easier processing. The architecture is enabled by memristors, next-generation nanodevices with dynamic resistances. Using these devices to weight and transmit information between the image input and the resulting sparse code, the SSLCA consumes only 1% of the energy of previously proposed sparse coding architectures and processes images at a 21-times-higher rate. Even though memristors are noisy devices that do not produce clean signals, the architecture was shown to tolerate write variances of up to 27% and read variances of up to 40%.

Woods et al. also studied the combination of such a sparse coding device with a state-of-the-art deep neural network for image processing, showing that, like the V1 cortex, the SSLCA can compress visual information efficiently while retaining the details necessary for good classification performance. Sparse coding architectures such as the proposed SSLCA could greatly reduce the communication bandwidth between visual sensors and other processing algorithms, such as deep learning networks.
Full paper: https://doi.org/10.1109/TNNLS.2018.2878002
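For readers unfamiliar with locally competitive algorithms, the sketch below shows the classical, non-spiking LCA dynamics that the SSLCA hardware approximates: a dictionary drives a population of units that inhibit one another until only a few remain active. This is purely illustrative software; the published SSLCA is a spiking, memristor-based circuit, and the dictionary, thresholds, and sizes below are made-up values.

```python
# Hedged sketch: classical (non-spiking) Locally Competitive Algorithm (LCA)
# dynamics for sparse coding. The actual SSLCA is a spiking, memristor-based
# circuit; this software toy only illustrates how lateral inhibition drives a
# sparse code `a` for an input `x`, given a fixed dictionary Phi.
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_neurons = 64, 128          # e.g. an 8x8 image patch, 2x overcomplete

Phi = rng.standard_normal((n_inputs, n_neurons))
Phi /= np.linalg.norm(Phi, axis=0)     # unit-norm dictionary columns

def lca(x, Phi, lam=0.1, tau=10.0, dt=1.0, steps=200):
    """Run LCA dynamics and return the sparse coefficients a."""
    drive = Phi.T @ x                          # feed-forward input to each unit
    G = Phi.T @ Phi - np.eye(Phi.shape[1])     # lateral inhibition weights
    u = np.zeros(Phi.shape[1])                 # internal (membrane-like) states
    for _ in range(steps):
        a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)   # soft threshold
        u += (dt / tau) * (drive - u - G @ a)
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

# Synthetic input built from a few dictionary elements, then sparsely re-coded.
x = Phi @ (rng.standard_normal(n_neurons) * (rng.random(n_neurons) < 0.05))
a = lca(x, Phi)
print("active coefficients:", int(np.count_nonzero(a)), "of", n_neurons)
print("reconstruction error:", float(np.linalg.norm(x - Phi @ a)))
```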