Category Archives: Publication

Two New Reservoir Computing Papers Published

N. Babson and C. Teuscher, "Reservoir Computing with Complex Cellular Automata," Complex Systems, 28(4), 2019, pp. 433–455.
https://doi.org/10.25088/ComplexSystems.28.4.433

S. J. D. Tran and C. Teuscher, “Hierarchical Memcapacitive Reservoir Computing Architecture,” 2019 IEEE International Conference on Rebooting Computing (ICRC), San Mateo, CA, USA, 2019, pp. 1–6. https://doi.org/10.1109/ICRC.2019.8914716

Nature paper on “Adversarial explanations for understanding image classification decisions and improved neural network robustness”

Our latest work was published in Nature Machine Intelligence this week: Woods, W., Chen, J. & Teuscher, C., “Adversarial explanations for understanding image classification decisions and improved neural network robustness,” Nature Machine Intelligence (2019). https://doi.org/10.1038/s42256-019-0104-6

“Deep neural networks can be led to misclassify an image when minute changes that are imperceptible to humans are introduced. While for some networks this ability can cast doubt on the reliability of the model, it also offers explainability for networks that use more robust regularization.”

Abstract: For sensitive problems, such as medical imaging or fraud detection, neural network (NN) adoption has been slow due to concerns about their reliability, leading to a number of algorithms for explaining their decisions. NNs have also been found to be vulnerable to a class of imperceptible attacks, called adversarial examples, which arbitrarily alter the output of the network. Here we demonstrate both that these attacks can invalidate previous attempts to explain the decisions of NNs, and that with very robust networks, the attacks themselves may be leveraged as explanations with greater fidelity to the model. We also show that the introduction of a novel regularization technique inspired by the Lipschitz constraint, alongside other proposed improvements including a half-Huber activation function, greatly improves the resistance of NNs to adversarial examples. On the ImageNet classification task, we demonstrate a network with an accuracy-robustness area (ARA) of 0.0053, an ARA 2.4 times greater than the previous state-of-the-art value. Improving the mechanisms by which NN decisions are understood is an important direction for both establishing trust in sensitive domains and learning more about the stimuli to which NNs respond.
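
For readers less familiar with adversarial examples, the sketch below illustrates the basic idea using the standard fast gradient sign method (FGSM). This is a generic textbook illustration rather than the attack or defense studied in the paper, and it assumes a differentiable PyTorch image classifier:

# Minimal FGSM-style sketch of an adversarial perturbation (a standard
# technique used here only for illustration, not the method in the paper).
import torch
import torch.nn.functional as F

def fgsm_example(model, image, label, epsilon=0.01):
    # model:   a differentiable classifier returning class logits
    # image:   tensor of shape (1, C, H, W) with values in [0, 1]
    # label:   true class index, tensor of shape (1,)
    # epsilon: maximum per-pixel change (L-infinity budget)
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Nudge every pixel by epsilon in the direction that increases the loss;
    # for small epsilon the change is imperceptible yet can flip the prediction.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()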

Open Access pre-print: https://arxiv.org/abs/1906.02896

Podcast: https://www.stitcher.com/podcast/the-data-skeptic-podcast/e/67341825

Reddit thread: https://www.reddit.com/r/MachineLearning/comments/ds0st4/r_adversarial_explanations_for_understanding

Shift-symmetric configurations in two-dimensional cellular automata: Irreversibility, insolvability, and enumeration

Now available online: P. Banda, J. Caughman, M. Cenek, C. Teuscher, “Shift-symmetric configurations in two-dimensional cellular automata: Irreversibility, insolvability, and enumeration,” Chaos 29, 063120 (2019), https://doi.org/10.1063/1.5089889

Symmetry is a synonym for beauty and rarity, and is generally perceived as something desirable. In this paper we investigate an opposing side of symmetry and show how it can irreversibly “corrupt” a computation and restrict a system’s dynamics and potential. We demonstrate this fundamental phenomenon, which we call “configuration shift-symmetry,” and its effect on many crucial distributed tasks in the simplest grid-like synchronous system, a cellular automaton. We show how to count these symmetric inputs as a function of lattice size and its prime factorization, how likely they are to be encountered, and how to detect them.
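
As a rough illustration of configuration shift-symmetry (an informal reading of the definition, not the paper’s reference implementation): a configuration on a periodic lattice is shift-symmetric if some nontrivial translation maps it exactly onto itself, which can be checked by brute force for small lattices:

# Sketch: detect shift-symmetric configurations on a small periodic 2D lattice.
# Illustrative only; the paper counts such configurations analytically from the
# lattice size and its prime factorization rather than by enumeration.
import numpy as np

def is_shift_symmetric(config):
    # True if config equals a cyclic shift of itself by any nontrivial (dx, dy).
    rows, cols = config.shape
    for dx in range(rows):
        for dy in range(cols):
            if (dx, dy) == (0, 0):
                continue  # the identity shift is excluded
            if np.array_equal(np.roll(config, (dx, dy), axis=(0, 1)), config):
                return True
    return False

# Example: a 4x4 checkerboard is mapped onto itself by the shift (1, 1).
checkerboard = np.indices((4, 4)).sum(axis=0) % 2
print(is_shift_symmetric(checkerboard))  # True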

On the sparse percolation of damage in finite non-synchronous random Boolean networks

Now available online: M. Ishii, J. Gores, C. Teuscher, “On the sparse percolation of damage in finite non-synchronous random Boolean networks,” Physica D, 398:84-91, 2019. https://doi.org/10.1016/j.physd.2019.05.011