Richard Granger

Keynote Speaker

Department of Psychological and Brain Sciences and Thayer School of Engineering
Dartmouth College
(October 17, 2014)

“From Percept to Concept: Proposed Brain Circuit Computation”

The human brain is an engineering marvel. The quest to build an artificial brain, and artificial intelligence more broadly, has been stymied by the fact that tasks that are easy and automatic for the human brain (such as language learning) remain far more difficult for artificial systems. How, for example, would an artificial brain tell the story of what is happening in a video or picture? In his talk, Dr. Granger discussed the development of algorithms for artificial brain networks that are grounded in actual data from human research. His work focuses on creating algorithms that approximate distinct human brain functions, such as attention, language, and semantic learning, in the hope that these can also illuminate how the corresponding processes are constructed in humans.

Easy tasks for humans are often the most difficult for artificial systems, and vice versa. Many cognitive tasks are ill-specified, and the only reason we know that today's impressive engineering systems for vision and language can be outperformed is that biological systems outperform them. Developing algorithms derived from brain circuits may thus be a highly pragmatic path to engineering substantially more intelligent systems, as well as to a scientific understanding of how cognition arises from brains. Current artificial neural network (“deep learning”) models represent a surprisingly modest subset of brain-like algorithms. This perhaps accounts for the current wide gap between the capabilities of even the most advanced extant artificial systems and human capabilities (e.g., rapid learning from few instances, learning by being taught, attentional mechanisms, navigation, structure, temporal sequences, and semantic language meaning). This gap is large in most realms other than statistical “big data” analysis. The assortment of architectural layouts across brain structures, although richly diverse, is nonetheless sharply constrained (by allometry, repeated design, component precision, and the Amdahl fractions of specific algorithms; see Granger 2011; 2015), giving rise to a circumscribed “instruction set” of derived elemental operations from which all complex perceptual and cognitive abilities presumably may be composed. The derived brain circuit algorithms include many that are not typically thought of as primitive: sequence completion, hierarchical clustering, retrieval trees, hash coding, and compression are all (unexpectedly) directly derivable from the structure and operation of particular circuits (see Rodriguez et al., 2004; Granger 2006).
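To give the flavor of one such elemental operation, the toy sketch below implements hierarchical clustering as plain single-linkage agglomeration over one-dimensional points. It is a generic textbook illustration written for this summary, not the published circuit derivation; the function name and the restriction to 1-D points are choices made here for brevity.

    # Toy illustration of hierarchical clustering (single-linkage
    # agglomeration) over 1-D points. A generic sketch only; not
    # the circuit-derived algorithm described in the talk.

    def hierarchical_cluster(points, target_clusters):
        """Greedily merge the closest pair of clusters until only
        target_clusters remain."""
        # Start with each point in its own cluster.
        clusters = [[p] for p in points]
        while len(clusters) > target_clusters:
            # Find the pair of clusters with the smallest
            # inter-point (single-linkage) distance.
            best = None
            for i in range(len(clusters)):
                for j in range(i + 1, len(clusters)):
                    d = min(abs(a - b)
                            for a in clusters[i] for b in clusters[j])
                    if best is None or d < best[0]:
                        best = (d, i, j)
            _, i, j = best
            # Merge the closest pair (j > i, so index i stays valid).
            clusters[i].extend(clusters.pop(j))
        return clusters

    print(hierarchical_cluster([0.1, 0.2, 0.9, 1.0, 5.0], target_clusters=2))
    # -> [[0.1, 0.2, 0.9, 1.0], [5.0]]

Running the agglomeration to different depths yields the nested, successively coarser groupings characteristic of hierarchical clustering, which is the property the talk attributes to particular brain circuits.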

A number of software and hardware implementations of these brain circuit systems have been analyzed for computational costs and efficacy and carefully tested against standard approaches on known data sets (images, videos, speech, robotics, navigation), with published positive results and field tests (Moorkanikara et al., 2009; Chandrashekar et al., 2012; 2013; Bowen et al., 2015; Nunes et al., 2015). If the derived instructions constitute the basic operations from which complex mental abilities are constructed, it may be possible to establish a unified formalism for describing human faculties from perception and learning to reasoning and language; this is an ongoing topic of study (Rodriguez & Granger, 2015). Also of interest are tests of the limits of these capabilities. Initial results unexpectedly suggest that these brain mechanisms are equivalent to nested-stack pushdown grammars, long noted as the estimated complexity of human natural languages but far short of Turing-complete, and even short of fully context-sensitive grammars. These families of nested-stack grammars are nonetheless very computationally powerful; both their capabilities and their limits may be of scientific and engineering interest (Rodriguez & Granger, 2015).
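As a minimal illustration of where that grammar family sits in the hierarchy, the sketch below recognizes the language a^n b^n c^n, a standard example that lies beyond context-free grammars yet within the indexed/nested-stack family. The recognizer is a plain counting check written for this summary, not the formalism of Rodriguez & Granger (2015).

    # The language {a^n b^n c^n : n >= 0} cannot be generated by any
    # context-free grammar, but it can be by nested-stack (indexed)
    # grammars, illustrating the extra power the text describes.
    # This recognizer is a simple direct check, not a grammar-based parser.

    def in_anbncn(s):
        """Accept strings of the form a^n b^n c^n, n >= 0."""
        n = len(s) // 3
        return len(s) == 3 * n and s == "a" * n + "b" * n + "c" * n

    assert in_anbncn("aabbcc")        # n = 2: accepted
    assert not in_anbncn("aabbc")     # unequal counts: rejected
    assert not in_anbncn("abcabc")    # right counts, wrong order: rejected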