Spikes: Exploring the Neural Code Fred Rieke, David Warland, Rob de Ruyter van Steveninck, William Bialek

Our perception of the world is driven by input from the sensory nerves. This input arrives encoded as sequences of identical spikes. Much of neural computation involves processing these spike trains. What does it mean to say that a certain set of spikes is the right answer to a computational problem? In what sense does a spike train convey information about the sensory world? Spikes begins by providing precise formulations of these and related questions about the representation of sensory signals in neural spike trains. The answers to these questions are then pursued in experiments on sensory neurons.

The authors invite the reader to play the role of a hypothetical observer inside the brain who makes decisions based on the incoming spike trains. Rather than asking how a neuron responds to a given stimulus, the authors ask how the brain could make inferences about an unknown stimulus from a given neural response. The flavor of some problems faced by the organism is captured by analyzing the way in which the observer can make a running reconstruction of the sensory stimulus as it evolves in time. These ideas are illustrated by examples from experiments on several biological systems.

Intended for neurobiologists with an interest in the mathematical analysis of neural data, as well as the growing number of physicists and mathematicians interested in information processing by "real" nervous systems, Spikes provides a self-contained review of relevant concepts in information theory and statistical decision theory. A quantitative framework is used to pose precise questions about the structure of the neural code. These questions in turn influence both the design and analysis of experiments on sensory neurons.
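
As a concrete illustration of the decoding perspective described above, here is a minimal sketch of linear stimulus reconstruction from a spike train. It is not the authors' analysis; the encoding model, filter length, and all constants are invented for illustration.

```python
# A minimal sketch of linear stimulus reconstruction from a spike train,
# in the spirit of the decoding approach described above. All parameter
# values are illustrative, not taken from the book.
import numpy as np

rng = np.random.default_rng(0)
dt, T = 0.001, 50.0                                  # 1 ms bins, 50 s of data
n = int(T / dt)

# Smooth Gaussian stimulus (low-pass filtered white noise).
stim = np.convolve(rng.standard_normal(n), np.ones(50) / 50, mode="same")

# Encode: inhomogeneous Poisson spikes, rate increases with the stimulus.
rate = np.maximum(20.0 + 40.0 * stim, 0.0)           # spikes/s, rectified
spikes = (rng.random(n) < rate * dt).astype(float)

# Decode: fit a linear filter over +/-100 ms of spike history by
# ridge-regularized least squares, then reconstruct the stimulus.
lags = np.arange(-100, 101)                          # in 1 ms bins
X = np.column_stack([np.roll(spikes, -k) for k in lags])
w = np.linalg.solve(X.T @ X + 10.0 * np.eye(len(lags)), X.T @ stim)
stim_hat = X @ w

corr = np.corrcoef(stim, stim_hat)[0, 1]
print(f"reconstruction correlation: {corr:.2f}")
```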

ISBN 0262681080
Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting Eugene M. Izhikevich  

In order to model neuronal behavior or to interpret the results of modeling studies, neuroscientists must call upon methods of nonlinear dynamics. This book offers an introduction to nonlinear dynamical systems theory for researchers and graduate students in neuroscience. It also provides an overview of neuroscience for mathematicians who want to learn the basic facts of electrophysiology.

Dynamical Systems in Neuroscience presents a systematic study of the relationship of electrophysiology, nonlinear dynamics, and computational properties of neurons. It emphasizes that information processing in the brain depends not only on the electrophysiological properties of neurons but also on their dynamical properties.

The book introduces dynamical systems, starting with one- and two-dimensional Hodgkin-Huxley-type models and continuing to a description of bursting systems. Each chapter proceeds from the simple to the complex, and provides sample problems at the end. The book explains all necessary mathematical concepts using geometrical intuition; it includes many figures and few equations, making it especially suitable for non-mathematicians. Each concept is presented in terms of both neuroscience and mathematics, providing a link between the two disciplines.
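
For a taste of the two-dimensional models the book analyzes, here is a minimal simulation of the author's well-known "simple model" with regular-spiking parameters; the stimulus protocol and step size are illustrative choices.

```python
# A minimal simulation of Izhikevich's two-dimensional "simple model",
# one example of the planar spiking models the book analyzes.
# Regular-spiking parameters; forward Euler with a 0.25 ms step.
import numpy as np

a, b, c, d = 0.02, 0.2, -65.0, 8.0     # regular-spiking cortical cell
dt, T = 0.25, 1000.0                   # ms
v, u = -65.0, b * -65.0
spike_times = []

for step in range(int(T / dt)):
    t = step * dt
    I = 10.0 if t > 100.0 else 0.0     # step current injected at 100 ms
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:                      # spike cutoff and reset
        spike_times.append(t)
        v, u = c, u + d

print(f"{len(spike_times)} spikes; first few at {spike_times[:3]} ms")
```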

Nonlinear dynamical systems theory is at the core of computational neuroscience research, but it is not a standard part of the graduate neuroscience curriculum, nor is it taught by mathematics or physics departments in a way that is suitable for students of biology. This book offers neuroscience students and researchers a comprehensive account of concepts and methods increasingly used in computational neuroscience.

An additional chapter on synchronization, with more advanced material, can be found at the author's website, www.izhikevich.com.

ISBN 0262090430
Spiking Neuron Models: Single Neurons, Populations, Plasticity Wulfram Gerstner, Werner M. Kistler  

This introduction to spiking neurons can be used in advanced-level courses in computational neuroscience, theoretical biology, neural modeling, biophysics, or neural networks. It focuses on phenomenological approaches rather than detailed models in order to provide the reader with a conceptual framework. The authors formulate the theoretical concepts clearly without many mathematical details. While the book contains standard material for courses in computational neuroscience, neural modeling, or neural networks, it also provides an entry to current research. No prior knowledge beyond undergraduate mathematics is required.
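
The leaky integrate-and-fire neuron is a canonical example of the phenomenological models the book develops. A minimal sketch, with invented constants:

```python
# A minimal leaky integrate-and-fire neuron, the kind of phenomenological
# model the book builds on. Constants are illustrative.
import numpy as np

tau_m, R = 10.0, 10.0          # membrane time constant (ms), resistance (MOhm)
v_rest, v_th, v_reset = -70.0, -55.0, -75.0   # mV
dt, T, I = 0.1, 500.0, 2.0     # step (ms), duration (ms), injected current (nA)

v = v_rest
spikes = []
for step in range(int(T / dt)):
    v += dt / tau_m * (-(v - v_rest) + R * I)   # leaky integration
    if v >= v_th:                               # threshold crossing
        spikes.append(step * dt)
        v = v_reset                             # reset after the spike

print(f"firing rate: {len(spikes) / (T / 1000.0):.1f} Hz")
```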

ISBN 0521890799
Modeling Brain Function: The World of Attractor Neural Networks Daniel J. Amit  

Exploring one of the most exciting and potentially rewarding areas of scientific research, the study of the principles and mechanisms underlying brain function, this book introduces and explains the techniques brought from physics to the study of neural networks and the insights they have stimulated. Substantial progress in understanding memory, the learning process, and self-organization, achieved by studying the properties of models of neural networks, has resulted in the discovery of important parallels between the properties of statistical, nonlinear cooperative systems in physics and neural networks. The author presents a coherent, clear, and nontechnical view of all the basic ideas and results. More technical aspects are restricted to special sections and appendices in each chapter.
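
To make the attractor idea concrete, here is a minimal Hopfield-style network in the spirit of the models the book analyzes; the network size, number of stored patterns, and noise level are illustrative.

```python
# A minimal attractor network: binary units, Hebbian storage, and
# asynchronous updates that relax a corrupted cue back to a stored
# pattern. Sizes and noise level are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
N, P = 200, 5                                  # units, stored patterns
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian (outer-product) synaptic matrix, no self-connections.
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)

# Start from a corrupted version of pattern 0 and let the dynamics run.
state = patterns[0] * np.where(rng.random(N) < 0.2, -1, 1)  # 20% of bits flipped
for _ in range(10 * N):                        # asynchronous updates
    i = rng.integers(N)
    state[i] = 1 if W[i] @ state >= 0 else -1

overlap = (state @ patterns[0]) / N            # 1.0 = perfect recall
print(f"overlap with stored pattern: {overlap:.2f}")
```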

ISBN 0521421241
Corticonics: Neural Circuits of the Cerebral Cortex M. Abeles  

This book combines anatomical and physiological studies with mathematical and computer modeling to obtain a quantitative description of cortical function. The material is presented didactically: it starts with descriptive anatomy and comprehensively examines all aspects of modeling. The book gradually leads the reader from a macroscopic view of cortical anatomy and a description of the standard electrophysiological properties of single neurons to neural network models and chains of synchronously firing neurons. Along the way, the most modern trends in neural network modeling are explored.
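
A toy numerical sketch of the synfire idea, the book's signature network motif: a volley of near-synchronous spikes either propagates stably from pool to pool or dies out. All numbers are invented for illustration.

```python
# A minimal sketch of synfire-chain transmission: a volley of synchronous
# spikes is passed through successive pools of neurons, each neuron firing
# if enough spikes from the previous pool reach it. Illustrative numbers.
import numpy as np

rng = np.random.default_rng(2)
pool_size, n_pools = 100, 10
p_connect, threshold = 0.5, 25        # connection probability, spikes needed

volley = 60                           # neurons firing in the first pool
history = [volley]
for _ in range(n_pools - 1):
    # Each downstream neuron receives a binomial sample of the volley;
    # it fires if its synaptic input reaches threshold.
    inputs = rng.binomial(volley, p_connect, size=pool_size)
    volley = int((inputs >= threshold).sum())
    history.append(volley)

print("volley size per pool:", history)   # stable propagation or die-out
```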

ISBN 0521376173
Theory of Cortical Plasticity Leon N. Cooper, Nathan Intrator, Brian S. Blais, Harel Z. Shouval  

This book presents a theory of cortical plasticity and shows how that theory leads to experiments that test both its assumptions and its consequences.
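
For flavor, here is a minimal sketch of a BCM-style synaptic modification rule of the kind the theory rests on: the sign of the weight change depends on whether the postsynaptic response exceeds a sliding threshold that tracks the recent average of the squared response. This is one common textbook form with invented constants, not necessarily the book's exact formulation.

```python
# A minimal BCM-style plasticity rule: potentiate when the postsynaptic
# response y exceeds the sliding threshold theta, depress otherwise, with
# theta tracking a running average of y^2. Illustrative constants.
import numpy as np

rng = np.random.default_rng(3)
n_inputs, eta, tau_theta = 10, 0.005, 50.0

w = rng.random(n_inputs) * 0.1
theta = 1.0
for _ in range(5000):
    x = rng.random(n_inputs)             # presynaptic activity pattern
    y = float(w @ x)                     # linear postsynaptic response
    w += eta * y * (y - theta) * x       # potentiate if y > theta, else depress
    w = np.clip(w, 0.0, None)            # keep weights non-negative
    theta += (y**2 - theta) / tau_theta  # sliding modification threshold

print("final weights:", np.round(w, 2), " theta:", round(theta, 2))
```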

ISBN 9812387919
Parallel Distributed Processing, Vol. 1: Foundations David E. Rumelhart, James L. McClelland, PDP Research Group  

What makes people smarter than computers? These volumes by a pioneering neurocomputing group suggest that the answer lies in the massively parallel architecture of the human mind. They describe a new theory of cognition called connectionism that is challenging the idea of symbolic computation that has traditionally been at the center of debate in theoretical discussions about the mind.

The authors' theory assumes the mind is composed of a great number of elementary units connected in a neural network. Mental processes are interactions between these units, which excite and inhibit each other in parallel rather than in sequential operations. In this context, knowledge can no longer be thought of as stored in localized structures; instead, it consists of the connections between pairs of units that are distributed throughout the network.

Volume 1 lays the foundations of this exciting theory of parallel distributed processing, while Volume 2 applies it to a number of specific issues in cognitive science and neuroscience, with chapters describing models of aspects of perception, memory, language, and thought.

David E. Rumelhart is Professor of Psychology at the University of California, San Diego. James L. McClelland is Professor of Psychology at Carnegie-Mellon University.
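
A toy sketch of the unit dynamics described above: a handful of units excite and inhibit one another through weighted connections, and the whole pattern of activation relaxes to a stable state. The network and all constants are invented for illustration.

```python
# A minimal sketch of parallel unit dynamics: units pass graded activation
# through weighted excitatory and inhibitory connections, and the network
# settles by repeated relaxation updates. Invented network and constants.
import numpy as np

# Two mutually supportive units (0, 1) and a rival unit (2) that
# competes with both via inhibitory weights.
W = np.array([[ 0.0,  0.6, -0.6],
              [ 0.6,  0.0, -0.6],
              [-0.6, -0.6,  0.0]])
external = np.array([0.5, 0.4, 0.45])      # external evidence for each unit

act = np.zeros(3)
for _ in range(100):                       # synchronous relaxation updates
    net = W @ act + external
    act += 0.1 * (np.tanh(net) - act)      # leaky update toward squashed input

print("settled activations:", np.round(act, 2))
```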

ISBN 026268053X
Parallel Distributed Processing, Vol. 2: Psychological and Biological Models James L. McClelland, David E. Rumelhart, PDP Research Group

Volume 2 applies the theory of parallel distributed processing developed in Volume 1 to a number of specific issues in cognitive science and neuroscience, with chapters describing models of aspects of perception, memory, language, and thought.

ISBN 0262631105
Computational Explorations in Cognitive Neuroscience: Understanding the Mind by Simulating the Brain Randall C. O'Reilly, Yuko Munakata  

The goal of computational cognitive neuroscience is to understand how the brain embodies the mind by using biologically based computational models comprising networks of neuronlike units. This text, based on a course taught by Randall O'Reilly and Yuko Munakata over the past several years, provides an in-depth introduction to the main ideas in the field. The neural units in the simulations use equations based directly on the ion channels that govern the behavior of real neurons, and the neural networks incorporate anatomical and physiological properties of the neocortex. Thus the text provides the student with knowledge of the basic biology of the brain as well as the computational skills needed to simulate large-scale cognitive phenomena.

The text consists of two parts. The first part covers basic neural computation mechanisms: individual neurons, neural networks, and learning mechanisms. The second part covers large-scale brain area organization and cognitive phenomena: perception and attention, memory, language, and higher-level cognition. The second part is relatively self-contained and can be used separately for mechanistically oriented cognitive neuroscience courses. Integrated throughout the text are more than forty different simulation models, many of them full-scale research-grade models, with friendly interfaces and accompanying exercises. The simulation software (PDP++, available for all major platforms) and simulations can be downloaded free of charge from the Web. Exercise solutions are available, and the text includes full information on the software.
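
As a hint of what such simulations compute, here is a generic conductance-based point-unit update of the sort the text's models use: excitatory, leak, and inhibitory channels each contribute a conductance times a driving force. The constants below are invented, not taken from the book's software.

```python
# A generic conductance-based point-neuron update: the membrane potential
# is driven by excitatory, leak, and inhibitory currents, each a
# conductance times a driving force. Invented constants, normalized units.
E_e, E_l, E_i = 1.0, 0.3, 0.25   # reversal potentials (normalized units)
g_l, dt_vm = 0.1, 0.2            # leak conductance, integration step

v_m, g_i = 0.3, 0.1
for step in range(50):
    g_e = 0.4 if step >= 10 else 0.0                 # excitation switched on
    I_net = (g_e * (E_e - v_m) + g_l * (E_l - v_m)
             + g_i * (E_i - v_m))                    # net membrane current
    v_m += dt_vm * I_net

print(f"settled membrane potential: {v_m:.3f}")
```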

ISBN 0262650541
Semantic Cognition: A Parallel Distributed Processing Approach Timothy T. Rogers, James L. McClelland  

This groundbreaking monograph offers a mechanistic theory of the representation and use of semantic knowledge, integrating the strengths and overcoming many of the weaknesses of hierarchical, categorization-based approaches, similarity-based approaches, and the approach often called "theory theory." Building on earlier models by Geoffrey Hinton in the 1980s and David Rumelhart in the early 1990s, the authors propose that performance in semantic tasks arises through the propagation of graded signals in a system of interconnected processing units. The representations used in performing these tasks are patterns of activation across units, governed by weighted connections among them. Semantic knowledge is acquired through the gradual adjustment of the strengths of these connections in the course of day-to-day experience.

The authors show how a simple computational model proposed by Rumelhart exhibits a progressive differentiation of conceptual knowledge, paralleling aspects of cognitive development seen in the work of Frank Keil and Jean Mandler. The authors extend the model to address aspects of conceptual knowledge acquisition in infancy, disintegration of conceptual knowledge in dementia, "basic-level" effects and their interaction with expertise, and many findings introduced to support the idea that semantic cognition is guided by naive, domain-specific theories.
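
A minimal sketch in the spirit of the Rumelhart network discussed above: a small feedforward net gradually adjusts its connection weights so that item inputs come to produce the right attribute outputs. The toy items, attributes, and all constants are invented for illustration.

```python
# A small feedforward network trained by gradient descent to map items to
# attributes, illustrating experience-driven weight adjustment. The toy
# items/attributes and constants are invented.
import numpy as np

rng = np.random.default_rng(4)
items = np.eye(4)                           # four items, one-hot coded
attrs = np.array([[1, 1, 0, 0],             # toy attribute targets:
                  [1, 1, 0, 1],             # items 0,1 resemble each other,
                  [0, 0, 1, 0],             # as do items 2,3
                  [0, 0, 1, 1]], dtype=float)

W1 = rng.standard_normal((4, 8)) * 0.1      # item -> hidden weights
W2 = rng.standard_normal((8, 4)) * 0.1      # hidden -> attribute weights
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(3000):                   # gradual, experience-driven learning
    h = sig(items @ W1)
    out = sig(h @ W2)
    err = out - attrs
    # Backpropagate the graded error signal and nudge the weights.
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * items.T @ d_h

print(f"mean squared error after training: {np.mean(err**2):.4f}")
```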

ISBN 0262681579
Bayesian Brain: Probabilistic Approaches to Neural Coding Kenji Doya, Shin Ishii, Alexandre Pouget, Rajesh P. N. Rao

A Bayesian approach can contribute to an understanding of the brain on multiple levels: by giving normative predictions about how an ideal sensory system should combine prior knowledge and observation, by providing mechanistic interpretations of the dynamic functioning of brain circuits, and by suggesting optimal ways of deciphering experimental data. Bayesian Brain brings together contributions from both experimental and theoretical neuroscientists that examine the brain mechanisms of perception, decision making, and motor control according to the concepts of Bayesian estimation.

After an overview of the mathematical concepts, including Bayes' theorem, that are basic to understanding the approaches discussed, contributors discuss how Bayesian concepts can be used for interpretation of such neurobiological data as neural spikes and functional brain imaging. Next, contributors examine the modeling of sensory processing, including the neural coding of information about the outside world. Finally, contributors explore dynamic processes for proper behaviors, including the mathematics of the speed and accuracy of perceptual decisions and neural models of belief propagation.
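
The core idea of combining prior knowledge and observation can be shown in a few lines: with a Gaussian prior and a Gaussian likelihood, the posterior mean is a precision-weighted average of the prior mean and the observation. The numbers below are invented for illustration.

```python
# A worked example of Bayesian cue combination with Gaussians: the
# posterior mean weights the observation by its relative precision.
mu_prior, var_prior = 0.0, 4.0     # prior belief about a stimulus value
x_obs, var_obs = 2.0, 1.0          # noisy sensory observation

w = (1 / var_obs) / (1 / var_obs + 1 / var_prior)   # weight on the data
mu_post = w * x_obs + (1 - w) * mu_prior
var_post = 1 / (1 / var_obs + 1 / var_prior)

print(f"posterior: mean {mu_post:.2f}, variance {var_post:.2f}")
# mean 1.60, variance 0.80: the estimate is pulled toward the reliable cue
```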

ISBN 026204238X
Probabilistic Models of the Brain: Perception and Neural Function Rajesh P. N. Rao, Bruno A. Olshausen, Michael S. Lewicki  

Neurophysiological, neuroanatomical, and brain imaging studies have helped to shed light on how the brain transforms raw sensory information into a form that is useful for goal-directed behavior. A fundamental question that is seldom addressed by these studies, however, is why the brain uses the types of representations it does and what evolutionary advantage, if any, these representations confer. It is difficult to address such questions directly via animal experiments. A promising alternative is to use probabilistic principles such as maximum likelihood and Bayesian inference to derive models of brain function.

This book surveys some of the current probabilistic approaches to modeling and understanding brain function. Although most of the examples focus on vision, many of the models and techniques are applicable to other modalities as well. The book presents top-down computational models as well as bottom-up, neurally motivated models of brain function. The topics covered include Bayesian and information-theoretic models of perception, probabilistic theories of neural coding and spike timing, computational models of lateral and cortico-cortical feedback connections, and the development of receptive field properties from natural signals.
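
As a small example of the maximum-likelihood approach mentioned above, here is a sketch of decoding a stimulus from the spike counts of a population of Poisson neurons with Gaussian tuning curves; all tuning parameters are invented.

```python
# Maximum-likelihood decoding from a Poisson population: scan candidate
# stimuli and keep the one that best explains the observed spike counts.
# Tuning parameters are invented for illustration.
import numpy as np

rng = np.random.default_rng(5)
prefs = np.linspace(-10, 10, 21)            # preferred stimuli of 21 neurons
width, r_max, T = 2.0, 40.0, 0.5            # tuning width, peak rate, window (s)

def rates(s):
    return r_max * np.exp(-0.5 * ((s - prefs) / width) ** 2)

s_true = 3.2
counts = rng.poisson(rates(s_true) * T)     # observed spike counts

# Poisson log-likelihood (dropping terms constant in s):
#   sum_i [ n_i * log(f_i(s) * T) - f_i(s) * T ]
grid = np.linspace(-10, 10, 2001)
loglik = [np.sum(counts * np.log(rates(s) * T + 1e-12) - rates(s) * T)
          for s in grid]
s_hat = grid[int(np.argmax(loglik))]
print(f"true stimulus {s_true}, ML estimate {s_hat:.2f}")
```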

ISBN 0262182246
Methods in Neuronal Modeling, 2nd Edition: From Ions to Networks Christof Koch, Idan Segev

Much research focuses on the question of how information is processed in nervous systems, from the level of individual ionic channels to large-scale neuronal networks, and from "simple" animals such as sea slugs and flies to cats and primates. New interdisciplinary methodologies combine a bottom-up experimental methodology with the more top-down-driven computational and modeling approach. This book serves as a handbook of computational methods and techniques for modeling the functional properties of single nerve cells and groups of nerve cells.

The contributors highlight several key trends: (1) the tightening link between analytical/numerical models and the associated experimental data, (2) the broadening of modeling methods, at both the subcellular level and the level of large neuronal networks that incorporate real biophysical properties of neurons as well as the statistical properties of spike trains, and (3) the organization of the data gained by physical emulation of nervous system components through the use of very large-scale integration (VLSI) technology.

The field of neuroscience has grown dramatically since the first edition of this book was published nine years ago. Half of the chapters of the second edition are completely new; the remaining ones have all been thoroughly revised. Many chapters provide an opportunity for interactive tutorials and simulation programs; they can be accessed via Christof Koch's Website.

Contributors: Larry F. Abbott, Paul R. Adams, Hagai Agmon-Snir, James M. Bower, Robert E. Burke, Erik de Schutter, Alain Destexhe, Rodney Douglas, Bard Ermentrout, Fabrizio Gabbiani, David Hansel, Michael Hines, Christof Koch, Misha Mahowald, Zachary F. Mainen, Eve Marder, Michael V. Mascagni, Alexander D. Protopapas, Wilfrid Rall, John Rinzel, Idan Segev, Terrence J. Sejnowski, Shihab Shamma, Arthur S. Sherman, Paul Smolen, Haim Sompolinsky, Michael Vanier, Walter M. Yamada.
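
In the spirit of the book's compartmental-modeling material, here is a minimal sketch of a passive soma coupled to a single dendritic compartment, integrated with forward Euler; all parameters are illustrative.

```python
# A minimal two-compartment passive model: a soma and a dendrite, each
# with a leak conductance, coupled by an axial conductance. Forward Euler
# integration; illustrative parameters.
C, g_leak, E_leak = 1.0, 0.1, -65.0   # uF/cm^2, mS/cm^2, mV
g_couple = 0.05                       # soma-dendrite coupling (mS/cm^2)
dt, T = 0.025, 200.0                  # ms

v_soma = v_dend = E_leak
for step in range(int(T / dt)):
    I_inj = 0.5 if step * dt > 50.0 else 0.0        # current into the dendrite
    i_s = g_leak * (E_leak - v_soma) + g_couple * (v_dend - v_soma)
    i_d = g_leak * (E_leak - v_dend) + g_couple * (v_soma - v_dend) + I_inj
    v_soma += dt * i_s / C
    v_dend += dt * i_d / C

print(f"steady state: soma {v_soma:.2f} mV, dendrite {v_dend:.2f} mV")
```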

ISBN 0262112310
Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems Peter Dayan, Laurence F. Abbott  

Theoretical neuroscience provides a quantitative basis for describing what nervous systems do, determining how they function, and uncovering the general principles by which they operate. This text introduces the basic mathematical and computational methods of theoretical neuroscience and presents applications in a variety of areas, including vision, sensory-motor integration, development, learning, and memory.

The book is divided into three parts. Part I discusses the relationship between sensory stimuli and neural responses, focusing on the representation of information by the spiking activity of neurons. Part II discusses the modeling of neurons and neural circuits on the basis of cellular and synaptic biophysics. Part III analyzes the role of plasticity in development and learning. An appendix covers the mathematical methods used, and exercises are available on the book's Web site.
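
A small sketch of the kind of material in Part I: generate spikes from a time-varying Poisson rate, then recover the rate by smoothing the spike train with a Gaussian kernel. Constants are illustrative.

```python
# Generate spikes from an inhomogeneous Poisson process, then estimate
# the underlying firing rate with a Gaussian smoothing kernel.
# Illustrative constants.
import numpy as np

rng = np.random.default_rng(6)
dt, T = 0.001, 10.0                              # 1 ms bins, 10 s
t = np.arange(0.0, T, dt)
rate = 30.0 + 20.0 * np.sin(2 * np.pi * 0.5 * t) # true rate (spikes/s)
spikes = (rng.random(t.size) < rate * dt).astype(float)

# Kernel estimate of the firing rate: convolve spikes with a 100 ms
# Gaussian, normalized so the result is in spikes per second.
sigma = 0.1
k_t = np.arange(-4 * sigma, 4 * sigma, dt)
kernel = np.exp(-0.5 * (k_t / sigma) ** 2)
kernel /= kernel.sum() * dt
rate_hat = np.convolve(spikes, kernel, mode="same")

err = np.sqrt(np.mean((rate_hat - rate) ** 2))
print(f"RMS error of the rate estimate: {err:.1f} spikes/s")
```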

ISBN 0262541858