2024

Representational drift as a result of implicit regularization

Ratzon A, Derdikman D, Barak O. Representational drift as a result of implicit regularization. eLife. 2024 May 2;12. https://doi.org/10.7554/eLife.90069
 

Recent studies show that, even in constant environments, the tuning of single neurons changes over time in a variety of brain regions. This representational drift has been suggested to be a consequence of continuous learning under noise, but its properties are still not fully understood. To investigate the underlying mechanism, we trained an artificial network on a simplified navigational task. The network quickly reached a state of high performance, and many units exhibited spatial tuning. We then continued training the network and noticed that the activity became sparser with time. Initial learning was orders of magnitude faster than ensuing sparsification. This sparsification is consistent with recent results in machine learning, in which networks slowly move within their solution space until they reach a flat area of the loss function. We analyzed four datasets from different labs, all demonstrating that CA1 neurons become sparser and more spatially informative with exposure to the same environment. We conclude that learning is divided into three overlapping phases: (i) Fast familiarity with the environment; (ii) slow implicit regularization; and (iii) a steady state of null drift. The variability in drift dynamics opens the possibility of inferring learning algorithms from observations of drift statistics.
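
The sparsification dynamic described above can be illustrated with a toy simulation (a minimal sketch under assumed settings: a one-hidden-layer ReLU regression trained with label noise, not the paper's navigational task or architecture). The error converges quickly, after which the fraction of active hidden units tends to drift down slowly, mirroring the separation between fast learning and slow implicit regularization.

import numpy as np

rng = np.random.default_rng(0)

# Assumed toy task: map a 1D position to a place-like (Gaussian bump) response.
X = rng.uniform(-1, 1, size=(256, 1))
Y = np.exp(-((X - 0.3) ** 2) / 0.02)

n_hidden = 100
W1 = rng.normal(0, 1, (1, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, 1))
lr, label_noise = 0.05, 0.1

for step in range(30001):
    H = np.maximum(0, X @ W1 + b1)                                 # ReLU hidden layer
    err = H @ W2 - (Y + label_noise * rng.normal(size=Y.shape))    # noisy targets
    # Plain gradient descent on the squared error
    dW2 = H.T @ err / len(X)
    dH = err @ W2.T * (H > 0)
    W1 -= lr * (X.T @ dH / len(X))
    b1 -= lr * dH.mean(axis=0)
    W2 -= lr * dW2
    if step % 5000 == 0:
        mse = float(((H @ W2 - Y) ** 2).mean())
        frac_active = float((H > 1e-6).any(axis=0).mean())         # units used by some input
        print(step, round(mse, 4), round(frac_active, 2))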

@article{3d749472a7f64f039013c366773c35a9,
title = "Representational drift as a result of implicit regularization",
abstract = "Recent studies show that, even in constant environments, the tuning of single neurons changes over time in a variety of brain regions. This representational drift has been suggested to be a consequence of continuous learning under noise, but its properties are still not fully understood. To investigate the underlying mechanism, we trained an artificial network on a simplified navigational task. The network quickly reached a state of high performance, and many units exhibited spatial tuning. We then continued training the network and noticed that the activity became sparser with time. Initial learning was orders of magnitude faster than ensuing sparsification. This sparsification is consistent with recent results in machine learning, in which networks slowly move within their solution space until they reach a flat area of the loss function. We analyzed four datasets from different labs, all demonstrating that CA1 neurons become sparser and more spatially informative with exposure to the same environment. We conclude that learning is divided into three overlapping phases: (i) Fast familiarity with the environment; (ii) slow implicit regularization; and (iii) a steady state of null drift. The variability in drift dynamics opens the possibility of inferring learning algorithms from observations of drift statistics.",
keywords = "artificial neural network, CA1, mouse, neuroscience, noise, regularization, representational drift, theoretical neuroscience, Neurons/physiology, Neural Networks, Computer, Rats, Machine Learning, Learning, Animals, CA1 Region, Hippocampal/physiology",
author = "Aviv Ratzon and Dori Derdikman and Omri Barak",
note = "{\textcopyright} 2023, Ratzon et al.",
year = "2024",
month = may,
day = "2",
doi = "10.7554/eLife.90069",
language = "English",
volume = "12",
journal = "eLife",
issn = "2050-084X",
publisher = "eLife Sciences Publications",

}

Trained recurrent neural networks develop phase-locked limit cycles in a working memory task

Pals M, Macke JH, Barak O. Trained recurrent neural networks develop phase-locked limit cycles in a working memory task. PLoS Computational Biology. 2024 Feb 5;20(2):e1011852. https://doi.org/10.1371/journal.pcbi.1011852
 

Neural oscillations are ubiquitously observed in many brain areas. One proposed functional role of these oscillations is that they serve as an internal clock, or ‘frame of reference’. Information can be encoded by the timing of neural activity relative to the phase of such oscillations. In line with this hypothesis, there have been multiple empirical observations of such phase codes in the brain. Here we ask: What kind of neural dynamics support phase coding of information with neural oscillations? We tackled this question by analyzing recurrent neural networks (RNNs) that were trained on a working memory task. The networks were given access to an external reference oscillation and tasked to produce an oscillation, such that the phase difference between the reference and output oscillation maintains the identity of transient stimuli. We found that networks converged to stable oscillatory dynamics. Reverse engineering these networks revealed that each phase-coded memory corresponds to a separate limit cycle attractor. We characterized how the stability of the attractor dynamics depends on both reference oscillation amplitude and frequency, properties that can be experimentally observed. To understand the connectivity structures that underlie these dynamics, we showed that trained networks can be described as two phase-coupled oscillators. Using this insight, we condensed our trained networks to a reduced model consisting of two functional modules: One that generates an oscillation and one that implements a coupling function between the internal oscillation and external reference. In summary, by reverse engineering the dynamics and connectivity of trained RNNs, we propose a mechanism by which neural networks can harness reference oscillations for working memory. Specifically, we propose that a phase-coding network generates autonomous oscillations which it couples to an external reference oscillation in a multi-stable fashion.
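
The reduced two-oscillator description in the abstract can be sketched with a toy phase model (an illustration with assumed parameters, not the coupling function extracted from the trained RNNs): an internal oscillator runs at the reference frequency and couples to the reference through a periodic function of the phase difference, so that several phase offsets, one per memory, become stable.

import numpy as np

n_memories = 3                          # number of stable phase offsets (assumed)
K, omega = 0.5, 2 * np.pi * 8.0         # coupling strength and 8 Hz reference (assumed)
dt, steps = 1e-3, 5000

def final_phase_difference(psi0):
    theta_ref, theta_int = 0.0, -psi0   # start with phase difference psi0
    for _ in range(steps):
        theta_ref += omega * dt
        # Internal oscillator: reference frequency plus a coupling term whose period
        # in the phase difference sets the number of stable memories.
        theta_int += (omega + K * np.sin(n_memories * (theta_ref - theta_int))) * dt
    return (theta_ref - theta_int) % (2 * np.pi)

# Different initial phase differences flow to one of n_memories stable offsets
# (here 0, 2*pi/3, 4*pi/3), each playing the role of a separate memory state.
for psi0 in np.linspace(0, 2 * np.pi, 7, endpoint=False):
    print(round(psi0, 2), "->", round(final_phase_difference(psi0), 2))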

@article{61be3217d6b6421d9e2c5bc424251fb2,
title = "Trained recurrent neural networks develop phase-locked limit cycles in a working memory task",
abstract = "Neural oscillations are ubiquitously observed in many brain areas. One proposed functional role of these oscillations is that they serve as an internal clock, or {\textquoteleft}frame of reference{\textquoteright}. Information can be encoded by the timing of neural activity relative to the phase of such oscillations. In line with this hypothesis, there have been multiple empirical observations of such phase codes in the brain. Here we ask: What kind of neural dynamics support phase coding of information with neural oscillations? We tackled this question by analyzing recurrent neural networks (RNNs) that were trained on a working memory task. The networks were given access to an external reference oscillation and tasked to produce an oscillation, such that the phase difference between the reference and output oscillation maintains the identity of transient stimuli. We found that networks converged to stable oscillatory dynamics. Reverse engineering these networks revealed that each phase-coded memory corresponds to a separate limit cycle attractor. We characterized how the stability of the attractor dynamics depends on both reference oscillation amplitude and frequency, properties that can be experimentally observed. To understand the connectivity structures that underlie these dynamics, we showed that trained networks can be described as two phase-coupled oscillators. Using this insight, we condensed our trained networks to a reduced model consisting of two functional modules: One that generates an oscillation and one that implements a coupling function between the internal oscillation and external reference. In summary, by reverse engineering the dynamics and connectivity of trained RNNs, we propose a mechanism by which neural networks can harness reference oscillations for working memory. Specifically, we propose that a phase-coding network generates autonomous oscillations which it couples to an external reference oscillation in a multi-stable fashion.",
keywords = "Memory, Short-Term, Brain, Neural Networks, Computer",
author = "Matthijs Pals and Macke, {Jakob H.} and Omri Barak",
note = "Copyright: {\textcopyright} 2024 Pals et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.",
year = "2024",
month = feb,
day = "5",
doi = "10.1371/journal.pcbi.1011852",
language = "English",
volume = "20",
pages = "e1011852",
journal = "PLoS Computational Biology",
issn = "1553-734X",
publisher = "Public Library of Science",
number = "2",

}

Revealing and reshaping attractor dynamics in large networks of cortical neurons

Beer C, Barak O. Revealing and reshaping attractor dynamics in large networks of cortical neurons. PLoS Computational Biology. 2024 Jan 19;20(1):e1011784. https://doi.org/10.1371/journal.pcbi.1011784
 

Attractors play a key role in a wide range of processes including learning and memory. Due to recent innovations in recording methods, there is increasing evidence for the existence of attractor dynamics in the brain. Yet, our understanding of how these attractors emerge or disappear in a biological system is lacking. By following the spontaneous network bursts of cultured cortical networks, we are able to define a vocabulary of spatiotemporal patterns and show that they function as discrete attractors in the network dynamics. We show that electrically stimulating specific attractors eliminates them from the spontaneous vocabulary, while they are still robustly evoked by the electrical stimulation. This seemingly paradoxical finding can be explained by a Hebbian-like strengthening of specific pathways into the attractors, at the expense of weakening non-evoked pathways into the same attractors. We verify this hypothesis and provide a mechanistic explanation for the underlying changes supporting this effect.
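
As a reminder of what discrete attractors mean in a network setting, here is a textbook Hopfield-style toy (purely illustrative; the paper analyses spontaneous and evoked activity in cultured networks, not this model): a few stored patterns act as attractors, and a perturbed state relaxes back to the nearest stored pattern.

import numpy as np

rng = np.random.default_rng(1)
N, P = 100, 3
patterns = rng.choice([-1, 1], size=(P, N))          # three stored activity patterns
W = patterns.T @ patterns / N                        # Hebbian connectivity
np.fill_diagonal(W, 0)

# Start near pattern 0 with 15% of units flipped; asynchronous updates pull the
# state back to the stored pattern, i.e. the pattern behaves as a discrete attractor.
state = patterns[0].copy()
state[rng.choice(N, 15, replace=False)] *= -1
for _ in range(5):
    for i in rng.permutation(N):
        state[i] = 1 if W[i] @ state >= 0 else -1
print("overlap with stored pattern:", state @ patterns[0] / N)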

@article{f5621078a49b488da4f1d097a79454c5,
title = "Revealing and reshaping attractor dynamics in large networks of cortical neurons",
abstract = "Attractors play a key role in a wide range of processes including learning and memory. Due to recent innovations in recording methods, there is increasing evidence for the existence of attractor dynamics in the brain. Yet, our understanding of how these attractors emerge or disappear in a biological system is lacking. By following the spontaneous network bursts of cultured cortical networks, we are able to define a vocabulary of spatiotemporal patterns and show that they function as discrete attractors in the network dynamics. We show that electrically stimulating specific attractors eliminates them from the spontaneous vocabulary, while they are still robustly evoked by the electrical stimulation. This seemingly paradoxical finding can be explained by a Hebbian-like strengthening of specific pathways into the attractors, at the expense of weakening non-evoked pathways into the same attractors. We verify this hypothesis and provide a mechanistic explanation for the underlying changes supporting this effect.",
keywords = "Neurons/physiology, Learning/physiology, Brain",
author = "Chen Beer and Omri Barak",
note = "Copyright: {\textcopyright} 2024 Beer, Barak. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.",
year = "2024",
month = jan,
day = "19",
doi = "10.1371/journal.pcbi.1011784",
language = "English",
volume = "20",
pages = "e1011784",
journal = "PLoS Computational Biology",
issn = "1553-734X",
publisher = "Public Library of Science",
number = "1",

}

2023

Active experience, not time, determines within-day representational drift in dorsal CA1

Khatib D, Ratzon A, Sellevoll M, Barak O, Morris G, Derdikman D. Active experience, not time, determines within-day representational drift in dorsal CA1. Neuron. 2023 Aug 2;111(15):2348-2356.e5. https://doi.org/10.1016/j.neuron.2023.05.014
 

Memories of past events can be recalled long after the event, indicating stability. But new experiences are also integrated into existing memories, indicating plasticity. In the hippocampus, spatial representations are known to remain stable but have also been shown to drift over long periods of time. We hypothesized that experience, more than the passage of time, is the driving force behind representational drift. We compared the within-day stability of place cells’ representations in dorsal CA1 of the hippocampus of mice traversing two similar, familiar tracks for different durations. We found that the more time the animals spent actively traversing the environment, the greater the representational drift, regardless of the total elapsed time between visits. Our results suggest that spatial representation is a dynamic process, related to the ongoing experiences within a specific context, and is related to memory update rather than to passive forgetting.

@article{7ea33e0a9a5b488e869c3982eb73148e,
title = "Active experience, not time, determines within-day representational drift in dorsal CA1",
abstract = "Memories of past events can be recalled long after the event, indicating stability. But new experiences are also integrated into existing memories, indicating plasticity. In the hippocampus, spatial representations are known to remain stable but have also been shown to drift over long periods of time. We hypothesized that experience, more than the passage of time, is the driving force behind representational drift. We compared the within-day stability of place cells{\textquoteright} representations in dorsal CA1 of the hippocampus of mice traversing two similar, familiar tracks for different durations. We found that the more time the animals spent actively traversing the environment, the greater the representational drift, regardless of the total elapsed time between visits. Our results suggest that spatial representation is a dynamic process, related to the ongoing experiences within a specific context, and is related to memory update rather than to passive forgetting.",
keywords = "CA1, hippocampus, one-photon Ca2+ imaging, place cells, reconsolidation, remapping, representational drift, Mental Recall, Gravitation, Animals, Mice, Hippocampus, Place Cells",
author = "Dorgham Khatib and Aviv Ratzon and Mariell Sellevoll and Omri Barak and Genela Morris and Dori Derdikman",
note = "Copyright {\textcopyright} 2023 Elsevier Inc. All rights reserved.",
year = "2023",
month = aug,
day = "2",
doi = "10.1016/j.neuron.2023.05.014",
language = "English",
volume = "111",
pages = "2348--2356.e5",
journal = "Neuron",
issn = "0896-6273",
publisher = "Cell Press",
number = "15",

}

Mathematical models of learning and what can be learned from them

Barak O, Tsodyks M. Mathematical models of learning and what can be learned from them. Current Opinion in Neurobiology. 2023 Jun;80:102721. https://doi.org/10.1016/j.conb.2023.102721
 

Learning is a multi-faceted phenomenon of critical importance and hence attracted a great deal of research, both experimental and theoretical. In this review, we will consider some of the paradigmatic examples of learning and discuss the common themes in theoretical learning research, such as levels of modeling and their corresponding relation to experimental observations and mathematical ideas common to different types of learning.

@article{27771596204c474ea66ab0134f16040c,
title = "Mathematical models of learning and what can be learned from them",
abstract = "Learning is a multi-faceted phenomenon of critical importance and hence attracted a great deal of research, both experimental and theoretical. In this review, we will consider some of the paradigmatic examples of learning and discuss the common themes in theoretical learning research, such as levels of modeling and their corresponding relation to experimental observations and mathematical ideas common to different types of learning.",
author = "Omri Barak and Misha Tsodyks",
note = "Publisher Copyright: {\textcopyright} 2023 Elsevier Ltd",
year = "2023",
month = jun,
doi = "10.1016/j.conb.2023.102721",
language = "English",
volume = "80",
journal = "Current Opinion in Neurobiology",
issn = "0959-4388",
publisher = "Elsevier Ltd.",

}

Identifying regulation with adversarial surrogates

Teichner R, Shomar A, Barak O, Brenner N, Marom S, Meir R et al. Identifying regulation with adversarial surrogates. Proceedings of the National Academy of Sciences of the United States of America. 2023 Mar 15;120(12):e2216805120. https://doi.org/10.1073/pnas.2216805120
 

Homeostasis, the ability to maintain a relatively constant internal environment in the face of perturbations, is a hallmark of biological systems. It is believed that this constancy is achieved through multiple internal regulation and control processes. Given observations of a system, or even a detailed model of one, it is both valuable and extremely challenging to extract the control objectives of the homeostatic mechanisms. In this work, we develop a robust data-driven method to identify these objectives, namely to understand: “what does the system care about?”. We propose an algorithm, Identifying Regulation with Adversarial Surrogates (IRAS), that receives an array of temporal measurements of the system and outputs a candidate for the control objective, expressed as a combination of observed variables. IRAS is an iterative algorithm consisting of two competing players. The first player, realized by an artificial deep neural network, aims to minimize a measure of invariance we refer to as the coefficient of regulation. The second player aims to render the task of the first player more difficult by forcing it to extract information about the temporal structure of the data, which is absent from similar “surrogate” data. We test the algorithm on four synthetic and one natural data set, demonstrating excellent empirical results. Interestingly, our approach can also be used to extract conserved quantities, e.g., energy and momentum, in purely physical systems, as we demonstrate empirically.
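
To make the ingredients concrete, here is a deliberately stripped-down sketch of the idea (assumed toy data and a random search in place of the paper's deep network and adversarial surrogate player): find a combination of observed variables whose spread along the real trajectory is small relative to its spread on independently shuffled surrogate data, i.e. minimize a crude coefficient of regulation.

import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 20, 2000)
x, y = np.cos(t), np.sin(t)                       # toy system: harmonic oscillator

def features(u, v):
    # Candidate observables from which the regulated combination is built
    return np.stack([u**2, v**2, u*v, u, v], axis=1)

real = features(x, y)
# Surrogate data: shuffle each variable independently, destroying the joint
# temporal structure while keeping the marginal statistics.
surr = features(rng.permutation(x), rng.permutation(y))

def coefficient_of_regulation(w):
    return real.dot(w).std() / (surr.dot(w).std() + 1e-12)

best_w, best_cr = None, np.inf
for _ in range(20000):                            # crude random search over combinations
    w = rng.normal(size=5)
    w /= np.linalg.norm(w)
    cr = coefficient_of_regulation(w)
    if cr < best_cr:
        best_w, best_cr = w, cr

# The best combination should put equal-signed weight on x^2 and y^2, i.e. it
# recovers the conserved energy x^2 + y^2 of the oscillator.
print(np.round(best_w, 2), round(best_cr, 3))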

@article{8ffd15f8a1d24c359e03db7feb722ef3,
title = "Identifying regulation with adversarial surrogates",
abstract = "Homeostasis, the ability to maintain a relatively constant internal environment in the face of perturbations, is a hallmark of biological systems. It is believed that this constancy is achieved through multiple internal regulation and control processes. Given observations of a system, or even a detailed model of one, it is both valuable and extremely challenging to extract the control objectives of the homeostatic mechanisms. In this work, we develop a robust data-driven method to identify these objectives, namely to understand: “what does the system care about?”. We propose an algorithm, Identifying Regulation with Adversarial Surrogates (IRAS), that receives an array of temporal measurements of the system and outputs a candidate for the control objective, expressed as a combination of observed variables. IRAS is an iterative algorithm consisting of two competing players. The first player, realized by an artificial deep neural network, aims to minimize a measure of invariance we refer to as the coefficient of regulation. The second player aims to render the task of the first player more difficult by forcing it to extract information about the temporal structure of the data, which is absent from similar “surrogate” data. We test the algorithm on four synthetic and one natural data set, demonstrating excellent empirical results. Interestingly, our approach can also be used to extract conserved quantities, e.g., energy and momentum, in purely physical systems, as we demonstrate empirically.",
keywords = "artificial neural networks, biological control, biological regulation, computational biology, data analysis, Algorithms, Homeostasis",
author = "Ron Teichner and Aseel Shomar and Omri Barak and Naama Brenner and Shimon Marom and Ron Meir and Danny Eytan",
note = "Publisher Copyright: Copyright {\textcopyright} 2023 the Author(s).",
year = "2023",
month = mar,
day = "15",
doi = "10.1073/pnas.2216805120",
language = "English",
volume = "120",
pages = "e2216805120",
journal = "Proceedings of the National Academy of Sciences of the United States of America",
issn = "0027-8424",
publisher = "National Academy of Sciences",
number = "12",

}

The Simplicity Bias in Multi-Task RNNs: Shared Attractors, Reuse of Dynamics, and Geometric Representation

Turner E, Barak O. The Simplicity Bias in Multi-Task RNNs: Shared Attractors, Reuse of Dynamics, and Geometric Representation. Advances in Neural Information Processing Systems. 2023;36.
 

How does a single interconnected neural population perform multiple tasks, each with its own dynamical requirements? The relation between task requirements and neural dynamics in Recurrent Neural Networks (RNNs) has been investigated for single tasks. The forces shaping joint dynamics of multiple tasks, however, are largely unexplored. In this work, we first construct a systematic framework to study multiple tasks in RNNs, minimizing interference from input and output correlations with the hidden representation. This allows us to reveal how RNNs tend to share attractors and reuse dynamics, a tendency we define as the "simplicity bias". We find that RNNs develop attractors sequentially during training, preferentially reusing existing dynamics and opting for simple solutions when possible. This sequenced emergence and preferential reuse encapsulate the simplicity bias. Through concrete examples, we demonstrate that new attractors primarily emerge due to task demands or architectural constraints, illustrating a balance between simplicity bias and external factors. We examine the geometry of joint representations within a single attractor, by constructing a family of tasks from a set of functions. We show that the steepness of the associated functions controls their alignment within the attractor. This arrangement again highlights the simplicity bias, as points with similar input spacings undergo comparable transformations to reach the shared attractor. Our findings propose compelling applications. The geometry of shared attractors might allow us to infer the nature of unknown tasks. Furthermore, the simplicity bias implies that without specific incentives, modularity in RNNs may not spontaneously emerge, providing insights into the conditions required for network specialization.

@article{c2f9c3ac56a14dafa31b22ca751cdf3c,
title = "The Simplicity Bias in Multi-Task RNNs: Shared Attractors, Reuse of Dynamics, and Geometric Representation",
abstract = "How does a single interconnected neural population perform multiple tasks, each with its own dynamical requirements? The relation between task requirements and neural dynamics in Recurrent Neural Networks (RNNs) has been investigated for single tasks. The forces shaping joint dynamics of multiple tasks, however, are largely unexplored. In this work, we first construct a systematic framework to study multiple tasks in RNNs, minimizing interference from input and output correlations with the hidden representation. This allows us to reveal how RNNs tend to share attractors and reuse dynamics, a tendency we define as the {"}simplicity bias{"}. We find that RNNs develop attractors sequentially during training, preferentially reusing existing dynamics and opting for simple solutions when possible. This sequenced emergence and preferential reuse encapsulate the simplicity bias. Through concrete examples, we demonstrate that new attractors primarily emerge due to task demands or architectural constraints, illustrating a balance between simplicity bias and external factors. We examine the geometry of joint representations within a single attractor, by constructing a family of tasks from a set of functions. We show that the steepness of the associated functions controls their alignment within the attractor. This arrangement again highlights the simplicity bias, as points with similar input spacings undergo comparable transformations to reach the shared attractor. Our findings propose compelling applications. The geometry of shared attractors might allow us to infer the nature of unknown tasks. Furthermore, the simplicity bias implies that without specific incentives, modularity in RNNs may not spontaneously emerge, providing insights into the conditions required for network specialization.",
author = "Elia Turner and Omri Barak",
note = "Publisher Copyright: {\textcopyright} 2023 Neural information processing systems foundation. All rights reserved.; 37th Conference on Neural Information Processing Systems, NeurIPS 2023 ; Conference date: 10-12-2023 Through 16-12-2023",
year = "2023",
language = "English",
volume = "36",
journal = "Advances in Neural Information Processing Systems",
issn = "1049-5258",

}

2022

Dynamic compartmental computations in tuft dendrites of layer 5 neurons during motor behavior

Otor Y, Achvat S, Cermak N, Benisty H, Abboud M, Barak O et al. Dynamic compartmental computations in tuft dendrites of layer 5 neurons during motor behavior. Science. 2022 Apr 15;376(6590):267-275. https://doi.org/10.1126/science.abn1421
 

Tuft dendrites of layer 5 pyramidal neurons form specialized compartments important for motor learning and performance, yet their computational capabilities remain unclear. Structural-functional mapping of the tuft tree from the motor cortex during motor tasks revealed two morphologically distinct populations of layer 5 pyramidal tract neurons (PTNs) that exhibit specific tuft computational properties. Early bifurcating and large nexus PTNs showed marked tuft functional compartmentalization, representing different motor variable combinations within and between their two tuft hemi-trees. By contrast, late bifurcating and smaller nexus PTNs showed synchronous tuft activation. Dendritic structure and dynamic recruitment of the N-methyl-D-aspartate (NMDA)–spiking mechanism explained the differential compartmentalization patterns. Our findings support a morphologically dependent framework for motor computations, in which independent amplification units can be combinatorically recruited to represent different motor sequences within the same tree.

@article{e9cccc42f4d14164b6bfdce59211f6d5,
title = "Dynamic compartmental computations in tuft dendrites of layer 5 neurons during motor behavior",
abstract = "Tuft dendrites of layer 5 pyramidal neurons form specialized compartments important for motor learning and performance, yet their computational capabilities remain unclear. Structural-functional mapping of the tuft tree from the motor cortex during motor tasks revealed two morphologically distinct populations of layer 5 pyramidal tract neurons (PTNs) that exhibit specific tuft computational properties. Early bifurcating and large nexus PTNs showed marked tuft functional compartmentalization, representing different motor variable combinations within and between their two tuft hemi-trees. By contrast, late bifurcating and smaller nexus PTNs showed synchronous tuft activation. Dendritic structure and dynamic recruitment of the N-methyl-D-aspartate (NMDA)–spiking mechanism explained the differential compartmentalization patterns. Our findings support a morphologically dependent framework for motor computations, in which independent amplification units can be combinatorically recruited to represent different motor sequences within the same tree.",
keywords = "Action Potentials/physiology, Dendrites/physiology, Motor Cortex, Neurons, Pyramidal Cells/physiology",
author = "Yara Otor and Shay Achvat and Nathan Cermak and Hadas Benisty and Maisan Abboud and Omri Barak and Yitzhak Schiller and Alon Poleg-Polsky and Jackie Schiller",
note = "Publisher Copyright: Copyright {\textcopyright} 2022 The Authors.",
year = "2022",
month = apr,
day = "15",
doi = "10.1126/science.abn1421",
language = "English",
volume = "376",
pages = "267--275",
journal = "Science",
issn = "0036-8075",
publisher = "American Association for the Advancement of Science",
number = "6590",

}

Cancer progression as a learning process

Shomar A, Barak O, Brenner N. Cancer progression as a learning process. iScience. 2022 Mar 18;25(3):103924. https://doi.org/10.1016/j.isci.2022.103924
 

Drug resistance and metastasis—the major complications in cancer—both entail adaptation of cancer cells to stress, whether a drug or a lethal new environment. Intriguingly, these adaptive processes share similar features that cannot be explained by a pure Darwinian scheme, including dormancy, increased heterogeneity, and stress-induced plasticity. Here, we propose that learning theory offers a framework to explain these features and may shed light on these two intricate processes. In this framework, learning is performed at the single-cell level, by stress-driven exploratory trial-and-error. Such a process is not contingent on pre-existing pathways but on a random search for a state that diminishes the stress. We review underlying mechanisms that may support this search, and show by using a learning model that such exploratory learning is feasible in a high-dimensional system as the cell. At the population level, we view the tissue as a network of exploring agents that communicate, restraining cancer formation in health. In this view, disease results from the breakdown of homeostasis between cellular exploratory drive and tissue homeostasis.
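
The stress-driven exploratory trial-and-error picture can be caricatured in a few lines (a toy sketch under assumed dynamics, not the learning model analysed in the paper): a high-dimensional state performs a random walk whose amplitude scales with the mismatch from a demanded phenotype, so exploration is strong under stress and effectively freezes once a low-stress state is found.

import numpy as np

rng = np.random.default_rng(2)
N = 200
x = rng.normal(size=N)                               # high-dimensional internal state
readout = rng.normal(size=N) / np.sqrt(N)            # assumed phenotype readout
target = 1.5                                         # demanded phenotype value

for step in range(50001):
    phenotype = readout @ np.tanh(x)
    stress = (phenotype - target) ** 2
    # Exploration step scales with stress: large when mismatched, vanishing near the target.
    x += 0.05 * np.sqrt(stress) * rng.normal(size=N)
    if step % 10000 == 0:
        print(step, round(float(stress), 4))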

@article{6a8d611f056e48f6b0a1b0e38d2d2335,
title = "Cancer progression as a learning process",
abstract = "Drug resistance and metastasis—the major complications in cancer—both entail adaptation of cancer cells to stress, whether a drug or a lethal new environment. Intriguingly, these adaptive processes share similar features that cannot be explained by a pure Darwinian scheme, including dormancy, increased heterogeneity, and stress-induced plasticity. Here, we propose that learning theory offers a framework to explain these features and may shed light on these two intricate processes. In this framework, learning is performed at the single-cell level, by stress-driven exploratory trial-and-error. Such a process is not contingent on pre-existing pathways but on a random search for a state that diminishes the stress. We review underlying mechanisms that may support this search, and show by using a learning model that such exploratory learning is feasible in a high-dimensional system as the cell. At the population level, we view the tissue as a network of exploring agents that communicate, restraining cancer formation in health. In this view, disease results from the breakdown of homeostasis between cellular exploratory drive and tissue homeostasis.",
keywords = "Cancer systems biology, Evolutionary theories",
author = "Aseel Shomar and Omri Barak and Naama Brenner",
note = "{\textcopyright} 2022 The Author(s).",
year = "2022",
month = mar,
day = "18",
doi = "10.1016/j.isci.2022.103924",
language = "English",
volume = "25",
pages = "103924",
journal = "iScience",
issn = "2589-0042",
publisher = "Elsevier Inc.",
number = "3",

}

Identifying Regulation with Adversarial Surrogates

Teichner R, Shomar A, Barak O, Brenner N, Marom S, Meir R et al. Identifying Regulation with Adversarial Surrogates. bioRxiv. 2022 Jan 1;2022.10.08.511451. https://doi.org/10.1101/2022.10.08.511451
 

Homeostasis, the ability to maintain a relatively constant internal environment in the face of perturbations, is a hallmark of biological systems. It is believed that this constancy is achieved through multiple internal regulation and control processes. Given observations of a system, or even a detailed model of one, it is both valuable and extremely challenging to extract the control objectives of the homeostatic mechanisms. In this work, we develop a robust data-driven method to identify these objectives, namely to understand: “what does the system care about?”. We propose an algorithm, Identifying Regulation with Adversarial Surrogates (IRAS), that receives an array of temporal measurements of the system, and outputs a candidate for the control objective, expressed as a combination of observed variables. IRAS is an iterative algorithm consisting of two competing players. The first player, realized by an artificial deep neural network, aims to minimize a measure of invariance we refer to as the coefficient of regulation. The second player aims to render the task of the first player more difficult by forcing it to extract information about the temporal structure of the data, which is absent from similar ‘surrogate’ data. We test the algorithm on two synthetic and one natural data set, demonstrating excellent empirical results. Interestingly, our approach can also be used to extract conserved quantities, e.g., energy and momentum, in purely physical systems, as we demonstrate empirically. Competing Interest Statement: The authors have declared no competing interest.
@article{0d0a1d7a18d340928a28afeaac548627,
title = "Identifying Regulation with Adversarial Surrogates",
abstract = "Homeostasis, the ability to maintain a relatively constant internal environment in the face of perturbations, is a hallmark of biological systems. It is believed that this constancy is achieved through multiple internal regulation and control processes. Given observations of a system, or even a detailed model of one, it is both valuable and extremely challenging to extract the control objectives of the homeostatic mechanisms. In this work, we develop a robust data-driven method to identify these objectives, namely to understand: “what does the system care about?”. We propose an algorithm, Identifying Regulation with Adversarial Surrogates (IRAS), that receives an array of temporal measurements of the system, and outputs a candidate for the control objective, expressed as a combination of observed variables. IRAS is an iterative algorithm consisting of two competing players. The first player, realized by an artificial deep neural network, aims to minimize a measure of invariance we refer to as the coefficient of regulation. The second player aims to render the task of the first player more difficult by forcing it to extract information about the temporal structure of the data, which is absent from similar {\textquoteleft}surrogate{\textquoteright} data. We test the algorithm on two synthetic and one natural data set, demonstrating excellent empirical results. Interestingly, our approach can also be used to extract conserved quantities, e.g., energy and momentum, in purely physical systems, as we demonstrate empirically.Competing Interest StatementThe authors have declared no competing interest.",
author = "Ron Teichner and Aseel Shomar and O. Barak and N. Brenner and S. Marom and R. Meir and D. Eytan",
year = "2022",
month = jan,
day = "1",
doi = "10.1101/2022.10.08.511451",
language = "English",
pages = "2022.10.08.511451",
journal = "bioRxiv",
publisher = "Cold Spring Harbor Laboratory Press",

}

2021

Teaching during the COVID-19 pandemic: The experience of the Faculty of Medicine at the Technion-Israel Institute of Technology

Flugelman MY, Margalit R, Aronheim A, Barak O, Marom A, Dolnikov K et al. Teaching during the COVID-19 pandemic: The experience of the Faculty of Medicine at the Technion-Israel Institute of Technology. Israel Medical Association Journal. 2021 Jul;23(7):401-407.
 

Background: The coronavirus disease-2019 (COVID-19) pandemic forced drastic changes in all layers of life. Social distancing and lockdown drove the educational system to uncharted territories at an accelerated pace, leaving educators little time to adjust. Objectives: To describe changes in teaching during the first phase of the COVID-19 pandemic. Methods: We described the steps implemented at the Technion- Israel Institute of Technology Faculty of Medicine during the initial 4 months of the COVID-19 pandemic to preserve teaching and the academic ecosystem. Results: Several established methodologies, such as the flipped classroom and active learning, demonstrated effectiveness. In addition, we used creative methods to teach clinical medicine during the ban on bedside teaching and modified community engagement activities to meet COVID-19 induced community needs. Conclusions: The challenges and the lessons learned from teaching during the COVID-19 pandemic prompted us to adjust our teaching methods and curriculum using multiple online teaching methods and promoting self-learning. It also provided invaluable insights on our pedagogy and the teaching of medicine in the future with emphasis on students and faculty being part of the changes and adjustments in curriculum and teaching methods. However, personal interactions are essential to medical school education, as are laboratories, group simulations, and bedside teaching.

@article{12ffd3b9e8f54a478ebaf13b79fb0d36,
title = "Teaching during the covid-19 pandemic: The experience of the faculty of medicine at the technion-israel institute of technology",
abstract = "Background: The coronavirus disease-2019 (COVID-19) pandemic forced drastic changes in all layers of life. Social distancing and lockdown drove the educational system to uncharted territories at an accelerated pace, leaving educators little time to adjust. Objectives: To describe changes in teaching during the first phase of the COVID-19 pandemic. Methods: We described the steps implemented at the Technion- Israel Institute of Technology Faculty of Medicine during the initial 4 months of the COVID-19 pandemic to preserve teaching and the academic ecosystem. Results: Several established methodologies, such as the flipped classroom and active learning, demonstrated effectiveness. In addition, we used creative methods to teach clinical medicine during the ban on bedside teaching and modified community engagement activities to meet COVID-19 induced community needs. Conclusions: The challenges and the lessons learned from teaching during the COVID-19 pandemic prompted us to adjust our teaching methods and curriculum using multiple online teaching methods and promoting self-learning. It also provided invaluable insights on our pedagogy and the teaching of medicine in the future with emphasis on students and faculty being part of the changes and adjustments in curriculum and teaching methods. However, personal interactions are essential to medical school education, as are laboratories, group simulations, and bedside teaching.",
keywords = "COVID-19/epidemiology, Communicable Disease Control/methods, Education, Distance/methods, Education, Medical/organization & administration, Humans, Needs Assessment, Organizational Innovation, Outcome Assessment, Health Care, Physical Distancing, SARS-CoV-2, Schools, Medical, Teaching/trends",
author = "Flugelman, {Moshe Y.} and Ruth Margalit and Ami Aronheim and Omri Barak and Assaf Marom and Katya Dolnikov and Eyal Braun and Ayelet Raz-Pasteur and Azzam, {Zaher S.} and David Hochstein and Riad Haddad and Rachel Nave and Arieh Riskin and Dan Waisman and Robert Glueck and Michal Mekel and Yael Avraham and Uval Bar-Peled and Ronit Kacev and Michal Keren and Amir Karban and Elon Eisenberg",
note = "Publisher Copyright: {\textcopyright} 2021 Israel Medical Association. All rights reserved.",
year = "2021",
month = jul,
language = "English",
volume = "23",
pages = "401--407",
journal = "Israel Medical Association Journal",
issn = "1565-1088",
publisher = "Israel Medical Association",
number = "7",

}

Mapping low-dimensional dynamics to high-dimensional neural activity: A derivation of the ring model from the neural engineering framework

Barak O, Romani S. Mapping low-dimensional dynamics to high-dimensional neural activity: A derivation of the ring model from the neural engineering framework. Neural Computation. 2021 Mar;33(3):827-852. https://doi.org/10.1162/neco_a_01361
 

Empirical estimates of the dimensionality of neural population activity are often much lower than the population size. Similar phenomena are also observed in trained and designed neural network models. These experimental and computational results suggest that mapping low-dimensional dynamics to high-dimensional neural space is a common feature of cortical computation. Despite the ubiquity of this observation, the constraints arising from such mapping are poorly understood. Here we consider a specific example of mapping low-dimensional dynamics to high-dimensional neural activity—the neural engineering framework. We analytically solve the framework for the classic ring model—a neural network encoding a static or dynamic angular variable. Our results provide a complete characterization of the success and failure modes for this model. Based on similarities between this and other frameworks, we speculate that these results could apply to more general scenarios.
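
For readers unfamiliar with the neural engineering framework, the basic encode/decode step it builds on looks roughly like this (a generic sketch with assumed tuning curves and neuron counts, not the paper's analytical derivation): an angle on the ring is encoded by neurons with random preferred directions and read back with least-squares decoders.

import numpy as np

rng = np.random.default_rng(3)
N = 200
enc = rng.normal(size=(N, 2))
enc /= np.linalg.norm(enc, axis=1, keepdims=True)     # preferred directions on the ring
gain = rng.uniform(1.0, 2.0, N)
bias = rng.uniform(-0.5, 0.5, N)

def rates(theta):
    x = np.array([np.cos(theta), np.sin(theta)])      # the encoded 2D variable
    return np.maximum(0, gain * (enc @ x) + bias)     # rectified-linear tuning curves

# Fit linear decoders on sampled angles, then decode held-out angles.
train = np.linspace(0, 2 * np.pi, 100, endpoint=False)
A = np.stack([rates(th) for th in train])             # activity matrix (angles x neurons)
targets = np.stack([np.cos(train), np.sin(train)], axis=1)
dec, *_ = np.linalg.lstsq(A, targets, rcond=None)

for th in rng.uniform(0, 2 * np.pi, 5):
    xhat = rates(th) @ dec
    print(round(th, 2), "->", round(float(np.arctan2(xhat[1], xhat[0]) % (2 * np.pi)), 2))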

@article{568bd4c5a2424c4ebb819639d1acfcb7,
title = "Mapping low-dimensional dynamics to high-dimensional neural activity: A derivation of the ring model from the neural engineering framework",
abstract = "Empirical estimates of the dimensionality of neural population activity are often much lower than the population size. Similar phenomena are also observed in trained and designed neural network models. These experimental and computational results suggest that mapping low-dimensional dynamics to high-dimensional neural space is a common feature of cortical computation. Despite the ubiquity of this observation, the constraints arising from such mapping are poorly understood. Here we consider a specific example of mapping low-dimensional dynamics to high-dimensional neural activity—the neural engineering framework. We analytically solve the framework for the classic ring model—a neural network encoding a static or dynamic angular variable. Our results provide a complete characterization of the success and failure modes for this model. Based on similarities between this and other frameworks, we speculate that these results could apply to more general scenarios.",
author = "Omri Barak and Sandro Romani",
note = "Publisher Copyright: {\textcopyright} 2021 Massachusetts Institute of Technology.",
year = "2021",
month = mar,
doi = "10.1162/neco_a_01361",
language = "English",
volume = "33",
pages = "827--852",
journal = "Neural Computation",
issn = "0899-7667",
publisher = "MIT Press Journals",
number = "3",

}

Quality of internal representation shapes learning performance in feedback neural networks

Susman L, Mastrogiuseppe F, Brenner N, Barak O. Quality of internal representation shapes learning performance in feedback neural networks. Physical Review Research. 2021 Feb 23;3(1):013176. https://doi.org/10.1103/PhysRevResearch.3.013176
 

A fundamental feature of complex biological systems is the ability to form feedback interactions with their environment. A prominent model for studying such interactions is reservoir computing, where learning acts on low-dimensional bottlenecks. Despite the simplicity of this learning scheme, the factors contributing to or hindering the success of training in reservoir networks are in general not well understood. In this work, we study nonlinear feedback networks trained to generate a sinusoidal signal, and analyze how learning performance is shaped by the interplay between internal network dynamics and target properties. By performing exact mathematical analysis of linearized networks, we predict that learning performance is maximized when the target is characterized by an optimal, intermediate frequency which monotonically decreases with the strength of the internal reservoir connectivity. At the optimal frequency, the reservoir representation of the target signal is high-dimensional, desynchronized, and thus maximally robust to noise. We show that our predictions successfully capture the qualitative behavior of performance in nonlinear networks. Moreover, we find that the relationship between internal representations and performance can be further exploited in trained nonlinear networks to explain behaviors which do not have a linear counterpart. Our results indicate that a major determinant of learning success is the quality of the internal representation of the target, which in turn is shaped by an interplay between parameters controlling the internal network and those defining the task.
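
The training setup studied in the paper is essentially reservoir computing with an output-feedback loop; a rough sketch of that scheme follows (all parameters are assumptions, and the paper's analysis of the optimal target frequency is not reproduced here): a random recurrent reservoir is driven through feedback by the target sinusoid, a linear readout is fit by ridge regression, and the loop is then closed on the network's own output.

import numpy as np

rng = np.random.default_rng(4)
N, g, dt = 300, 1.2, 0.05
J = g * rng.normal(0, 1 / np.sqrt(N), (N, N))        # internal reservoir connectivity
w_fb = rng.uniform(-1, 1, N)                         # feedback weights

freq, T = 0.5, 4000                                  # target frequency (assumed), training steps
target = np.sin(freq * dt * np.arange(T))

# Teacher forcing: feed the target back into the reservoir and record the states.
x = rng.normal(0, 0.5, N)
states = np.zeros((T, N))
for t in range(T):
    x = x + dt * (-x + J @ np.tanh(x) + w_fb * target[max(t - 1, 0)])
    states[t] = np.tanh(x)

# Ridge-regression readout, then run with the loop closed on the network's own output.
w_out = np.linalg.solve(states.T @ states + 1e-2 * np.eye(N), states.T @ target)
z = target[-1]
closed = []
for t in range(600):
    x = x + dt * (-x + J @ np.tanh(x) + w_fb * z)
    z = np.tanh(x) @ w_out
    closed.append(z)
print("closed-loop output range:", round(min(closed), 2), round(max(closed), 2))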

@article{2c9656b4d6c94ca39df2f80cba8d4207,
title = "Quality of internal representation shapes learning performance in feedback neural networks",
abstract = "A fundamental feature of complex biological systems is the ability to form feedback interactions with their environment. A prominent model for studying such interactions is reservoir computing, where learning acts on low-dimensional bottlenecks. Despite the simplicity of this learning scheme, the factors contributing to or hindering the success of training in reservoir networks are in general not well understood. In this work, we study nonlinear feedback networks trained to generate a sinusoidal signal, and analyze how learning performance is shaped by the interplay between internal network dynamics and target properties. By performing exact mathematical analysis of linearized networks, we predict that learning performance is maximized when the target is characterized by an optimal, intermediate frequency which monotonically decreases with the strength of the internal reservoir connectivity. At the optimal frequency, the reservoir representation of the target signal is high-dimensional, desynchronized, and thus maximally robust to noise. We show that our predictions successfully capture the qualitative behavior of performance in nonlinear networks. Moreover, we find that the relationship between internal representations and performance can be further exploited in trained nonlinear networks to explain behaviors which do not have a linear counterpart. Our results indicate that a major determinant of learning success is the quality of the internal representation of the target, which in turn is shaped by an interplay between parameters controlling the internal network and those defining the task.",
author = "Lee Susman and Francesca Mastrogiuseppe and Naama Brenner and Omri Barak",
note = "Publisher Copyright: {\textcopyright} 2021 authors. Published by the American Physical Society.",
year = "2021",
month = feb,
day = "23",
doi = "10.1103/PhysRevResearch.3.013176",
language = "English",
volume = "3",
journal = "Physical Review Research",
issn = "2643-1564",
number = "1",

}

Charting and navigating the space of solutions for recurrent neural networks

Turner E, Dabholkar K, Barak O. Charting and navigating the space of solutions for recurrent neural networks. In Ranzato MA, Beygelzimer A, Dauphin Y, Liang PS, Wortman Vaughan J, editors, Advances in Neural Information Processing Systems 34 - 35th Conference on Neural Information Processing Systems, NeurIPS 2021. Neural information processing systems foundation. 2021. p. 25320-25333. (Advances in Neural Information Processing Systems).
 

In recent years Recurrent Neural Networks (RNNs) were successfully used to model the way neural activity drives task-related behavior in animals, operating under the implicit assumption that the obtained solutions are universal. Observations in both neuroscience and machine learning challenge this assumption. Animals can approach a given task with a variety of strategies, and training machine learning algorithms introduces the phenomenon of underspecification. These observations imply that every task is associated with a space of solutions. To date, the structure of this space is not understood, limiting the approach of comparing RNNs with neural data. Here, we characterize the space of solutions associated with various tasks. We first study a simple two-neuron network on a task that leads to multiple solutions. We trace the nature of the final solution back to the network’s initial connectivity and identify discrete dynamical regimes that underlie this diversity. We then examine three neuroscience-inspired tasks: Delayed discrimination, Interval discrimination, and Time reproduction. For each task, we find a rich set of solutions. One layer of variability can be found directly in the neural activity of the networks. An additional layer is uncovered by testing the trained networks’ ability to extrapolate, as a perturbation to a system often reveals hidden structure. Furthermore, we relate extrapolation patterns to specific dynamical objects and effective algorithms found by the networks. We introduce a tool to derive the reduced dynamics of networks by generating a compact directed graph describing the essence of the dynamics with regards to behavioral inputs and outputs. Using this representation, we can partition the solutions to each task into a handful of types and show that neural features can partially predict them. Taken together, our results shed light on the concept of the space of solutions and its uses both in Machine learning and in Neuroscience.

@inproceedings{33ea127ab19e428e975db717d1b509fc,
title = "Charting and navigating the space of solutions for recurrent neural networks",
abstract = "In recent years Recurrent Neural Networks (RNNs) were successfully used to model the way neural activity drives task-related behavior in animals, operating under the implicit assumption that the obtained solutions are universal. Observations in both neuroscience and machine learning challenge this assumption. Animals can approach a given task with a variety of strategies, and training machine learning algorithms introduces the phenomenon of underspecification. These observations imply that every task is associated with a space of solutions. To date, the structure of this space is not understood, limiting the approach of comparing RNNs with neural data. Here, we characterize the space of solutions associated with various tasks. We first study a simple two-neuron network on a task that leads to multiple solutions. We trace the nature of the final solution back to the network{\textquoteright}s initial connectivity and identify discrete dynamical regimes that underlie this diversity. We then examine three neuroscience-inspired tasks: Delayed discrimination, Interval discrimination, and Time reproduction. For each task, we find a rich set of solutions. One layer of variability can be found directly in the neural activity of the networks. An additional layer is uncovered by testing the trained networks{\textquoteright} ability to extrapolate, as a perturbation to a system often reveals hidden structure. Furthermore, we relate extrapolation patterns to specific dynamical objects and effective algorithms found by the networks. We introduce a tool to derive the reduced dynamics of networks by generating a compact directed graph describing the essence of the dynamics with regards to behavioral inputs and outputs. Using this representation, we can partition the solutions to each task into a handful of types and show that neural features can partially predict them. Taken together, our results shed light on the concept of the space of solutions and its uses both in Machine learning and in Neuroscience.",
author = "Elia Turner and Kabir Dabholkar and Omri Barak",
note = "Publisher Copyright: {\textcopyright} 2021 Neural information processing systems foundation. All rights reserved.; 35th Conference on Neural Information Processing Systems, NeurIPS 2021 ; Conference date: 06-12-2021 Through 14-12-2021",
year = "2021",
language = "English",
series = "Advances in Neural Information Processing Systems",
publisher = "Neural information processing systems foundation",
pages = "25320--25333",
editor = "Marc'Aurelio Ranzato and Alina Beygelzimer and Yann Dauphin and Liang, {Percy S.} and {Wortman Vaughan}, Jenn",
booktitle = "Advances in Neural Information Processing Systems 34 - 35th Conference on Neural Information Processing Systems, NeurIPS 2021",

}

2020

Cell-Type-Specific Outcome Representation in the Primary Motor Cortex

Levy S, Lavzin M, Benisty H, Ghanayim A, Dubin U, Achvat S et al. Cell-Type-Specific Outcome Representation in the Primary Motor Cortex. Neuron. 2020 Sep 9;107(5):954-971.e9. https://doi.org/10.1016/j.neuron.2020.06.006
 

Monitoring outcome is critical for acquiring skilled movements. Levy et al. describe activity in subpopulations of layer 2–3 motor cortex pyramidal neurons that distinctly report outcomes of previous successes and failures independent of kinematics and reward. These signals may serve as reinforcement learning processes involved in maintaining or learning skilled movements.

@article{a38b7f1254834fff9ebfae3c6fa7ae26,
title = "Cell-Type-Specific Outcome Representation in the Primary Motor Cortex",
abstract = "Monitoring outcome is critical for acquiring skilled movements. Levy et al. describe activity in subpopulations of layer 2–3 motor cortex pyramidal neurons that distinctly report outcomes of previous successes and failures independent of kinematics and reward. These signals may serve as reinforcement learning processes involved in maintaining or learning skilled movements.",
keywords = "layer 2-3, layer 5, motor cortex, motor learning, outcome, pyramidal tract neurons, reward, two-photon calcium imaging",
author = "Shahar Levy and Maria Lavzin and Hadas Benisty and Amir Ghanayim and Uri Dubin and Shay Achvat and Zohar Brosh and Fadi Aeed and Mensh, {Brett D.} and Schiller Yitzhak and Ron Meir and Omri Barak and Ronen Talmon and Hantman, {Adam W.} and Jackie Schiller",
note = "Publisher Copyright: {\textcopyright} 2020 Elsevier Inc.",
year = "2020",
month = sep,
day = "9",
doi = "10.1016/j.neuron.2020.06.006",
language = "English",
volume = "107",
pages = "954--971.e9",
journal = "Neuron",
issn = "0896-6273",
publisher = "Cell Press",
number = "5",

}

Local and global features of genetic networks supporting a phenotypic switch

Shomar A, Barak O, Brenner N. Local and global features of genetic networks supporting a phenotypic switch. PLoS ONE. 2020 Sep;15(9 September):e0238433. https://doi.org/10.1371/journal.pone.0238433
 

Phenotypic switches are associated with alterations in the cell’s gene expression profile and are vital to many aspects of biology. Previous studies have identified local motifs of the genetic regulatory network that could underlie such switches. Recent advancements allowed the study of networks at the global, many-gene, level; however, the relationship between the local and global scales in giving rise to phenotypic switches remains elusive. In this work, we studied the epithelial-mesenchymal transition (EMT) using a gene regulatory network model. This model supports two clusters of stable steady-states identified with the epithelial and mesenchymal phenotypes, and a range of intermediate less stable hybrid states, whose importance in cancer has been recently highlighted. Using an array of network perturbations and quantifying the resulting landscape, we investigated how features of the network at different levels give rise to these landscape properties. We found that local connectivity patterns affect the landscape in a mostly incremental manner; in particular, a specific previously identified double-negative feedback motif is not required when embedded in the full network, because the landscape is maintained at a global level. Nevertheless, despite the distributed nature of the switch, it is possible to find combinations of a few local changes that disrupt it. At the level of network architecture, we identified a crucial role for peripheral genes that act as incoming signals to the network in creating clusters of states. Such incoming signals are a signature of modularity and are expected to appear also in other biological networks. Hybrid states between epithelial and mesenchymal arise in the model due to barriers in the interaction between genes, causing hysteresis at all connections. Our results suggest emergent switches can neither be pinpointed to local motifs, nor do they arise as typical properties of random network ensembles. Rather, they arise through an interplay between the nature of local interactions, and the core-periphery structure induced by the modularity of the cell.
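
The double-negative feedback motif mentioned above is the textbook toggle switch; as a point of reference, here it is as a pair of ODEs (a standard illustration with assumed parameters, not the paper's full EMT network model): two mutually repressing genes produce two stable expression states.

def toggle(a0, b0, k=2.0, n=4, dt=0.01, steps=20000):
    # da/dt = k / (1 + b^n) - a : gene a is repressed by b (and decays)
    # db/dt = k / (1 + a^n) - b : gene b is repressed by a (and decays)
    a, b = a0, b0
    for _ in range(steps):
        a, b = a + dt * (k / (1 + b ** n) - a), b + dt * (k / (1 + a ** n) - b)
    return round(a, 2), round(b, 2)

print(toggle(2.0, 0.1))   # settles to the "a high / b low" state
print(toggle(0.1, 2.0))   # settles to the "b high / a low" state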

@article{a2f68ef954df48f89aac738679a174a1,
title = "Local and global features of genetic networks supporting a phenotypic switch",
abstract = "Phenotypic switches are associated with alterations in the cell{\textquoteright}s gene expression profile and are vital to many aspects of biology. Previous studies have identified local motifs of the genetic regulatory network that could underlie such switches. Recent advancements allowed the study of networks at the global, many-gene, level; however, the relationship between the local and global scales in giving rise to phenotypic switches remains elusive. In this work, we studied the epithelial-mesenchymal transition (EMT) using a gene regulatory network model. This model supports two clusters of stable steady-states identified with the epithelial and mesenchymal phenotypes, and a range of intermediate less stable hybrid states, whose importance in cancer has been recently highlighted. Using an array of network perturbations and quantifying the resulting landscape, we investigated how features of the network at different levels give rise to these landscape properties. We found that local connectivity patterns affect the landscape in a mostly incremental manner; in particular, a specific previously identified double-negative feedback motif is not required when embedded in the full network, because the landscape is maintained at a global level. Nevertheless, despite the distributed nature of the switch, it is possible to find combinations of a few local changes that disrupt it. At the level of network architecture, we identified a crucial role for peripheral genes that act as incoming signals to the network in creating clusters of states. Such incoming signals are a signature of modularity and are expected to appear also in other biological networks. Hybrid states between epithelial and mesenchymal arise in the model due to barriers in the interaction between genes, causing hysteresis at all connections. Our results suggest emergent switches can neither be pinpointed to local motifs, nor do they arise as typical properties of random network ensembles. Rather, they arise through an interplay between the nature of local interactions, and the core-periphery structure induced by the modularity of the cell.",
keywords = "Biological Variation, Population/genetics, Epithelial-Mesenchymal Transition/genetics, Feedback, Physiological/physiology, Gene Regulatory Networks/genetics, Humans, Models, Biological, Models, Genetic, Models, Statistical, Phenotype",
author = "Aseel Shomar and Omri Barak and Naama Brenner",
note = "Publisher Copyright: {\textcopyright} 2020 Shomar et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.",
year = "2020",
month = sep,
doi = "10.1371/journal.pone.0238433",
language = "אנגלית",
volume = "15",
pages = "e0238433",
journal = "PLoS ONE",
issn = "1932-6203",
publisher = "Public Library of Science San Francisco, USA",
number = "9 September",

}

Repeated sequential learning increases memory capacity via effective decorrelation in a recurrent neural network

Kurikawa T, Barak O, Kaneko K. Repeated sequential learning increases memory capacity via effective decorrelation in a recurrent neural network. Physical Review Research. 2020 Jun;2(2):023307. https://doi.org/10.1103/PhysRevResearch.2.023307
 

Memories in neural systems are shaped through the interplay of neural and learning dynamics under external inputs. This interplay can result in either overwriting or strengthening of memories as the system is repeatedly exposed to multiple input-output mappings, but it is unclear which effect dominates. By introducing a simple local learning rule to a neural network, we found that the memory capacity is drastically increased by sequentially repeating the learning steps of input-output mappings. We show that the resulting connectivity decorrelates the target patterns. This process is associated with the emergence of spontaneous activity that intermittently exhibits neural patterns corresponding to embedded memories. Stabilization of memories is achieved by a distinct bifurcation from the spontaneous activity under the application of each input.
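
The training schedule described above, sequentially and repeatedly presenting several input-output mappings to a local learning rule, can be sketched as follows; the specific error-correcting rule, network size and parameters are illustrative assumptions, not the rule used in the paper, and the resulting overlaps will depend on them:

    import numpy as np

    rng = np.random.default_rng(0)
    N, P, reps, lr = 100, 5, 30, 0.5

    inputs  = rng.choice([-1.0, 1.0], size=(P, N))   # input patterns
    targets = rng.choice([-1.0, 1.0], size=(P, N))   # target patterns
    W = rng.normal(0, 1 / np.sqrt(N), size=(N, N))

    def relax(W, inp, steps=50):
        x = np.zeros(N)
        for _ in range(steps):
            x = np.tanh(W @ x + inp)
        return x

    # Sequential, repeated presentation of the mappings with a local,
    # error-correcting (pre- times post-synaptic) update.
    for _ in range(reps):
        for mu in range(P):
            x = relax(W, inputs[mu])
            W += lr * np.outer(targets[mu] - x, x) / (x @ x + 1e-9)

    # Recall: overlap between the relaxed state under each input and its target.
    for mu in range(P):
        x = relax(W, inputs[mu])
        print("mapping", mu, "overlap:", float(targets[mu] @ np.sign(x)) / N)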

@article{40a501f486574994bdc9783a267ac935,
title = "Repeated sequential learning increases memory capacity via effective decorrelation in a recurrent neural network",
abstract = "Memories in neural systems are shaped through the interplay of neural and learning dynamics under external inputs. This interplay can result in either overwriting or strengthening of memories as the system is repeatedly exposed to multiple input-output mappings, but it is unclear which effect dominates. By introducing a simple local learning rule to a neural network, we found that the memory capacity is drastically increased by sequentially repeating the learning steps of input-output mappings. We show that the resulting connectivity decorrelates the target patterns. This process is associated with the emergence of spontaneous activity that intermittently exhibits neural patterns corresponding to embedded memories. Stabilization of memories is achieved by a distinct bifurcation from the spontaneous activity under the application of each input.",
author = "Tomoki Kurikawa and Omri Barak and Kunihiko Kaneko",
note = "Publisher Copyright: {\textcopyright} 2020 authors. Published by the American Physical Society. Published by the American Physical Society under the terms of the Creative Commons Attribution 4.0 International license. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.",
year = "2020",
month = jun,
doi = "10.1103/PhysRevResearch.2.023307",
language = "אנגלית",
volume = "2",
journal = "Physical Review Research",
issn = "2643-1564",
number = "2",

}

Scale free topology as an effective feedback system

Rivkind A, Schreier H, Brenner N, Barak O. Scale free topology as an effective feedback system. PLoS Computational Biology. 2020 May;16(5):e1007825. https://doi.org/10.1371/journal.pcbi.1007825
 

Biological networks are often heterogeneous in their connectivity pattern, with degree distributions featuring a heavy tail of highly connected hubs. The implications of this heterogeneity on dynamical properties are a topic of much interest. Here we show that interpreting topology as a feedback circuit can provide novel insights on dynamics. Based on the observation that in finite networks a small number of hubs have a disproportionate effect on the entire system, we construct an approximation by lumping these nodes into a single effective hub, which acts as a feedback loop with the rest of the nodes. We use this approximation to study dynamics of networks with scale-free degree distributions, focusing on their probability of convergence to fixed points. We find that the approximation preserves convergence statistics over a wide range of settings. Our mapping provides a parametrization of scale free topology which is predictive at the ensemble level and also retains properties of individual realizations. Specifically, outgoing hubs have an organizing role that can drive the network to convergence, in analogy to suppression of chaos by an external drive. In contrast, incoming hubs have no such property, resulting in a marked difference between the behavior of networks with outgoing vs. incoming scale free degree distribution. Combining feedback analysis with mean field theory predicts a transition between convergent and divergent dynamics which is corroborated by numerical simulations. Furthermore, they highlight the effect of a handful of outlying hubs, rather than of the connectivity distribution law as a whole, on network dynamics.
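
The hub-lumping idea described above can be sketched as follows; the reduction rule used here (summing the lumped hubs' incoming and outgoing weights into a single effective node) and the sign-dynamics convergence test are simplifications chosen for illustration, not the paper's exact mapping:

    import numpy as np

    rng = np.random.default_rng(1)
    N, k = 200, 3   # network size and number of hubs to lump (illustrative)

    # Signed network with a heavy-tailed out-degree: a few nodes broadcast widely.
    out_deg = np.minimum(rng.zipf(2.0, size=N) * 2, N - 1)
    J = np.zeros((N, N))
    for j in range(N):
        targets = rng.choice(N, size=out_deg[j], replace=False)
        J[targets, j] = rng.choice([-1.0, 1.0], size=out_deg[j])

    def converges(J, steps=200):
        x = rng.choice([-1.0, 1.0], size=J.shape[0])
        for _ in range(steps):
            x_new = np.sign(J @ x)
            x_new[x_new == 0] = 1.0
            if np.array_equal(x_new, x):
                return True
            x = x_new
        return False

    # Reduced system: lump the k largest out-degree nodes into one effective hub
    # whose incoming and outgoing weights are the summed weights of the lumped nodes.
    hubs = np.argsort(-np.count_nonzero(J, axis=0))[:k]
    rest = np.setdiff1d(np.arange(N), hubs)
    J_red = np.zeros((len(rest) + 1, len(rest) + 1))
    J_red[:-1, :-1] = J[np.ix_(rest, rest)]
    J_red[:-1, -1] = J[rest][:, hubs].sum(axis=1)   # effective hub -> rest
    J_red[-1, :-1] = J[hubs][:, rest].sum(axis=0)   # rest -> effective hub

    print("full net converges to a fixed point:", converges(J))
    print("reduced net converges to a fixed point:", converges(J_red))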

@article{acdefe041e9a42feab98b242c8d267c5,
title = "Scale free topology as an effective feedback system",
abstract = "Biological networks are often heterogeneous in their connectivity pattern, with degree distributions featuring a heavy tail of highly connected hubs. The implications of this heterogeneity on dynamical properties are a topic of much interest. Here we show that interpreting topology as a feedback circuit can provide novel insights on dynamics. Based on the observation that in finite networks a small number of hubs have a disproportionate effect on the entire system, we construct an approximation by lumping these nodes into a single effective hub, which acts as a feedback loop with the rest of the nodes. We use this approximation to study dynamics of networks with scale-free degree distributions, focusing on their probability of convergence to fixed points. We find that the approximation preserves convergence statistics over a wide range of settings. Our mapping provides a parametrization of scale free topology which is predictive at the ensemble level and also retains properties of individual realizations. Specifically, outgoing hubs have an organizing role that can drive the network to convergence, in analogy to suppression of chaos by an external drive. In contrast, incoming hubs have no such property, resulting in a marked difference between the behavior of networks with outgoing vs. incoming scale free degree distribution. Combining feedback analysis with mean field theory predicts a transition between convergent and divergent dynamics which is corroborated by numerical simulations. Furthermore, they highlight the effect of a handful of outlying hubs, rather than of the connectivity distribution law as a whole, on network dynamics.",
keywords = "Computational Biology/methods, Feedback, Gene Regulatory Networks/physiology, Models, Statistical, Models, Theoretical, Molecular Dynamics Simulation, Probability, Systems Analysis",
author = "Alexander Rivkind and Hallel Schreier and Naama Brenner and Omri Barak",
note = "Publisher Copyright: {\textcopyright} 2020 Rivkind et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.",
year = "2020",
month = may,
doi = "10.1371/journal.pcbi.1007825",
language = "אנגלית",
volume = "16",
pages = "e1007825",
journal = "PLoS Computational Biology",
issn = "1553-734X",
publisher = "Public Library of Science",
number = "5",

}

Dynamics of random recurrent networks with correlated low-rank structure

Schuessler F, Dubreuil A, Mastrogiuseppe F, Ostojic S, Barak O. Dynamics of random recurrent networks with correlated low-rank structure. Physical Review Research. 2020 Feb;2(1):013111. https://doi.org/10.1103/PhysRevResearch.2.013111
 

A given neural network in the brain is involved in many different tasks. This implies that, when considering a specific task, the network's connectivity contains a component which is related to the task and another component which can be considered random. Understanding the interplay between the structured and random components and their effect on network dynamics and functionality is an important open question. Recent studies addressed the coexistence of random and structured connectivity but considered the two parts to be uncorrelated. This constraint limits the dynamics and leaves the random connectivity nonfunctional. Algorithms that train networks to perform specific tasks typically generate correlations between structure and random connectivity. Here we study nonlinear networks with correlated structured and random components, assuming the structure to have a low rank. We develop an analytic framework to establish the precise effect of the correlations on the eigenvalue spectrum of the joint connectivity. We find that the spectrum consists of a bulk and multiple outliers, whose location is predicted by our theory. Using mean-field theory, we show that these outliers directly determine both the fixed points of the system and their stability. Taken together, our analysis elucidates how correlations allow structured and random connectivity to synergistically extend the range of computations available to networks.
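
A quick numerical illustration of the "bulk plus outlier" picture described above: the connectivity below combines a random component with a rank-one structure that is partly built from the random matrix itself; the form and strength of that correlation, and all parameters, are arbitrary choices for illustration rather than the parametrization analyzed in the paper:

    import numpy as np

    rng = np.random.default_rng(2)
    N, g = 1000, 0.8

    chi = rng.normal(0, 1 / np.sqrt(N), size=(N, N))    # random component
    n = rng.normal(0, 1, size=N)
    # Low-rank structure correlated with the random part: m has a component
    # along chi @ n (illustrative choice of correlation).
    m = 1.2 * n / N + 2.0 * (chi @ n) / N

    J = g * chi + np.outer(m, n)                         # joint connectivity
    eig = np.linalg.eigvals(J)

    print("approximate bulk radius:", g)
    print("largest |eigenvalue|   :", float(np.abs(eig).max()))
    print("eigenvalues outside the bulk:", int(np.sum(np.abs(eig) > g + 0.05)))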

@article{2d69e8d9e71a4c86bba595460b233851,
title = "Dynamics of random recurrent networks with correlated low-rank structure",
abstract = "A given neural network in the brain is involved in many different tasks. This implies that, when considering a specific task, the network's connectivity contains a component which is related to the task and another component which can be considered random. Understanding the interplay between the structured and random components and their effect on network dynamics and functionality is an important open question. Recent studies addressed the coexistence of random and structured connectivity but considered the two parts to be uncorrelated. This constraint limits the dynamics and leaves the random connectivity nonfunctional. Algorithms that train networks to perform specific tasks typically generate correlations between structure and random connectivity. Here we study nonlinear networks with correlated structured and random components, assuming the structure to have a low rank. We develop an analytic framework to establish the precise effect of the correlations on the eigenvalue spectrum of the joint connectivity. We find that the spectrum consists of a bulk and multiple outliers, whose location is predicted by our theory. Using mean-field theory, we show that these outliers directly determine both the fixed points of the system and their stability. Taken together, our analysis elucidates how correlations allow structured and random connectivity to synergistically extend the range of computations available to networks.",
author = "Friedrich Schuessler and Alexis Dubreuil and Francesca Mastrogiuseppe and Srdjan Ostojic and Omri Barak",
note = "Publisher Copyright: {\textcopyright} 2020 authors. Published by the American Physical Society. Published by the American Physical Society under the terms of the Creative Commons Attribution 4.0 International license. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.",
year = "2020",
month = feb,
doi = "10.1103/PhysRevResearch.2.013111",
language = "אנגלית",
volume = "2",
journal = "Physical Review Research",
issn = "2643-1564",
number = "1",

}

The interplay between randomness and structure during learning in RNNs

Schuessler F, Mastrogiuseppe F, Dubreuil A, Ostojic S, Barak O. The interplay between randomness and structure during learning in RNNs. Advances in Neural Information Processing Systems. 2020;2020-December.
 

Recurrent neural networks (RNNs) trained on low-dimensional tasks have been widely used to model functional biological networks. However, the solutions found by learning and the effect of initial connectivity are not well understood. Here, we examine RNNs trained using gradient descent on different tasks inspired by the neuroscience literature. We find that the changes in recurrent connectivity can be described by low-rank matrices, despite the unconstrained nature of the learning algorithm. To identify the origin of the low-rank structure, we turn to an analytically tractable setting: training a linear RNN on a simplified task. We show how the low-dimensional task structure leads to low-rank changes to connectivity. This low-rank structure allows us to explain and quantify the phenomenon of accelerated learning in the presence of random initial connectivity. Altogether, our study opens a new perspective to understanding trained RNNs in terms of both the learning process and the resulting network structure.
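
The analysis step described above (checking whether the learning-induced change in recurrent connectivity is low rank) can be sketched as follows; the toy evidence-integration task, the Adam optimizer and all hyperparameters are illustrative assumptions, and how cleanly low rank the change comes out in this toy run depends on them:

    import torch

    torch.manual_seed(0)
    N, T, batch = 128, 20, 64

    # Vanilla RNN with full-rank random initial connectivity, trained by
    # unconstrained gradient-based learning on a toy evidence-integration task.
    W_in  = torch.randn(N, 1) / N**0.5
    W0    = 0.9 * torch.randn(N, N) / N**0.5        # initial recurrent weights
    W     = W0.clone().requires_grad_(True)
    w_out = (torch.randn(1, N) / N**0.5).requires_grad_(True)

    opt = torch.optim.Adam([W, w_out], lr=1e-2)
    for step in range(500):
        coh = 0.5 * torch.randn(batch, 1, 1)         # per-trial evidence strength
        u = coh.repeat(1, T, 1) + 0.1 * torch.randn(batch, T, 1)
        target = torch.sign(coh[:, :, 0])            # report the sign of the evidence
        h = torch.zeros(batch, N)
        for t in range(T):
            h = torch.tanh(h @ W.T + u[:, t] @ W_in.T)
        loss = ((h @ w_out.T - target) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()

    # How low rank is the learning-induced change in recurrent connectivity?
    s = torch.linalg.svdvals(W.detach() - W0)
    print("top 5 singular values of the change:", s[:5].tolist())
    print("fraction of squared norm in top 2 modes:", float((s[:2]**2).sum() / (s**2).sum()))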

@article{6774955aac9f4c52a1cc32b3ccbee6fe,
title = "The interplay between randomness and structure during learning in RNNs",
abstract = "Recurrent neural networks (RNNs) trained on low-dimensional tasks have been widely used to model functional biological networks. However, the solutions found by learning and the effect of initial connectivity are not well understood. Here, we examine RNNs trained using gradient descent on different tasks inspired by the neuroscience literature. We find that the changes in recurrent connectivity can be described by low-rank matrices, despite the unconstrained nature of the learning algorithm. To identify the origin of the low-rank structure, we turn to an analytically tractable setting: training a linear RNN on a simplified task. We show how the low-dimensional task structure leads to low-rank changes to connectivity. This low-rank structure allows us to explain and quantify the phenomenon of accelerated learning in the presence of random initial connectivity. Altogether, our study opens a new perspective to understanding trained RNNs in terms of both the learning process and the resulting network structure.",
author = "Friedrich Schuessler and Francesca Mastrogiuseppe and Alexis Dubreuil and Srdjan Ostojic and Omri Barak",
note = "Publisher Copyright: {\textcopyright} 2020 Neural information processing systems foundation. All rights reserved.; 34th Conference on Neural Information Processing Systems, NeurIPS 2020 ; Conference date: 06-12-2020 Through 12-12-2020",
year = "2020",
language = "אנגלית",
volume = "2020-December",
journal = "Advances in Neural Information Processing Systems",
issn = "1049-5258",

}

The interplay between randomness and structure during learning in RNNs

Schuessler F, Mastrogiuseppe F, Dubreuil A, Ostojic S, Barak O. The interplay between randomness and structure during learning in RNNs. In Larochelle H, Ranzato M, Hadsell R, Balcan MF, Lin H, editors, Advances in Neural Information Processing Systems. Vol. 33. 2020. p. 13352-13362
@inproceedings{d4d6b2c838c8409da5681dd6cb799e5c,
title = "The interplay between randomness and structure during learning in RNNs",
author = "Friedrich Schuessler and Francesca Mastrogiuseppe and Alexis Dubreuil and Srdjan Ostojic and Omri Barak",
year = "2020",
language = "אנגלית",
isbn = "9781713829546",
volume = "33",
pages = "13352--13362",
editor = "H. Larochelle and M. Ranzato and R. Hadsell and Balcan, {M. F. } and H. Lin",
booktitle = "Advances in Neural Information Processing Systems",

}

Implementing inductive bias for different navigation tasks through diverse RNN attractors

Xu T, Barak O. Implementing inductive bias for different navigation tasks through diverse RNN attractors. 2020. Paper presented at 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia.
 

Navigation is crucial for animal behavior and is assumed to require an internal representation of the external environment, termed a cognitive map. The precise form of this representation is often considered to be a metric representation of space. An internal representation, however, is judged by its contribution to performance on a given task, and may thus vary between different types of navigation tasks. Here we train a recurrent neural network that controls an agent performing several navigation tasks in a simple environment. To focus on internal representations, we split learning into a task-agnostic pre-training stage that modifies internal connectivity and a task-specific Q learning stage that controls the network's output. We show that pre-training shapes the attractor landscape of the networks, leading to either a continuous attractor, discrete attractors or a disordered state. These structures induce bias onto the Q-Learning phase, leading to a performance pattern across the tasks corresponding to metric and topological regularities. By combining two types of networks in a modular structure, we could get better performance for both regularities. Our results show that, in recurrent networks, inductive bias takes the form of attractor landscapes - which can be shaped by pre-training and analyzed using dynamical systems methods. Furthermore, we demonstrate that non-metric representations are useful for navigation tasks, and their combination with metric representation leads to flexible multiple-task learning.
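
The dynamical-systems analysis mentioned above (characterizing a network's attractor landscape) can be sketched by relaxing the autonomous dynamics from many initial states and counting distinct endpoints; the hand-built Hopfield-like connectivity below merely stands in for a pre-trained network, and the gain and sizes are arbitrary:

    import numpy as np

    rng = np.random.default_rng(3)
    N = 200

    # Hand-built connectivity with a few discrete attractors (Hopfield-like),
    # standing in for a pre-trained network; the analysis below is the point.
    xi = rng.choice([-1.0, 1.0], size=(2, N))
    W = (xi.T @ xi) / N

    def run_autonomous(W, x0, steps=200):
        x = x0.copy()
        for _ in range(steps):
            x = np.tanh(2.0 * W @ x)
        return x

    # Probe the attractor landscape: relax from many random states and count
    # distinct endpoints (a handful of endpoints suggests discrete attractors; a
    # continuum of endpoints would suggest a continuous attractor or disorder).
    ends = np.array([run_autonomous(W, rng.normal(0, 1, N)) for _ in range(50)])
    unique_endpoints = np.unique(np.sign(ends), axis=0)
    print("distinct endpoint sign patterns:", unique_endpoints.shape[0])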

@conference{6f78077c474345c7bc6126bdbdeba951,
title = "IMPLEMENTING INDUCTIVE BIAS FOR DIFFERENT NAVIGATION TASKS THROUGH DIVERSE RNN ATTRRACTORS",
abstract = "Navigation is crucial for animal behavior and is assumed to require an internal representation of the external environment, termed a cognitive map. The precise form of this representation is often considered to be a metric representation of space. An internal representation, however, is judged by its contribution to performance on a given task, and may thus vary between different types of navigation tasks. Here we train a recurrent neural network that controls an agent performing several navigation tasks in a simple environment. To focus on internal representations, we split learning into a task-agnostic pre-training stage that modifies internal connectivity and a task-specific Q learning stage that controls the network's output. We show that pre-training shapes the attractor landscape of the networks, leading to either a continuous attractor, discrete attractors or a disordered state. These structures induce bias onto the Q-Learning phase, leading to a performance pattern across the tasks corresponding to metric and topological regularities. By combining two types of networks in a modular structure, we could get better performance for both regularities. Our results show that, in recurrent networks, inductive bias takes the form of attractor landscapes - which can be shaped by pre-training and analyzed using dynamical systems methods. Furthermore, we demonstrate that non-metric representations are useful for navigation tasks, and their combination with metric representation leads to flexibile multiple-task learning.",
author = "Tie Xu and Omri Barak",
note = "Publisher Copyright: {\textcopyright} 2020 8th International Conference on Learning Representations, ICLR 2020. All rights reserved.; 8th International Conference on Learning Representations, ICLR 2020 ; Conference date: 30-04-2020",
year = "2020",
language = "אנגלית",

}

2019

Stable memory with unstable synapses

Susman L, Brenner N, Barak O. Stable memory with unstable synapses. Nature Communications. 2019 Dec 1;10(1):4441. https://doi.org/10.1038/s41467-019-12306-2
 

What is the physiological basis of long-term memory? The prevailing view in Neuroscience attributes changes in synaptic efficacy to memory acquisition, implying that stable memories correspond to stable connectivity patterns. However, an increasing body of experimental evidence points to significant, activity-independent fluctuations in synaptic strengths. How memories can survive these fluctuations and the accompanying stabilizing homeostatic mechanisms is a fundamental open question. Here we explore the possibility of memory storage within a global component of network connectivity, while individual connections fluctuate. We find that homeostatic stabilization of fluctuations differentially affects different aspects of network connectivity. Specifically, memories stored as time-varying attractors of neural dynamics are more resilient to erosion than fixed-points. Such dynamic attractors can be learned by biologically plausible learning-rules and support associative retrieval. Our results suggest a link between the properties of learning-rules and those of network-level memory representations, and point at experimentally measurable signatures.
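
The first idea above, that a memory stored in a global (low-rank) component of connectivity can survive large fluctuations of individual synapses, can be illustrated with the toy sketch below; it does not implement the paper's homeostatic mechanisms, learning rules or time-varying attractors, and all parameters are arbitrary:

    import numpy as np

    rng = np.random.default_rng(4)
    N, steps, noise = 200, 1000, 0.01

    xi = rng.choice([-1.0, 1.0], size=N)
    W0 = np.outer(xi, xi) / N        # memory stored as a global, rank-one component
    W = W0.copy()

    # Individual synapses drift as a random walk, independent of activity.
    for _ in range(steps):
        W = W + noise * rng.normal(0, 1 / np.sqrt(N), size=(N, N))

    # Individual connections have changed by several times their original scale ...
    print("relative change of the weight matrix:",
          float(np.linalg.norm(W - W0) / np.linalg.norm(W0)))
    # ... yet the memory remains readable from the leading eigenvector of the
    # symmetrized connectivity (the global component is still an outlier).
    vals, vecs = np.linalg.eigh((W + W.T) / 2)
    v = vecs[:, -1]
    print("overlap of leading eigenvector with the stored pattern:",
          float(abs(v @ xi) / np.sqrt(N)))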

@article{ba28fda2d00d45dc862927d0fd66fd06,
title = "Stable memory with unstable synapses",
abstract = "What is the physiological basis of long-term memory? The prevailing view in Neuroscience attributes changes in synaptic efficacy to memory acquisition, implying that stable memories correspond to stable connectivity patterns. However, an increasing body of experimental evidence points to significant, activity-independent fluctuations in synaptic strengths. How memories can survive these fluctuations and the accompanying stabilizing homeostatic mechanisms is a fundamental open question. Here we explore the possibility of memory storage within a global component of network connectivity, while individual connections fluctuate. We find that homeostatic stabilization of fluctuations differentially affects different aspects of network connectivity. Specifically, memories stored as time-varying attractors of neural dynamics are more resilient to erosion than fixed-points. Such dynamic attractors can be learned by biologically plausible learning-rules and support associative retrieval. Our results suggest a link between the properties of learning-rules and those of network-level memory representations, and point at experimentally measurable signatures.",
author = "Lee Susman and Naama Brenner and Omri Barak",
note = "Publisher Copyright: {\textcopyright} 2019, The Author(s).",
year = "2019",
month = dec,
day = "1",
doi = "10.1038/s41467-019-12306-2",
language = "אנגלית",
volume = "10",
journal = "Nature Communications",
issn = "2041-1723",
publisher = "Nature Publishing Group",
number = "1",

}

One step back, two steps forward: Interference and learning in recurrent neural networks

Beer C, Barak O. One step back, two steps forward: Interference and learning in recurrent neural networks. Neural Computation. 2019 Oct 1;31(10):1985-2003. https://doi.org/10.1162/neco_a_01222
 

Artificial neural networks, trained to perform cognitive tasks, have recently been used as models for neural recordings from animals performing these tasks. While some progress has been made in performing such comparisons, the evolution of network dynamics throughout learning remains unexplored. This is paralleled by an experimental focus on recording from trained animals, with few studies following neural activity throughout training. In this work, we address this gap in the realm of artificial networks by analyzing networks that are trained to perform memory and pattern generation tasks. The functional aspect of these tasks corresponds to dynamical objects in the fully trained network—a line attractor or a set of limit cycles for the two respective tasks. We use these dynamical objects as anchors to study the effect of learning on their emergence. We find that the sequential nature of learning—one trial at a time—has major consequences for the learning trajectory and its final outcome. Specifically, we show that least mean squares (LMS), a simple gradient descent suggested as a biologically plausible version of the FORCE algorithm, is constantly obstructed by forgetting, which is manifested as the destruction of dynamical objects from previous trials. The degree of interference is determined by the correlation between different trials. We show which specific ingredients of FORCE avoid this phenomenon. Overall, this difference results in convergence that is orders of magnitude slower for LMS. Learning implies accumulating information across multiple trials to form the overall concept of the task. Our results show that interference between trials can greatly affect learning in a learning-rule-dependent manner. These insights can help design experimental protocols that minimize such interference, and possibly infer underlying learning rules by observing behavior and neural activity throughout learning.
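
A linear-readout caricature of the LMS versus FORCE comparison above: both rules fit the same readout, but recursive least squares (the core ingredient of FORCE) typically converges in a single pass while plain LMS needs many. The regressor statistics, thresholds and sizes below are illustrative assumptions, and the recurrent, feedback-driven setting of the paper is not modeled:

    import numpy as np

    rng = np.random.default_rng(5)
    N, T = 50, 500

    # Correlated "reservoir" regressors (an ill-conditioned covariance slows LMS down).
    mix = rng.normal(size=(N, N)) @ np.diag(np.linspace(0.1, 1.0, N))
    R = rng.normal(size=(T, N)) @ mix
    f = R @ rng.normal(size=N)                        # target readout

    err = lambda w: float(np.mean((R @ w - f) ** 2) / np.mean(f ** 2))

    # LMS: plain per-sample gradient descent on the readout weights.
    w = np.zeros(N)
    lr = 0.5 / np.max(np.sum(R ** 2, axis=1))         # conservative stable step size
    passes = 0
    while err(w) > 1e-2 and passes < 2000:
        for r, y in zip(R, f):
            w += lr * (y - w @ r) * r
        passes += 1
    print("LMS passes through the data to reach 1% error:", passes)

    # FORCE-style recursive least squares: one pass is usually enough.
    w, P = np.zeros(N), np.eye(N)
    for r, y in zip(R, f):
        k = P @ r / (1.0 + r @ P @ r)
        w += (y - w @ r) * k
        P -= np.outer(k, r @ P)
    print("RLS relative error after a single pass:", err(w))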

@article{e915cd903c3e436c8a0bb3a6a09be7b1,
title = "One step back, two steps forward: Interference and learning in recurrent neural networks",
abstract = "Artificial neural networks, trained to perform cognitive tasks, have recently been used as models for neural recordings from animals performing these tasks. While some progress has been made in performing such comparisons, the evolution of network dynamics throughout learning remains unexplored. This is paralleled by an experimental focus on recording from trained animals, with few studies following neural activity throughout training. In this work, we address this gap in the realm of artificial networks by analyzing networks that are trained to perform memory and pattern generation tasks. The functional aspect of these tasks corresponds to dynamical objects in the fully trained network—a line attractor or a set of limit cycles for the two respective tasks. We use these dynamical objects as anchors to study the effect of learning on their emergence. We find that the sequential nature of learning—one trial at a time—has major consequences for the learning trajectory and its final outcome. Specifically, we show that least mean squares (LMS), a simple gradient descent suggested as a biologically plausible version of the FORCE algorithm, is constantly obstructed by forgetting, which is manifested as the destruction of dynamical objects from previous trials. The degree of interference is determined by the correlation between different trials. We show which specific ingredients of FORCE avoid this phenomenon. Overall, this difference results in convergence that is orders of magnitude slower for LMS. Learning implies accumulating information across multiple trials to form the overall concept of the task. Our results show that interference between trials can greatly affect learning in a learning-rule-dependent manner. These insights can help design experimental protocols that minimize such interference, and possibly infer underlying learning rules by observing behavior and neural activity throughout learning.",
author = "Chen Beer and Omri Barak",
note = "Publisher Copyright: {\textcopyright} 2019 Massachusetts Institute of Technology.",
year = "2019",
month = oct,
day = "1",
doi = "10.1162/neco_a_01222",
language = "אנגלית",
volume = "31",
pages = "1985--2003",
journal = "Neural Computation",
issn = "0899-7667",
publisher = "MIT Press Journals",
number = "10",

}

Understanding and controlling memory in recurrent neural networks

Haviv D, Rivkind A, Barak O. Understanding and controlling memory in recurrent neural networks. In 36th International Conference on Machine Learning, ICML 2019. International Machine Learning Society (IMLS). 2019. p. 4733-4741. (36th International Conference on Machine Learning, ICML 2019).
 

To be effective in sequential data processing, Recurrent Neural Networks (RNNs) are required to keep track of past events by creating memories. While the relation between memories and the network's hidden state dynamics was established over the last decade, previous works in this direction were of a predominantly descriptive nature focusing mainly on locating the dynamical objects of interest. In particular, it remained unclear how dynamical observables affect the performance, how they form and whether they can be manipulated. Here, we utilize different training protocols, datasets and architectures to obtain a range of networks solving a delayed classification task with similar performance, alongside substantial differences in their ability to extrapolate for longer delays. We analyze the dynamics of the network's hidden state, and uncover the reasons for this difference. Each memory is found to be associated with a nearly steady state of the dynamics which we refer to as a 'slow point'. Slow point speeds predict extrapolation performance across all datasets, protocols and architectures tested. Furthermore, by tracking the formation of the slow points we are able to understand the origin of differences between training protocols. Finally, we propose a novel regularization technique that is based on the relation between hidden state speeds and memory longevity. Our technique manipulates these speeds, thereby leading to a dramatic improvement in memory robustness over time, and could pave the way for a new class of regularization methods.
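
A hedged sketch of the kind of speed-based regularizer described above: penalizing the hidden-state speed over a delay so that memories sit near slow points of the dynamics. The architecture, placeholder task loss and weighting below are assumptions for illustration, not the paper's exact technique:

    import torch

    # Penalize hidden-state speed during the delay period.
    def speed_regularized_loss(task_loss, hidden_states, weight=1e-2):
        # hidden_states: tensor of shape (time, batch, units) collected over the delay
        speeds = (hidden_states[1:] - hidden_states[:-1]).pow(2).sum(dim=-1)
        return task_loss + weight * speeds.mean()

    # Usage with an arbitrary RNN rollout (architecture and loss are placeholders):
    rnn = torch.nn.RNN(input_size=3, hidden_size=32)
    x = torch.zeros(20, 8, 3)                  # 20 silent delay steps, batch of 8
    out, _ = rnn(x)                            # out: (time, batch, hidden)
    loss = speed_regularized_loss(out[-1].pow(2).mean(), out)
    loss.backward()
    print(float(loss))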

@inproceedings{ab041c9b36e0476ca8c939c7f4cf345a,
title = "Understanding and controlling memory in recurrent neural networks",
abstract = "To be effective in sequential data processing, Recurrent Neural Networks (RNNs) are required to keep track of past events by creating memories. While the relation between memories and the network's hidden state dynamics was established over the last decade, previous works in this direction were of a predominantly descriptive nature focusing mainly on locating the dynamical objects of interest. In particular, it remained unclear how dynamical observables affect the performance, how they form and whether they can be manipulated. Here, we utilize different training protocols, datasets and architectures to obtain a range of networks solving a delayed classification task with similar performance, alongside substantial differences in their ability to extrapolate for longer delays. We analyze the dynamics of the network's hidden state, and uncover the reasons for this difference. Each memory is found to be associated with a nearly steady state of the dynamics which we refer to as a 'slow point'. Slow point speeds predict extrapolation performance across all datasets, protocols and architectures tested. Furthermore, by tracking the formation of the slow points we are able to understand the origin of differences between training protocols. Finally, we propose a novel regularization technique that is based on the relation between hidden state speeds and memory longevity. Our technique manipulates these speeds, thereby leading to a dramatic improvement in memory robustness over time, and could pave the way for a new class of regularization methods.",
author = "Doron Haviv and Alexnader Rivkind and Omri Barak",
note = "Publisher Copyright: Copyright 2019 by the author(s).; 36th International Conference on Machine Learning, ICML 2019 ; Conference date: 09-06-2019 Through 15-06-2019",
year = "2019",
language = "אנגלית",
series = "36th International Conference on Machine Learning, ICML 2019",
publisher = "International Machine Learning Society (IMLS)",
pages = "4733--4741",
booktitle = "36th International Conference on Machine Learning, ICML 2019",

}

2017

Recurrent neural networks as versatile tools of neuroscience research

Barak O. Recurrent neural networks as versatile tools of neuroscience research. Current Opinion in Neurobiology. 2017 Oct;46:1-6. https://doi.org/10.1016/j.conb.2017.06.003
 

Recurrent neural networks (RNNs) are a class of computational models that are often used as a tool to explain neurobiological phenomena, considering anatomical, electrophysiological and computational constraints. RNNs can either be designed to implement a certain dynamical principle, or they can be trained by input–output examples. Recently, there has been large progress in utilizing trained RNNs both for computational tasks, and as explanations of neural phenomena. I will review how combining trained RNNs with reverse engineering can provide an alternative framework for modeling in neuroscience, potentially serving as a powerful hypothesis generation tool. Despite the recent progress and potential benefits, there are many fundamental gaps towards a theory of these networks. I will discuss these challenges and possible methods to attack them.

@article{490531d0c69e4c22b743638b459da56c,
title = "Recurrent neural networks as versatile tools of neuroscience research",
abstract = "Recurrent neural networks (RNNs) are a class of computational models that are often used as a tool to explain neurobiological phenomena, considering anatomical, electrophysiological and computational constraints. RNNs can either be designed to implement a certain dynamical principle, or they can be trained by input–output examples. Recently, there has been large progress in utilizing trained RNNs both for computational tasks, and as explanations of neural phenomena. I will review how combining trained RNNs with reverse engineering can provide an alternative framework for modeling in neuroscience, potentially serving as a powerful hypothesis generation tool. Despite the recent progress and potential benefits, there are many fundamental gaps towards a theory of these networks. I will discuss these challenges and possible methods to attack them.",
author = "Omri Barak",
note = "Publisher Copyright: {\textcopyright} 2017 Elsevier Ltd",
year = "2017",
month = oct,
doi = "10.1016/j.conb.2017.06.003",
language = "אנגלית",
volume = "46",
pages = "1--6",
journal = "Current Opinion in Neurobiology",
issn = "0959-4388",
publisher = "Elsevier Ltd.",

}

Grid Cells Encode Local Positional Information

Ismakov R, Barak O, Jeffery K, Derdikman D. Grid Cells Encode Local Positional Information. Current Biology. 2017 Aug 7;27(15):2337-2343.e3. https://doi.org/10.1016/j.cub.2017.06.034
 

The brain has an extraordinary ability to create an internal spatial map of the external world [1]. This map-like representation of environmental surroundings is encoded through specific types of neurons, located within the hippocampus and entorhinal cortex, which exhibit spatially tuned firing patterns [2, 3]. In addition to encoding space, these neurons are believed to be related to contextual information and memory [4–7]. One class of such cells is the grid cells, which are located within the entorhinal cortex, presubiculum, and parasubiculum [3, 8]. Grid cell firing forms a hexagonal array of firing fields, a pattern that is largely thought to reflect the operation of intrinsic self-motion-related computations [9–12]. If this is the case, then fields should be relatively uniform in size, number of spikes, and peak firing rate. However, it has been suggested that this is not in fact the case [3, 13]. The possibility exists that local spatial information also influences grid cells, which—if true—would greatly change the way in which grid cells are thought to contribute to place coding. Accordingly, we asked how discriminable the individual fields of a given grid cell are by looking at the distribution of field firing rates and reproducibility of this distribution across trials. Grid fields were less uniform in intensity than expected, and the pattern of strong and weak fields was spatially stable and recurred across trials. The distribution remained unchanged even after arena rescaling, but not after remapping. This suggests that additional local information is being overlaid onto the global hexagonal pattern of grid cells.

@article{e23b77c9a65b486b9ec3f63a05a50a8f,
title = "Grid Cells Encode Local Positional Information",
abstract = "The brain has an extraordinary ability to create an internal spatial map of the external world [1]. This map-like representation of environmental surroundings is encoded through specific types of neurons, located within the hippocampus and entorhinal cortex, which exhibit spatially tuned firing patterns [2, 3]. In addition to encoding space, these neurons are believed to be related to contextual information and memory [4–7]. One class of such cells is the grid cells, which are located within the entorhinal cortex, presubiculum, and parasubiculum [3, 8]. Grid cell firing forms a hexagonal array of firing fields, a pattern that is largely thought to reflect the operation of intrinsic self-motion-related computations [9–12]. If this is the case, then fields should be relatively uniform in size, number of spikes, and peak firing rate. However, it has been suggested that this is not in fact the case [3, 13]. The possibility exists that local spatial information also influences grid cells, which—if true—would greatly change the way in which grid cells are thought to contribute to place coding. Accordingly, we asked how discriminable the individual fields of a given grid cell are by looking at the distribution of field firing rates and reproducibility of this distribution across trials. Grid fields were less uniform in intensity than expected, and the pattern of strong and weak fields was spatially stable and recurred across trials. The distribution remained unchanged even after arena rescaling, but not after remapping. This suggests that additional local information is being overlaid onto the global hexagonal pattern of grid cells.",
keywords = "cognitive map, entorhinal cortex, grid cells, hippocampus, path integration, place cells, remapping, self-localization, spatial memory, spatial variability",
author = "Revekka Ismakov and Omri Barak and Kate Jeffery and Dori Derdikman",
note = "Publisher Copyright: {\textcopyright} 2017 The Authors",
year = "2017",
month = aug,
day = "7",
doi = "10.1016/j.cub.2017.06.034",
language = "אנגלית",
volume = "27",
pages = "2337--2343.e3",
journal = "Current Biology",
issn = "0960-9822",
publisher = "Cell Press",
number = "15",

}

Local Dynamics in Trained Recurrent Neural Networks

Rivkind A, Barak O. Local Dynamics in Trained Recurrent Neural Networks. Physical Review Letters. 2017 Jun 23;118(25):258101. https://doi.org/10.1103/PhysRevLett.118.258101
 

Learning a task induces connectivity changes in neural circuits, thereby changing their dynamics. To elucidate task-related neural dynamics, we study trained recurrent neural networks. We develop a mean field theory for reservoir computing networks trained to have multiple fixed point attractors. Our main result is that the dynamics of the network's output in the vicinity of attractors is governed by a low-order linear ordinary differential equation. The stability of the resulting equation can be assessed, predicting training success or failure. As a consequence, networks of rectified linear units and of sigmoidal nonlinearities are shown to have diametrically different properties when it comes to learning attractors. Furthermore, a characteristic time constant, which remains finite at the edge of chaos, offers an explanation of the network's output robustness in the presence of variability of the internal neural dynamics. Finally, the proposed theory predicts state-dependent frequency selectivity in the network response.
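
In generic reservoir-computing notation (reservoir state x, connectivity J, feedback weights w_fb, trained readout w_out, nonlinearity phi), the linearization behind the result above can be sketched as follows; this is a simplified schematic, not the paper's mean-field derivation, and lambda stands for the slowest relaxation rate obtained from such an analysis:

    \begin{aligned}
    \dot{\mathbf{x}} &= -\mathbf{x} + J\,\phi(\mathbf{x}) + \mathbf{w}_{\mathrm{fb}}\, z,
    \qquad z = \mathbf{w}_{\mathrm{out}}^{\top}\phi(\mathbf{x}),\\
    \delta\dot{\mathbf{x}} &= \Bigl[-I + \bigl(J + \mathbf{w}_{\mathrm{fb}}\mathbf{w}_{\mathrm{out}}^{\top}\bigr)\,R'\Bigr]\,\delta\mathbf{x}
    \quad\text{near a trained fixed point } \mathbf{x}^{*},
    \qquad R' = \operatorname{diag}\bigl(\phi'(x_i^{*})\bigr),\\
    \delta\dot z &\approx -\lambda\,\delta z
    \quad\text{(the output relaxes according to a low-order linear ODE).}
    \end{aligned}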

@article{8ed76378108c4f929cd92e1507e03d89,
title = "Local Dynamics in Trained Recurrent Neural Networks",
abstract = "Learning a task induces connectivity changes in neural circuits, thereby changing their dynamics. To elucidate task-related neural dynamics, we study trained recurrent neural networks. We develop a mean field theory for reservoir computing networks trained to have multiple fixed point attractors. Our main result is that the dynamics of the network's output in the vicinity of attractors is governed by a low-order linear ordinary differential equation. The stability of the resulting equation can be assessed, predicting training success or failure. As a consequence, networks of rectified linear units and of sigmoidal nonlinearities are shown to have diametrically different properties when it comes to learning attractors. Furthermore, a characteristic time constant, which remains finite at the edge of chaos, offers an explanation of the network's output robustness in the presence of variability of the internal neural dynamics. Finally, the proposed theory predicts state-dependent frequency selectivity in the network response.",
author = "Alexander Rivkind and Omri Barak",
note = "Publisher Copyright: {\textcopyright} 2017 American Physical Society.",
year = "2017",
month = jun,
day = "23",
doi = "10.1103/PhysRevLett.118.258101",
language = "אנגלית",
volume = "118",
journal = "Physical Review Letters",
issn = "0031-9007",
publisher = "American Physical Society",
number = "25",

}

Dynamical timescale explains marginal stability in excitability dynamics

Xu T, Barak O. Dynamical timescale explains marginal stability in excitability dynamics. Journal of Neuroscience. 2017 Apr 26;37(17):4508-4524. https://doi.org/10.1523/JNEUROSCI.2340-16.2017
 

Action potentials, taking place over milliseconds, are the basis of neural computation. However, the dynamics of excitability over longer, behaviorally relevant timescales remain underexplored. A recent experiment used long-term recordings from single neurons to reveal multiple timescale fluctuations in response to constant stimuli, along with more reliable responses to variable stimuli. Here, we demonstrate that this apparent paradox is resolved if neurons operate in a marginally stable dynamic regime, which we reveal using a novel inference method. Excitability in this regime is characterized by large fluctuations while retaining high sensitivity to external varying stimuli. A new model with a dynamic recovery timescale that interacts with excitability captures this dynamic regime and predicts the neurons’ response with high accuracy. The model explains most experimental observations under several stimulus statistics. The compact structure of our model permits further exploration on the network level.

@article{a5a741b817d74c7a80182e341a5126f2,
title = "Dynamical timescale explains marginal stability in excitability dynamics",
abstract = "Action potentials, taking place over milliseconds, are the basis of neural computation. However, the dynamics of excitability over longer, behaviorally relevant timescales remain underexplored. A recent experiment used long-term recordings from single neurons to reveal multiple timescale fluctuations in response to constant stimuli, along with more reliable responses to variable stimuli. Here, we demonstrate that this apparent paradox is resolved if neurons operate in a marginally stable dynamic regime, which we reveal using a novel inference method. Excitability in this regime is characterized by large fluctuations while retaining high sensitivity to external varying stimuli. A new model with a dynamic recovery timescale that interacts with excitability captures this dynamic regime and predicts the neurons{\textquoteright} response with high accuracy. The model explains most experimental observations under several stimulus statistics. The compact structure of our model permits further exploration on the network level.",
keywords = "Adaptation, Excitability, Inference, Model, Multiple timescale, Nonlinear dynamics",
author = "Tie Xu and Omri Barak",
note = "Publisher Copyright: {\textcopyright} 2017 the authors.",
year = "2017",
month = apr,
day = "26",
doi = "10.1523/JNEUROSCI.2340-16.2017",
language = "אנגלית",
volume = "37",
pages = "4508--4524",
journal = "Journal of Neuroscience",
issn = "0270-6474",
publisher = "Society for Neuroscience",
number = "17",

}

A New Approach to Model Pitch Perception Using Sparse Coding

Barzelay O, Furst M, Barak O. A New Approach to Model Pitch Perception Using Sparse Coding. PLoS Computational Biology. 2017 Jan;13(1):e1005338. https://doi.org/10.1371/journal.pcbi.1005338
 

Our acoustical environment abounds with repetitive sounds, some of which are related to pitch perception. It is still unknown how the auditory system, in processing these sounds, relates a physical stimulus and its percept. Since, in mammals, all auditory stimuli are conveyed into the nervous system through the auditory nerve (AN) fibers, a model should explain the perception of pitch as a function of this particular input. However, pitch perception is invariant to certain features of the physical stimulus. For example, a missing fundamental stimulus with resolved or unresolved harmonics, or a low and high-level amplitude stimulus with the same spectral content–these all give rise to the same percept of pitch. In contrast, the AN representations for these different stimuli are not invariant to these effects. In fact, due to saturation and non-linearity of both cochlear and inner hair cells responses, these differences are enhanced by the AN fibers. Thus there is a difficulty in explaining how pitch percept arises from the activity of the AN fibers. We introduce a novel approach for extracting pitch cues from the AN population activity for a given arbitrary stimulus. The method is based on a technique known as sparse coding (SC). It is the representation of pitch cues by a few spatiotemporal atoms (templates) from among a large set of possible ones (a dictionary). The amount of activity of each atom is represented by a non-zero coefficient, analogous to an active neuron. Such a technique has been successfully applied to other modalities, particularly vision. The model is composed of a cochlear model, an SC processing unit, and a harmonic sieve. We show that the model copes with different pitch phenomena: extracting resolved and non-resolved harmonics, missing fundamental pitches, stimuli with both high and low amplitudes, iterated rippled noises, and recorded musical instruments.
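
A one-dimensional caricature of the pipeline described above (sparse coding followed by a harmonic sieve), using a dictionary of sinusoidal templates instead of the paper's spatiotemporal auditory-nerve atoms; the frequencies, thresholds and the simple matching-pursuit coder are illustrative choices, not the paper's implementation:

    import numpy as np

    fs, dur = 8000, 0.05
    t = np.arange(int(fs * dur)) / fs

    # Dictionary of sinusoidal "templates" at candidate frequencies, standing in
    # for the paper's spatiotemporal auditory-nerve atoms.
    freqs = np.arange(100, 2000, 20)
    D = np.array([np.sin(2 * np.pi * f * t) for f in freqs])
    D /= np.linalg.norm(D, axis=1, keepdims=True)

    # A missing-fundamental stimulus: harmonics 3-5 of 200 Hz, no energy at 200 Hz.
    x = sum(np.sin(2 * np.pi * 200 * h * t) for h in (3, 4, 5))

    # Sparse coding by simple matching pursuit: greedily pick a few active atoms.
    coeffs, residual = np.zeros(len(freqs)), x.copy()
    for _ in range(5):
        idx = int(np.argmax(np.abs(D @ residual)))
        c = D[idx] @ residual
        coeffs[idx] += c
        residual -= c * D[idx]
    active = freqs[np.abs(coeffs) > 0.1 * np.abs(coeffs).max()]
    print("active atoms (Hz):", active)

    # Harmonic sieve: the largest candidate fundamental whose integer multiples
    # account for all the active atoms.
    cands = np.arange(50, 500, 10)
    scores = np.array([np.mean([abs(a / f0 - round(a / f0)) for a in active]) for f0 in cands])
    good = cands[scores < 0.02]
    print("estimated pitch (Hz):", good.max() if len(good) else "none")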

@article{897ecfc529444e26bcd89b8e67f45d18,
title = "A New Approach to Model Pitch Perception Using Sparse Coding",
abstract = "Our acoustical environment abounds with repetitive sounds, some of which are related to pitch perception. It is still unknown how the auditory system, in processing these sounds, relates a physical stimulus and its percept. Since, in mammals, all auditory stimuli are conveyed into the nervous system through the auditory nerve (AN) fibers, a model should explain the perception of pitch as a function of this particular input. However, pitch perception is invariant to certain features of the physical stimulus. For example, a missing fundamental stimulus with resolved or unresolved harmonics, or a low and high-level amplitude stimulus with the same spectral content–these all give rise to the same percept of pitch. In contrast, the AN representations for these different stimuli are not invariant to these effects. In fact, due to saturation and non-linearity of both cochlear and inner hair cells responses, these differences are enhanced by the AN fibers. Thus there is a difficulty in explaining how pitch percept arises from the activity of the AN fibers. We introduce a novel approach for extracting pitch cues from the AN population activity for a given arbitrary stimulus. The method is based on a technique known as sparse coding (SC). It is the representation of pitch cues by a few spatiotemporal atoms (templates) from among a large set of possible ones (a dictionary). The amount of activity of each atom is represented by a non-zero coefficient, analogous to an active neuron. Such a technique has been successfully applied to other modalities, particularly vision. The model is composed of a cochlear model, an SC processing unit, and a harmonic sieve. We show that the model copes with different pitch phenomena: extracting resolved and non-resolved harmonics, missing fundamental pitches, stimuli with both high and low amplitudes, iterated rippled noises, and recorded musical instruments.",
author = "Oded Barzelay and Miriam Furst and Omri Barak",
note = "Publisher Copyright: {\textcopyright} 2017 Barzelay et al.",
year = "2017",
month = jan,
doi = "10.1371/journal.pcbi.1005338",
language = "אנגלית",
volume = "13",
journal = "PLoS Computational Biology",
issn = "1553-734X",
publisher = "Public Library of Science",
number = "1",

}

2016

Developmental changes in electrophysiological characteristics of human-induced pluripotent stem cell–derived cardiomyocytes

Ben-Ari M, Schick R, Ben Jehuda R, Reiter I, Binah O, Barak O et al. Developmental changes in electrophysiological characteristics of human-induced pluripotent stem cell–derived cardiomyocytes. Heart Rhythm. 2016 Dec 1;13(12):2379-2387. https://doi.org/10.1016/j.hrthm.2016.08.045
 

Background: Previous studies proposed that throughout differentiation of human induced Pluripotent Stem Cell–derived cardiomyocytes (iPSC-CMs), only 3 types of action potentials (APs) exist: nodal-, atrial-, and ventricular-like. Objectives: To investigate whether there are precisely 3 phenotypes or a continuum exists among them, we tested 2 hypotheses: (1) During culture development a cardiac precursor cell is present that—depending on age—can evolve into the 3 phenotypes. (2) The predominant pattern is early prevalence of a nodal phenotype, transient appearance of an atrial phenotype, evolution to a ventricular phenotype, and persistence of transitional phenotypes. Methods: To test these hypotheses, we (1) performed fluorescence-activated cell sorting analysis of nodal, atrial, and ventricular markers; (2) recorded APs from 280 7- to 95-day-old iPSC-CMs; and (3) analyzed AP characteristics. Results: The major findings were as follows: (1) fluorescence-activated cell sorting analysis of 30- and 60-day-old cultures showed that an iPSC-CMs population shifts from the nodal to the atrial/ventricular phenotype while including significant transitional populations; (2) the AP population did not consist of 3 phenotypes; (3) culture aging was associated with a shift from nodal to ventricular dominance, with a transient (57–70 days) appearance of the atrial phenotype; and (4) beat rate variability was more prominent in nodal than in ventricular cardiomyocytes, while pacemaker current density increased in older cultures. Conclusion: From the onset of development in culture, the iPSC-CMs population includes nodal, atrial, and ventricular APs and a broad spectrum of transitional phenotypes. The most readily distinguishable phenotype is atrial, which appears only transiently yet dominates at 57–70 days of evolution.

@article{d4f57ab827144f3199fc39ce60e32b77,
title = "Developmental changes in electrophysiological characteristics of human-induced pluripotent stem cell–derived cardiomyocytes",
abstract = "Background Previous studies proposed that throughout differentiation of human induced Pluripotent Stem Cell–derived cardiomyocytes (iPSC-CMs), only 3 types of action potentials (APs) exist: nodal-, atrial-, and ventricular-like. Objectives To investigate whether there are precisely 3 phenotypes or a continuum exists among them, we tested 2 hypotheses: (1) During culture development a cardiac precursor cell is present that—depending on age—can evolve into the 3 phenotypes. (2) The predominant pattern is early prevalence of a nodal phenotype, transient appearance of an atrial phenotype, evolution to a ventricular phenotype, and persistence of transitional phenotypes. Methods To test these hypotheses, we (1) performed fluorescence-activated cell sorting analysis of nodal, atrial, and ventricular markers; (2) recorded APs from 280 7- to 95-day-old iPSC-CMs; and (3) analyzed AP characteristics. Results The major findings were as follows: (1) fluorescence-activated cell sorting analysis of 30- and 60-day-old cultures showed that an iPSC-CMs population shifts from the nodal to the atrial/ventricular phenotype while including significant transitional populations; (2) the AP population did not consist of 3 phenotypes; (3) culture aging was associated with a shift from nodal to ventricular dominance, with a transient (57–70 days) appearance of the atrial phenotype; and (4) beat rate variability was more prominent in nodal than in ventricular cardiomyocytes, while pacemaker current density increased in older cultures. Conclusion From the onset of development in culture, the iPSC-CMs population includes nodal, atrial, and ventricular APs and a broad spectrum of transitional phenotypes. The most readily distinguishable phenotype is atrial, which appears only transiently yet dominates at 57–70 days of evolution.",
keywords = "Action potential, Beat rate variability, Development, iPSC-CMs",
author = "Meital Ben-Ari and Revital Schick and {Ben Jehuda}, Ronen and Irina Reiter and Ofer Binah and Omri Barak and Amir Weissman",
note = "Publisher Copyright: {\textcopyright} 2016 Heart Rhythm Society",
year = "2016",
month = dec,
day = "1",
doi = "10.1016/j.hrthm.2016.08.045",
language = "אנגלית",
volume = "13",
pages = "2379--2387",
journal = "Heart Rhythm",
issn = "1547-5271",
publisher = "Elsevier",
number = "12",

}

2015

Grid cells correlation structure suggests organized feedforward projections into superficial layers of the medial entorhinal cortex

Tocker G, Barak O, Derdikman D. Grid cells correlation structure suggests organized feedforward projections into superficial layers of the medial entorhinal cortex. Hippocampus. 2015 Dec 1;25(12):1599-1613. https://doi.org/10.1002/hipo.22481
 

Navigation requires integration of external and internal inputs to form a representation of location. Part of this integration is considered to be carried out by the grid cells network in the medial entorhinal cortex (MEC). However, the structure of this neural network is unknown. To shed light on this structure, we measured noise correlations between 508 pairs of simultaneous previously recorded grid cells. We differentiated between pure grid and conjunctive cells (pure grid in Layers II, III, and VI vs. conjunctive in Layers III and V-only Layer III was bi-modal), and devised a new method to classify cell pairs as belonging/not-belonging to the same module. We found that pairs from the same module show significantly more correlations than pairs from different modules. The correlations between pure grid cells decreased in strength as their relative spatial phase increased. However, correlations were mostly at 0 time-lag, suggesting that the source of correlations was not only synaptic, but rather resulted mostly from common input. Given our measured correlations, the two functional groups of grid cells (pure vs. conjunctive), and the known disorganized recurrent connections within Layer II, we propose the following model: conjunctive cells in deep layers form an attractor network whose activity is governed by velocity-controlled signals. A second manifold in Layer II receives organized feedforward projections from the deep layers, giving rise to pure grid cells. Numerical simulations indicate that organized projections induce such correlations as we measure in superficial layers. Our results provide new evidence for the functional anatomy of the entorhinal circuit-suggesting that strong phase-organized feedforward projections support grid fields in the superficial layers.
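
The zero-time-lag noise-correlation measure referred to above can be sketched as follows on synthetic data: subtract each cell's trial-averaged (spatially tuned) response and correlate the residuals. The tuning curves, the shared fluctuation and all parameters below are invented for illustration, not taken from the recordings analyzed in the paper:

    import numpy as np

    rng = np.random.default_rng(6)
    n_trials, n_bins = 40, 100

    # Synthetic tuning curves for two cells plus a shared (common-input) fluctuation.
    tuning_a = 5 + 3 * np.sin(np.linspace(0, 2 * np.pi, n_bins))
    tuning_b = 5 + 3 * np.cos(np.linspace(0, 2 * np.pi, n_bins))
    common = rng.normal(0, 1, size=(n_trials, n_bins))
    rate_a = tuning_a + common + rng.normal(0, 1, size=(n_trials, n_bins))
    rate_b = tuning_b + common + rng.normal(0, 1, size=(n_trials, n_bins))

    # Zero-time-lag noise correlation: correlate the residuals left after removing
    # each cell's trial-averaged spatial tuning.
    resid_a = rate_a - rate_a.mean(axis=0)
    resid_b = rate_b - rate_b.mean(axis=0)
    r = np.corrcoef(resid_a.ravel(), resid_b.ravel())[0, 1]
    print("zero-lag noise correlation:", round(r, 3))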

@article{51177cc8adcd45d2a7f44cfaa3ff8b02,
title = "Grid cells correlation structure suggests organized feedforward projections into superficial layers of the medial entorhinal cortex",
abstract = "Navigation requires integration of external and internal inputs to form a representation of location. Part of this integration is considered to be carried out by the grid cells network in the medial entorhinal cortex (MEC). However, the structure of this neural network is unknown. To shed light on this structure, we measured noise correlations between 508 pairs of simultaneous previously recorded grid cells. We differentiated between pure grid and conjunctive cells (pure grid in Layers II, III, and VI vs. conjunctive in Layers III and V-only Layer III was bi-modal), and devised a new method to classify cell pairs as belonging/not-belonging to the same module. We found that pairs from the same module show significantly more correlations than pairs from different modules. The correlations between pure grid cells decreased in strength as their relative spatial phase increased. However, correlations were mostly at 0 time-lag, suggesting that the source of correlations was not only synaptic, but rather resulted mostly from common input. Given our measured correlations, the two functional groups of grid cells (pure vs. conjunctive), and the known disorganized recurrent connections within Layer II, we propose the following model: conjunctive cells in deep layers form an attractor network whose activity is governed by velocity-controlled signals. A second manifold in Layer II receives organized feedforward projections from the deep layers, giving rise to pure grid cells. Numerical simulations indicate that organized projections induce such correlations as we measure in superficial layers. Our results provide new evidence for the functional anatomy of the entorhinal circuit-suggesting that strong phase-organized feedforward projections support grid fields in the superficial layers.",
keywords = "Attractor network models, Conjunctive cells, Grid cell modules, Phase-related correlations, Theta phase-locking",
author = "Gilad Tocker and Omri Barak and Dori Derdikman",
note = "Publisher Copyright: {\textcopyright} 2015 Wiley Periodicals, Inc.",
year = "2015",
month = dec,
day = "1",
doi = "10.1002/hipo.22481",
language = "English",
volume = "25",
pages = "1599--1613",
journal = "Hippocampus",
issn = "1050-9631",
publisher = "Wiley-Liss Inc.",
number = "12",

}
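
A minimal sketch of the zero-time-lag noise-correlation measure described in the abstract above (Tocker, Barak & Derdikman, 2015), written for illustration only: spike counts are binned in time, each cell's position-tuned ("signal") component is removed by subtracting its mean count per position bin, and the residuals of the two cells are correlated. The binning scheme, variable names, and synthetic data are assumptions, not the paper's actual pipeline.

import numpy as np

def noise_correlation(counts_a, counts_b, pos_bins, n_pos=20):
    """Zero-lag noise correlation between two simultaneously recorded cells.

    counts_a, counts_b : spike counts per time bin (hypothetical arrays)
    pos_bins           : discretized position of the animal per time bin
    The position-tuned component is subtracted per position bin, so the
    remaining correlation reflects shared variability (e.g., common input).
    """
    resid_a = counts_a.astype(float).copy()
    resid_b = counts_b.astype(float).copy()
    for p in range(n_pos):
        idx = pos_bins == p
        if idx.any():
            resid_a[idx] -= counts_a[idx].mean()
            resid_b[idx] -= counts_b[idx].mean()
    return np.corrcoef(resid_a, resid_b)[0, 1]

# Toy usage with synthetic data: a shared drive produces a positive noise correlation.
rng = np.random.default_rng(0)
T = 5000
pos = rng.integers(0, 20, size=T)
shared = rng.poisson(1.0, size=T)          # common input drives both cells
a = rng.poisson(0.5, size=T) + shared
b = rng.poisson(0.5, size=T) + shared
print(noise_correlation(a, b, pos))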

Dynamic Control of Response Criterion in Premotor Cortex during Perceptual Detection under Temporal Uncertainty

Carnevale F, deLafuente V, Romo R, Barak O, Parga N. Dynamic Control of Response Criterion in Premotor Cortex during Perceptual Detection under Temporal Uncertainty. Neuron. 2015 May 20;86(4):1067-1077. https://doi.org/10.1016/j.neuron.2015.04.014
 

Under uncertainty, the brain uses previous knowledge to transform sensory inputs into the percepts on which decisions are based. When the uncertainty lies in the timing of sensory evidence, however, the mechanism underlying the use of previously acquired temporal information remains unknown. We study this issue in monkeys performing a detection task with variable stimulation times. We use the neural correlates of false alarms to infer the subject's response criterion and find that it modulates over the course of a trial. Analysis of premotor cortex activity shows that this modulation is represented by the dynamics of population responses. A trained recurrent network model reproduces the experimental findings and demonstrates a neural mechanism to benefit from temporal expectations in perceptual detection. Previous knowledge about the probability of stimulation over time can be intrinsically encoded in the neural population dynamics, allowing a flexible control of the response criterion over time.

@article{fd455e0c95ba4cca98f3afb603cf26ab,
title = "Dynamic Control of Response Criterion in Premotor Cortex during Perceptual Detection under Temporal Uncertainty",
abstract = "Under uncertainty, the brain uses previous knowledge to transform sensory inputs into the percepts on which decisions are based. When the uncertainty lies in the timing of sensory evidence, however, the mechanism underlying the use of previously acquired temporal information remains unknown. We study this issue in monkeys performing a detection task with variable stimulation times. We use the neural correlates of false alarms to infer the subject's response criterion and find that it modulates over the course of a trial. Analysis of premotor cortex activity shows thatthis modulation is represented by the dynamics of population responses. A trained recurrent network model reproduces the experimental findings and demonstrates a neural mechanism to benefit from temporal expectations in perceptual detection. Previous knowledge about the probability of stimulation over time can be intrinsically encoded in the neural population dynamics, allowing a flexible control of the response criterion over time.",
author = "Federico Carnevale and Victor deLafuente and Ranulfo Romo and Omri Barak and N{\'e}stor Parga",
note = "Publisher Copyright: {\textcopyright} 2015 Elsevier Inc..",
year = "2015",
month = may,
day = "20",
doi = "10.1016/j.neuron.2015.04.014",
language = "English",
volume = "86",
pages = "1067--1077",
journal = "Neuron",
issn = "0896-6273",
publisher = "Cell Press",
number = "4",

}

2014

Working models of working memory

Barak O, Tsodyks M. Working models of working memory. Current Opinion in Neurobiology. 2014 Apr;25:20-24. https://doi.org/10.1016/j.conb.2013.10.008
 

Working memory is a system that maintains and manipulates information for several seconds during the planning and execution of many cognitive tasks. Traditionally, it was believed that the neuronal underpinning of working memory is stationary persistent firing of selective neuronal populations. Recent advances introduced new ideas regarding possible mechanisms of working memory, such as short-term synaptic facilitation, precise tuning of recurrent excitation and inhibition, and intrinsic network dynamics. These ideas are motivated by computational considerations and careful analysis of experimental data. Taken together, they may indicate the plethora of different processes underlying working memory in the brain.

@article{e3f3736abc754605bee776d9f30df476,
title = "Working models of working memory",
abstract = "Working memory is a system that maintains and manipulates information for several seconds during the planning and execution of many cognitive tasks. Traditionally, it was believed that the neuronal underpinning of working memory is stationary persistent firing of selective neuronal populations. Recent advances introduced new ideas regarding possible mechanisms of working memory, such as short-term synaptic facilitation, precise tuning of recurrent excitation and inhibition, and intrinsic network dynamics. These ideas are motivated by computational considerations and careful analysis of experimental data. Taken together, they may indicate the plethora of different processes underlying working memory in the brain.",
author = "Omri Barak and Misha Tsodyks",
note = "Funding Information: We thank Ron Meir for helpful comments on the manuscript. MT is supported by the Israeli Science Foundation and Foundation Adelis . ",
year = "2014",
month = apr,
doi = "10.1016/j.conb.2013.10.008",
language = "English",
volume = "25",
pages = "20--24",
journal = "Current Opinion in Neurobiology",
issn = "0959-4388",
publisher = "Elsevier Ltd.",

}

2013

The importance of mixed selectivity in complex cognitive tasks

Rigotti M, Barak O, Warden MR, Wang XJ, Daw ND, Miller EK et al. The importance of mixed selectivity in complex cognitive tasks. Nature. 2013 May 30;497(7451):585-590. https://doi.org/10.1038/nature12160
 

Single-neuron activity in the prefrontal cortex (PFC) is tuned to mixtures of multiple task-related aspects. Such mixed selectivity is highly heterogeneous, seemingly disordered and therefore difficult to interpret. We analysed the neural activity recorded in monkeys during an object sequence memory task to identify a role of mixed selectivity in subserving the cognitive functions ascribed to the PFC. We show that mixed selectivity neurons encode distributed information about all task-relevant aspects. Each aspect can be decoded from the population of neurons even when single-cell selectivity to that aspect is eliminated. Moreover, mixed selectivity offers a significant computational advantage over specialized responses in terms of the repertoire of input-output functions implementable by readout neurons. This advantage originates from the highly diverse nonlinear selectivity to mixtures of task-relevant variables, a signature of high-dimensional neural representations. Crucially, this dimensionality is predictive of animal behaviour as it collapses in error trials. Our findings recommend a shift of focus for future studies from neurons that have easily interpretable response tuning to the widely observed, but rarely analysed, mixed selectivity neurons.

@article{de168c001bc64dcb851241a3bff0bdc6,
title = "The importance of mixed selectivity in complex cognitive tasks",
abstract = "Single-neuron activity in the prefrontal cortex (PFC) is tuned to mixtures of multiple task-related aspects. Such mixed selectivity is highly heterogeneous, seemingly disordered and therefore difficult to interpret. We analysed the neural activity recorded in monkeys during an object sequence memory task to identify a role of mixed selectivity in subserving the cognitive functions ascribed to the PFC. We show that mixed selectivity neurons encode distributed information about all task-relevant aspects. Each aspect can be decoded from the population of neurons even when single-cell selectivity to that aspect is eliminated. Moreover, mixed selectivity offers a significant computational advantage over specialized responses in terms of the repertoire of input-output functions implementable by readout neurons. This advantage originates from the highly diverse nonlinear selectivity to mixtures of task-relevant variables, a signature of high-dimensional neural representations. Crucially, this dimensionality is predictive of animal behaviour as it collapses in error trials. Our findings recommend a shift of focus for future studies from neurons that have easily interpretable response tuning to the widely observed, but rarely analysed, mixed selectivity neurons.",
author = "Mattia Rigotti and Omri Barak and Warden, {Melissa R.} and Wang, {Xiao Jing} and Daw, {Nathaniel D.} and Miller, {Earl K.} and Stefano Fusi",
note = "Funding Information: Acknowledgements We are grateful to L.F. Abbott for comments on the manuscript and for discussions. Work supported by the Gatsby Foundation, the Swartz Foundation and the Kavli Foundation. M.R. is supported by Swiss National Science Foundation grant PBSKP3-133357 and the Janggen-Poehn Foundation; N.D.D. is supported by the McKnight Foundation and theMcDonnell Foundation; E.K.M. is supported byNIMH grant 5-R37-MH087027-04 and The Picower Foundation; M.R.W. from the Brain & Behavior Research Foundation and the NARSAD Young Investigator grant.",
year = "2013",
month = may,
day = "30",
doi = "10.1038/nature12160",
language = "English",
volume = "497",
pages = "585--590",
journal = "Nature",
issn = "0028-0836",
publisher = "Nature Publishing Group",
number = "7451",

}
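
The computational advantage of nonlinear mixed selectivity claimed in the abstract above can be illustrated with a toy example: a linear readout can implement an XOR-like combination of two task variables when it reads from units with random nonlinear mixed selectivity, but not from the two "pure" variables alone. This is a schematic illustration, not the paper's monkey-data analysis; the ReLU nonlinearity, unit count, and least-squares readout are arbitrary choices.

import numpy as np

rng = np.random.default_rng(1)

# Two binary task variables (e.g., object identity and sequence position).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])        # XOR: a nonlinear mixture of the two variables

# Random mixed-selectivity layer: each unit responds to a random mixture of
# both variables, passed through a threshold (ReLU) nonlinearity.
n_units = 50
W = rng.normal(size=(2, n_units))
b = rng.normal(size=n_units)
H = np.maximum(X @ W + b, 0.0)    # responses of the mixed layer

# A linear readout (least squares plus threshold) can now separate the XOR
# labels, which is impossible directly from the two "pure" variables in X.
w, *_ = np.linalg.lstsq(np.c_[H, np.ones(4)], y, rcond=None)
pred = (np.c_[H, np.ones(4)] @ w > 0.5).astype(int)
print(pred, y)   # typically matches: mixing expands the representation's dimensionality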

From fixed points to chaos: Three models of delayed discrimination

Barak O, Sussillo D, Romo R, Tsodyks M, Abbott LF. From fixed points to chaos: Three models of delayed discrimination. Progress in Neurobiology. 2013 Apr;103:214-222. https://doi.org/10.1016/j.pneurobio.2013.02.002
 

Working memory is a crucial component of most cognitive tasks. Its neuronal mechanisms are still unclear despite intensive experimental and theoretical explorations. Most theoretical models of working memory assume both time-invariant neural representations and precise connectivity schemes based on the tuning properties of network neurons. A different, more recent class of models assumes randomly connected neurons that have no tuning to any particular task, and bases task performance purely on adjustment of network readout. Intermediate between these schemes are networks that start out random but are trained by a learning scheme. Experimental studies of a delayed vibrotactile discrimination task indicate that some of the neurons in prefrontal cortex are persistently tuned to the frequency of a remembered stimulus, but the majority exhibit more complex relationships to the stimulus that vary considerably across time. We compare three models, ranging from a highly organized line attractor model to a randomly connected network with chaotic activity, with data recorded during this task. The random network does a surprisingly good job of both performing the task and matching certain aspects of the data. The intermediate model, in which an initially random network is partially trained to perform the working memory task by tuning its recurrent and readout connections, provides a better description, although none of the models matches all features of the data. Our results suggest that prefrontal networks may begin in a random state relative to the task and initially rely on modified readout for task performance. With further training, however, more tuned neurons with less time-varying responses should emerge as the networks become more structured.

@article{7c352865aa244243afcd8e596097e6dd,
title = "From fixed points to chaos: Three models of delayed discrimination",
abstract = "Working memory is a crucial component of most cognitive tasks. Its neuronal mechanisms are still unclear despite intensive experimental and theoretical explorations. Most theoretical models of working memory assume both time-invariant neural representations and precise connectivity schemes based on the tuning properties of network neurons. A different, more recent class of models assumes randomly connected neurons that have no tuning to any particular task, and bases task performance purely on adjustment of network readout. Intermediate between these schemes are networks that start out random but are trained by a learning scheme. Experimental studies of a delayed vibrotactile discrimination task indicate that some of the neurons in prefrontal cortex are persistently tuned to the frequency of a remembered stimulus, but the majority exhibit more complex relationships to the stimulus that vary considerably across time. We compare three models, ranging from a highly organized line attractor model to a randomly connected network with chaotic activity, with data recorded during this task. The random network does a surprisingly good job of both performing the task and matching certain aspects of the data. The intermediate model, in which an initially random network is partially trained to perform the working memory task by tuning its recurrent and readout connections, provides a better description, although none of the models matches all features of the data. Our results suggest that prefrontal networks may begin in a random state relative to the task and initially rely on modified readout for task performance. With further training, however, more tuned neurons with less time-varying responses should emerge as the networks become more structured.",
keywords = "LA, Model, Neural networks, Nonlinear dynamics, PFC, RN, TRAIN, Working memory",
author = "Omri Barak and David Sussillo and Ranulfo Romo and Misha Tsodyks and Abbott, {L. F.}",
note = "Funding Information: O.B. was supported by DARPA grant SyNAPSE HR0011-09-C-0002 . D.S. and L.A. were supported by NIH grant MH093338 . The research of R.R. was partially supported by an international Research Scholars Award from the Howard Hughes Medical Institute and grants from CONACYT and DGAPA-UNAM . M.T. is supported by the Israeli Science Foundation . We also thank the Swartz, Gatsby, Mathers and Kavli Foundations for continued support.",
year = "2013",
month = apr,
doi = "10.1016/j.pneurobio.2013.02.002",
language = "English",
volume = "103",
pages = "214--222",
journal = "Progress in Neurobiology",
issn = "0301-0082",
publisher = "Elsevier Ltd.",

}
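
One end of the model spectrum compared in the abstract above, the randomly connected network with chaotic activity, can be sketched with a standard random rate network in which a gain above one produces ongoing, heterogeneous, time-varying activity; a readout trained on these rates is then the "random network" model class. The network size, gain, and time constants below are illustrative, not the paper's parameters.

import numpy as np

rng = np.random.default_rng(2)

N, g, dt, tau = 200, 1.5, 1e-3, 10e-3      # network size, gain, time step, time constant
J = g * rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))   # random recurrent weights

x = rng.normal(size=N)                      # initial state
rates = []
for _ in range(5000):                        # 5 s of simulated activity
    r = np.tanh(x)
    x = x + dt / tau * (-x + J @ r)          # Euler step of dx/dt = -x + J*tanh(x)
    rates.append(r.copy())
rates = np.array(rates)

# With g > 1 the units show rich, time-varying activity; in this model class a
# linear readout adjusted on these rates carries the task, not tuned single units.
print(rates.std(axis=0)[:5])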

The sparseness of mixed selectivity neurons controls the generalization-discrimination trade-off

Barak O, Rigotti M, Fusi S. The sparseness of mixed selectivity neurons controls the generalization-discrimination trade-off. Journal of Neuroscience. 2013 Feb 27;33(9):3844-3856. https://doi.org/10.1523/JNEUROSCI.2753-12.2013
 

Intelligent behavior requires integrating several sources of information in a meaningful fashion, be it context with stimulus or shape with color and size. This requires the underlying neural mechanism to respond in a different manner to similar inputs (discrimination), while maintaining a consistent response for noisy variations of the same input (generalization). We show that neurons that mix information sources via random connectivity can form an easy-to-read representation of input combinations. Using analytical and numerical tools, we show that the coding level or sparseness of these neurons' activity controls a trade-off between generalization and discrimination, with the optimal level depending on the task at hand. In all realistic situations that we analyzed, the optimal fraction of inputs to which a neuron responds is close to 0.1. Finally, we predict a relation between a measurable property of the neural representation and task performance.

@article{8bec67f7ddab4eb6af012ffbf6675f3d,
title = "The sparseness of mixed selectivity neurons controls the generalization-discrimination trade-off",
abstract = "Intelligent behavior requires integrating several sources of information in a meaningful fashion- be it context with stimulus or shape with color and size. This requires the underlying neural mechanism to respond in a different manner to similar inputs (discrimination), while maintaining a consistent response for noisy variations of the same input (generalization). We show that neurons that mix information sources via random connectivity can form an easy to read representation of input combinations. Using analytical and numerical tools, we show that the coding level or sparseness of these neurons' activity controls a trade-off between generalization and discrimination, with the optimal level depending on the task at hand. In all realistic situations that we analyzed, the optimal fraction of inputs to which a neuron responds is close to 0.1. Finally, we predict a relation between a measurable property of the neural representation and task performance.",
author = "Omri Barak and Mattia Rigotti and Stefano Fusi",
year = "2013",
month = feb,
day = "27",
doi = "10.1523/JNEUROSCI.2753-12.2013",
language = "English",
volume = "33",
pages = "3844--3856",
journal = "Journal of Neuroscience",
issn = "0270-6474",
publisher = "Society for Neuroscience",
number = "9",

}
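
A small sketch of the "coding level" notion from the abstract above: randomly mixed units are thresholded so that each responds to roughly a fraction f of the input patterns, and f (the sparseness) is the quantity that controls the generalization-discrimination trade-off, with an optimum near 0.1. Setting the threshold by a per-unit quantile is my simplification for illustration, not necessarily the paper's procedure.

import numpy as np

rng = np.random.default_rng(3)

n_inputs, n_patterns, n_units = 100, 1000, 500
patterns = rng.normal(size=(n_patterns, n_inputs))        # hypothetical input patterns
W = rng.normal(size=(n_inputs, n_units))                   # random mixing weights
drive = patterns @ W                                        # input current to each mixed unit

f_target = 0.1                                              # desired coding level (~0.1 in the abstract)
theta = np.quantile(drive, 1.0 - f_target, axis=0)          # per-unit activation threshold
responses = (drive > theta).astype(float)                   # binary mixed-selectivity responses

coding_level = responses.mean()                             # fraction of (pattern, unit) activations
print(coding_level)                                         # ~0.1: sparser favors discrimination,
                                                            # denser favors generalization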

Opening the black box: Low-dimensional dynamics in high-dimensional recurrent neural networks

Sussillo D, Barak O. Opening the black box: Low-dimensional dynamics in high-dimensional recurrent neural networks. Neural Computation. 2013;25(3):626-649. https://doi.org/10.1162/NECO_a_00409
 

Recurrent neural networks (RNNs) are useful tools for learning nonlinear relationships between time-varying inputs and outputs with complex temporal dependencies. Recently developed algorithms have been successful at training RNNs to perform a wide variety of tasks, but the resulting networks have been treated as black boxes: their mechanism of operation remains unknown. Here we explore the hypothesis that fixed points, both stable and unstable, and the linearized dynamics around them, can reveal crucial aspects of how RNNs implement their computations. Further, we explore the utility of linearization in areas of phase space that are not true fixed points but merely points of very slow movement. We present a simple optimization technique that is applied to trained RNNs to find the fixed and slow points of their dynamics. Linearization around these slow regions can be used to explore, or reverse-engineer, the behavior of the RNN. We describe the technique, illustrate it using simple examples, and finally showcase it on three high-dimensional RNN examples: a 3-bit flip-flop device, an input-dependent sine wave generator, and a two-point moving average. In all cases, the mechanisms of trained networks could be inferred from the sets of fixed and slow points and the linearized dynamics around them.

@article{4b9ce0e479f94aaf8f14bfc8ad9f1987,
title = "Opening the black box: Low-dimensional dynamics in high-dimensional recurrent neural networks",
abstract = "Recurrent neural networks (RNNs) are useful tools for learning nonlinear relationships between time-varying inputs and outputswith complex temporal dependencies. Recently developed algorithms have been successful at training RNNs to perform a wide variety of tasks, but the resulting networks have been treated as black boxes: their mechanism of operation remains unknown. Here we explore the hypothesis that fixed points, both stable and unstable, and the linearized dynamics around them, can reveal crucial aspects of how RNNs implement their computations. Further, we explore the utility of linearization in areas of phase space that are not true fixed points but merely points of very slow movement. We present a simple optimization technique that is applied to trained RNNs to find the fixed and slow points of their dynamics. Linearization around these slow regions can be used to explore, or reverse-engineer, the behavior of the RNN. We describe the technique, illustrate it using simple examples, and finally showcase it on three highdimensional RNN examples: a 3-bit flip-flop device, an input-dependent sine wave generator, and a two-point moving average. In all cases, the mechanisms of trained networks could be inferred from the sets of fixed and slow points and the linearized dynamics around them.",
author = "David Sussillo and Omri Barak",
year = "2013",
doi = "10.1162/NECO_a_00409",
language = "English",
volume = "25",
pages = "626--649",
journal = "Neural Computation",
issn = "0899-7667",
publisher = "MIT Press Journals",
number = "3",

}
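
The optimization technique summarized in the abstract above can be sketched directly: define a scalar speed q(x) = 0.5*||F(x)||^2, where F(x) is the RNN's velocity (state-update) function, and minimize it from many initial states; minima with q near zero are fixed points, and small nonzero minima are slow points, around which the dynamics can be linearized. The toy network and the choice of L-BFGS below are illustrative assumptions, not the paper's specific setup.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
N = 20
J = rng.normal(scale=1.2 / np.sqrt(N), size=(N, N))   # weights of a small (here untrained) rate RNN

def F(x):
    """Velocity of the dynamics dx/dt = -x + J*tanh(x)."""
    return -x + J @ np.tanh(x)

def q(x):
    """Scalar 'speed': q(x) = 0.5 * ||F(x)||^2; exactly zero at fixed points."""
    return 0.5 * np.dot(F(x), F(x))

# Start the search from several candidate states and minimize q numerically.
candidates = []
for _ in range(20):
    x0 = rng.normal(size=N)
    res = minimize(q, x0, method="L-BFGS-B")
    candidates.append((res.fun, res.x))

fixed_points = [x for val, x in candidates if val < 1e-10]          # q ~ 0: fixed point
slow_points = [x for val, x in candidates if 1e-10 <= val < 1e-4]   # small q: slow point
print(len(fixed_points), len(slow_points))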

2011

A simple derivation of a bound on the perceptron margin using singular value decomposition

Barak O, Rigotti M. A simple derivation of a bound on the perceptron margin using singular value decomposition. Neural Computation. 2011 Aug;23(8):1935-1943. https://doi.org/10.1162/NECO_a_00152
 

The perceptron is a simple supervised algorithm to train a linear classifier that has been analyzed and used extensively. The classifier separates the data into two groups using a decision hyperplane, with the margin between the data and the hyperplane determining the classifier's ability to generalize and its robustness to input noise. Exact results for the maximal size of the separating margin are known for specific input distributions, and bounds exist for arbitrary distributions, but both rely on lengthy statistical mechanics calculations carried out in the limit of infinite input size. Here we present a short analysis of perceptron classification using singular value decomposition. We provide a simple derivation of a lower bound on the margin and an explicit formula for the perceptron weights that converges to the optimal result for large separating margins.

@article{749a6be4d55f495c809aafed6bb49500,
title = "A simple derivation of a bound on the perceptron margin using singular value decomposition",
abstract = "The perceptron is a simple supervised algorithm to train a linear classifier that has been analyzed and used extensively. The classifier separates the data into two groups using a decision hyperplane, with the margin between the data and the hyperplane determining the classifier's ability to generalize and its robustness to input noise. Exact results for the maximal size of the separating margin are known for specific input distributions, and bounds exist for arbitrary distributions, but both rely on lengthy statistical mechanics calculations carried out in the limit of infinite input size. Here we present a short analysis of perceptron classification using singular value decomposition. We provide a simple derivation of a lower bound on the margin and an explicit formula for the perceptron weights that converges to the optimal result for large separating margins.",
author = "Omri Barak and Mattia Rigotti",
year = "2011",
month = aug,
doi = "10.1162/NECO_a_00152",
language = "English",
volume = "23",
pages = "1935--1943",
journal = "Neural Computation",
issn = "0899-7667",
publisher = "MIT Press Journals",
number = "8",

}
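
A small numerical companion to the abstract above: train a classical perceptron on a random classification problem, measure its separating margin, and compare it with an explicit SVD-based weight vector, here taken as the pseudo-inverse solution w = X+y. The pseudo-inverse is one natural closed-form construction computed via SVD and is shown only for comparison; it is not claimed to be the paper's exact formula or bound.

import numpy as np

rng = np.random.default_rng(5)

# Random binary classification problem (hypothetical data).
P, N = 40, 100
X = rng.normal(size=(P, N))
y = rng.choice([-1.0, 1.0], size=P)

def margin(w, X, y):
    """Smallest signed distance of the patterns from the hyperplane w.x = 0."""
    return np.min(y * (X @ w)) / np.linalg.norm(w)

# Classical perceptron learning: update on any misclassified pattern.
w = np.zeros(N)
for _ in range(1000):
    errs = y * (X @ w) <= 0
    if not errs.any():
        break
    i = np.flatnonzero(errs)[0]
    w += y[i] * X[i]

# Explicit SVD-based weights via the pseudo-inverse, w = X^+ y
# (shown here only as a point of comparison).
w_pinv = np.linalg.pinv(X) @ y

print("perceptron margin:", margin(w, X, y))
print("pseudo-inverse margin:", margin(w_pinv, X, y))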

2010

Neuronal population coding of parametric working memory

Barak O, Tsodyks M, Romo R. Neuronal population coding of parametric working memory. Journal of Neuroscience. 2010 Jul 14;30(28):9424-9430. https://doi.org/10.1523/JNEUROSCI.1875-10.2010
 

Comparing two sequentially presented stimuli is a widely used experimental paradigm for studying working memory. The delay activity of many single neurons in the prefrontal cortex (PFC) of monkeys was found to be stimulus-specific; however, the population dynamics of stimulus representation have not been elucidated. We analyzed the population state of a large number of PFC neurons during a somatosensory discrimination task. Using the tuning curves of the neurons, we derived a compact characterization of the population state. Stimulus representation by the population was found to degrade after stimulus termination, and emerge in a different form toward the end of the delay. Specifically, the tuning properties of neurons were found to change during the task. We suggest a mechanism whereby information about the stimulus is contained in activity-dependent synaptic facilitation of recurrent connections.

@article{731b2b1b73e3416499d321bd95e12fe6,
title = "Neuronal population coding of parametric working memory",
abstract = "Comparing two sequentially presented stimuli is a widely used experimental paradigm for studying working memory. The delay activity of many single neurons in the prefrontal cortex (PFC) of monkeys was found to be stimulus-specific, however, population dynamics of stimulus representation has not been elucidated. We analyzed the population state of a large number of PFC neurons during a somato-sensory discrimination task. Using the tuning curves of the neurons, we derived a compact characterization of the population state. Stimulus representation by the population was found to degrade after stimulus termination, and emerge in a different form toward the end of the delay. Specifically, the tuning properties of neurons were found to change during the task. We suggest a mechanism whereby information about the stimulus is contained in activity-dependent synaptic facilitation of recurrent connections.",
author = "Omri Barak and Misha Tsodyks and Ranulfo Romo",
year = "2010",
month = jul,
day = "14",
doi = "10.1523/JNEUROSCI.1875-10.2010",
language = "English",
volume = "30",
pages = "9424--9430",
journal = "Journal of Neuroscience",
issn = "0270-6474",
publisher = "Society for Neuroscience",
number = "28",

}

2008

Synaptic Theory of Working Memory

Mongillo G, Barak O, Tsodyks M. Synaptic Theory of Working Memory. Science. 2008 Mar 14;319(5869):1543-1546. https://doi.org/10.1126/science.1150769
 

It is usually assumed that enhanced spiking activity in the form of persistent reverberation for several seconds is the neural correlate of working memory. Here, we propose that working memory is sustained by calcium-mediated synaptic facilitation in the recurrent connections of neocortical networks. In this account, the presynaptic residual calcium is used as a buffer that is loaded, refreshed, and read out by spiking activity. Because of the long time constants of calcium kinetics, the refresh rate can be low, resulting in a mechanism that is metabolically efficient and robust. The duration and stability of working memory can be regulated by modulating the spontaneous activity in the network.

@article{2074256fd0c94c7eb66ee6a2e54dd053,
title = "SynaptiC Theory of Working Memory",
abstract = "It is usually assumed that enhanced spiking activity in the form of persistent reverberation for several seconds is the neural correlate of working memory. Here, we propose that working memory is sustained by calcium-mediated synaptic facilitation in the recurrent connections of neocortical networks. In this account, the presynaptic residual calcium is used as a buffer that is loaded, refreshed, and read out by spiking activity. Because of the long time constants of calcium kinetics, the refresh rate can be low, resulting in a mechanism that is metabolically efficient and robust. The duration and stability of working memory can be regulated by modulating the spontaneous activity in the network.",
author = "Gianluigi Mongillo and Omri Barak and Misha Tsodyks",
year = "2008",
month = mar,
day = "14",
doi = "10.1126/science.1150769",
language = "English",
volume = "319",
pages = "1543--1546",
journal = "Science",
issn = "0036-8075",
publisher = "American Association for the Advancement of Science",
number = "5869",

}
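
The mechanism in the abstract above can be illustrated with the standard facilitation/depression synaptic variables (u, x): a brief loading burst raises the facilitation variable u, a proxy for presynaptic residual calcium, which then decays slowly and can be refreshed by sparse later spikes. The rate-based formulation and the parameter values below are illustrative, not the paper's exact simulation.

import numpy as np

# Facilitation/depression synaptic variables (rate-based form); parameters are illustrative.
U, tau_f, tau_d, dt = 0.2, 1.5, 0.2, 1e-3     # facilitation ~1.5 s, depression ~0.2 s
T = int(4.0 / dt)

r = np.zeros(T)
r[int(0.5/dt):int(0.7/dt)] = 50.0             # loading burst (50 Hz for 200 ms)
r[int(2.5/dt):int(2.52/dt)] = 50.0            # brief later "refresh" ping

u = np.full(T, U)                              # facilitation (residual-calcium proxy)
x = np.ones(T)                                 # available synaptic resources
for t in range(T - 1):
    du = (U - u[t]) / tau_f + U * (1 - u[t]) * r[t]
    dx = (1 - x[t]) / tau_d - u[t] * x[t] * r[t]
    u[t + 1] = u[t] + dt * du
    x[t + 1] = x[t] + dt * dx

# After the burst, u stays elevated for a second or more even without spiking,
# carrying the memory in a metabolically cheap, activity-silent form.
print(u[int(0.7/dt)], u[int(2.0/dt)], u[int(3.5/dt)])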

Slow oscillations in neural networks with facilitating synapses

Melamed O, Barak O, Silberberg G, Markram H, Tsodyks M. Slow oscillations in neural networks with facilitating synapses. Journal of Computational Neuroscience. 2008;25(2):308-316. https://doi.org/10.1007/s10827-008-0080-z
 

The synchronous oscillatory activity characterizing many neurons in a network is often considered to be a mechanism for representing, binding, conveying, and organizing information. A number of models have been proposed to explain high-frequency oscillations, but the mechanisms that underlie slow oscillations are still unclear. Here, we show by means of analytical solutions and simulations that facilitating excitatory (Ef) synapses onto interneurons in a neural network play a fundamental role, not only in shaping the frequency of slow oscillations, but also in determining the form of the up and down states observed in electrophysiological measurements. Short time constants and strong Ef synapse-connectivity were found to induce rapid alternations between up and down states, whereas long time constants and weak Ef synapse connectivity prolonged the time between up states and increased the up state duration. These results suggest a novel role for facilitating excitatory synapses onto interneurons in controlling the form and frequency of slow oscillations in neuronal circuits.

@article{4b0a2327a84c4080863bf257693734cf,
title = "Slow oscillations in neural networks with facilitating synapses",
abstract = "The synchronous oscillatory activity characterizing many neurons in a network is often considered to be a mechanism for representing, binding, conveying, and organizing information. A number of models have been proposed to explain high-frequency oscillations, but the mechanisms that underlie slow oscillations are still unclear. Here, we show by means of analytical solutions and simulations that facilitating excitatory (Ef) synapses onto interneurons in a neural network play a fundamental role, not only in shaping the frequency of slow oscillations, but also in determining the form of the up and down states observed in electrophysiological measurements. Short time constants and strong Ef synapse-connectivity were found to induce rapid alternations between up and down states, whereas long time constants and weak Ef synapse connectivity prolonged the time between up states and increased the up state duration. These results suggest a novel role for facilitating excitatory synapses onto interneurons in controlling the form and frequency of slow oscillations in neuronal circuits.",
keywords = "Dynamic synapse, Model, Neocortex, Recurrent network, Synchrony, Temporal processing",
author = "Ofer Melamed and Omri Barak and Gilad Silberberg and Henry Markram and Misha Tsodyks",
note = "Funding Information: Acknowledgment We thank the anonymous reviewers for their helpful comments. M.T. is partially supported by Israeli Science Foundation, Irving B. Harris Foundation and Abe & Kathryn Selsky Foundation. G.S. is supported by an HFSP long-term fellowship. O.B. is partially supported by the Azrieli foundation and the Kahn center for system biology.",
year = "2008",
doi = "10.1007/s10827-008-0080-z",
language = "English",
volume = "25",
pages = "308--316",
journal = "Journal of Computational Neuroscience",
issn = "0929-5313",
publisher = "Springer Netherlands",
number = "2",

}

2007

Erratum: Persistent activity in neural networks with dynamic synapses (PLoS Computational Biology 3, 2, DOI: 10.1371/journal.pcbi.0030035)

Barak O, Tsodyks M. Erratum: Persistent activity in neural networks with dynamic synapses (PLoS Computational Biology 3, 2, DOI: 10.1371/journal.pcbi.0030035). PLoS Computational Biology. 2007 May;3(5):945. https://doi.org/10.1371/journal.pcbi.0030104
@article{95a11fb4991745b78b52ea510d1b0aa6,
title = "Erratum: Persistent activity in neural networks with dynamic synapses (PLoS Computational Biology 3, 2, DOI: 10.1371/journal.pcbi.0030035)",
author = "Omri Barak and Misha Tsodyks",
year = "2007",
month = may,
doi = "10.1371/journal.pcbi.0030104",
language = "English",
volume = "3",
pages = "945",
journal = "PLoS Computational Biology",
issn = "1553-734X",
publisher = "Public Library of Science",
number = "5",

}

Persistent activity in neural networks with dynamic synapses

Barak O, Tsodyks M. Persistent activity in neural networks with dynamic synapses. PLoS Computational Biology. 2007 Feb;3(2):323-332. https://doi.org/10.1371/journal.pcbi.0030035
 

Persistent activity states (attractors), observed in several neocortical areas after the removal of a sensory stimulus, are believed to be the neuronal basis of working memory. One of the possible mechanisms that can underlie persistent activity is recurrent excitation mediated by intracortical synaptic connections. A recent experimental study revealed that connections between pyramidal cells in prefrontal cortex exhibit various degrees of synaptic depression and facilitation. Here we analyze the effect of synaptic dynamics on the emergence and persistence of attractor states in interconnected neural networks. We show that different combinations of synaptic depression and facilitation result in qualitatively different network dynamics with respect to the emergence of the attractor states. This analysis raises the possibility that the framework of attractor neural networks can be extended to represent time-dependent stimuli.

@article{e3c82fcc4ece4a939839d214a5d5dfb4,
title = "Persistent activity in neural networks with dynamic synapses",
abstract = "Persistent activity states (attractors), observed in several neocortical areas after the removal of a sensory stimulus, are believed to be the neuronal basis of working memory. One of the possible mechanisms that can underlie persistent activity is recurrent excitation mediated by intracortical synaptic connections. A recent experimental study revealed that connections between pyramidal cells in prefrontal cortex exhibit various degrees of synaptic depression and facilitation. Here we analyze the effect of synaptic dynamics on the emergence and persistence of attractor states in interconnected neural networks. We show that different combinations of synaptic depression and facilitation result in qualitatively different network dynamics with respect to the emergence of the attractor states. This analysis raises the possibility that the framework of attractor neural networks can be extended to represent time-dependent stimuli.",
author = "Omri Barak and Misha Tsodyks",
year = "2007",
month = feb,
doi = "10.1371/journal.pcbi.0030035",
language = "English",
volume = "3",
pages = "323--332",
journal = "PLoS Computational Biology",
issn = "1553-734X",
publisher = "Public Library of Science",
number = "2",

}

Stochastic Emergence of Repeating Cortical Motifs in Spontaneous Membrane Potential Fluctuations In Vivo

Mokeichev A, Okun M, Barak O, Katz Y, Ben-Shahar O, Lampl I. Stochastic Emergence of Repeating Cortical Motifs in Spontaneous Membrane Potential Fluctuations In Vivo. Neuron. 2007 Feb 1;53(3):413-425. https://doi.org/10.1016/j.neuron.2007.01.017
 

It was recently discovered that subthreshold membrane potential fluctuations of cortical neurons can precisely repeat during spontaneous activity, seconds to minutes apart, both in brain slices and in anesthetized animals. These repeats, also called cortical motifs, were suggested to reflect a replay of sequential neuronal firing patterns. We searched for motifs in spontaneous activity, recorded from the rat barrel cortex and from the cat striate cortex of anesthetized animals, and found numerous repeating patterns of high similarity and repetition rates. To test their significance, various statistics were compared between physiological data and three different types of stochastic surrogate data that preserve dynamical characteristics of the recorded data. We found no evidence for the existence of deterministically generated cortical motifs. Rather, the stochastic properties of cortical motifs suggest that they appear by chance, as a result of the constraints imposed by the coarse dynamics of subthreshold ongoing activity.

@article{bb7e9b6c22194c7b85709e8809ef1d56,
title = "Stochastic Emergence of Repeating Cortical Motifs in Spontaneous Membrane Potential Fluctuations In Vivo",
abstract = "It was recently discovered that subthreshold membrane potential fluctuations of cortical neurons can precisely repeat during spontaneous activity, seconds to minutes apart, both in brain slices and in anesthetized animals. These repeats, also called cortical motifs, were suggested to reflect a replay of sequential neuronal firing patterns. We searched for motifs in spontaneous activity, recorded from the rat barrel cortex and from the cat striate cortex of anesthetized animals, and found numerous repeating patterns of high similarity and repetition rates. To test their significance, various statistics were compared between physiological data and three different types of stochastic surrogate data that preserve dynamical characteristics of the recorded data. We found no evidence for the existence of deterministically generated cortical motifs. Rather, the stochastic properties of cortical motifs suggest that they appear by chance, as a result of the constraints imposed by the coarse dynamics of subthreshold ongoing activity.",
keywords = "SYSBIO, SYSNEURO",
author = "Alik Mokeichev and Michael Okun and Omri Barak and Yonatan Katz and Ohad Ben-Shahar and Ilan Lampl",
note = "Funding Information: We thank David Ferster for helpful discussions and for allowing us to use the data presented in Figure 9 which were recorded in his lab by I.L. and were part of the data analyzed in Ikegaya et al. (2004) . We would like to thank Ronny Aloni, Israel Nelken, Yosef Yarom, and Mayer Goldberg for their critical comments on our study and all the members of Lampl's lab for their helpful contribution to this work. We thank Gilad Jacobson for his insightful comments during the preparation of this manuscript. We thank the MOSIX group for providing computational resources on the MOSIX Grid at the Hebrew University. I.L. is an incumbent of the Carl and Frances Korn Career Development Chair in the Life Sciences. This work was supported by grants from The Israel Science Foundation (1037/03), the National Institute for Psychobiology in Israel, by the Henry S. and Anne Reich Research Fund for Mental Health, the Asher and Jeanette Alhadeff Research Award (I.L.), Sir Charles Clore fellowship (M.O.), Toman and Frankel funds of Ben Gurion University of the Negev and the Paul Ivanier Center for Robotics Research (O.B.-S.). ",
year = "2007",
month = feb,
day = "1",
doi = "10.1016/j.neuron.2007.01.017",
language = "English",
volume = "53",
pages = "413--425",
journal = "Neuron",
issn = "0896-6273",
publisher = "Cell Press",
number = "3",

}
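
The significance test in the abstract above compares motif statistics against stochastic surrogates that preserve the coarse dynamics of the recording. One standard surrogate of this kind is phase randomization, which keeps the power spectrum (and hence the autocorrelation) while destroying any deterministic repeats; it is shown below as a representative example, without claiming it is one of the paper's three surrogate types.

import numpy as np

def phase_randomized_surrogate(signal, rng):
    """Surrogate with the same power spectrum (hence the same autocorrelation)
    as the original trace, but with randomized Fourier phases, so any
    deterministic repeating motif is destroyed."""
    spectrum = np.fft.rfft(signal)
    phases = rng.uniform(0, 2 * np.pi, size=spectrum.shape)
    phases[0] = 0.0                          # keep the mean (DC component) unchanged in phase
    surrogate = np.fft.irfft(np.abs(spectrum) * np.exp(1j * phases), n=len(signal))
    return surrogate

rng = np.random.default_rng(6)
vm = np.cumsum(rng.normal(size=20000)) * 0.01     # toy "membrane potential" trace
surr = phase_randomized_surrogate(vm, rng)

# Motif statistics (e.g., counts of highly similar segment pairs) would then be
# compared between the recorded trace and an ensemble of such surrogates.
print(vm.std(), surr.std())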

2006

Recognition by variance: Learning rules for spatiotemporal patterns

Barak O, Tsodyks M. Recognition by variance: Learning rules for spatiotemporal patterns. Neural Computation. 2006 Oct;18(10):2343-2358. https://doi.org/10.1162/neco.2006.18.10.2343
 

Recognizing specific spatiotemporal patterns of activity, which take place at timescales much larger than the synaptic transmission and membrane time constants, is a demand from the nervous system exemplified, for instance, by auditory processing. We consider the total synaptic input that a single readout neuron receives on presentation of spatiotemporal spiking input patterns. Relying on the monotonic relation between the mean and the variance of a neuron's input current and its spiking output, we derive learning rules that increase the variance of the input current evoked by learned patterns relative to that obtained from random background patterns. We demonstrate that the model can successfully recognize a large number of patterns and exhibits a slow deterioration in performance with increasing number of learned patterns. In addition, robustness to time warping of the input patterns is revealed to be an emergent property of the model. Using a leaky integrate-and-fire realization of the readout neuron, we demonstrate that the above results also apply when considering spiking output.

@article{ad05d62a5ac04f1699caf93abf21fa22,
title = "Recognition by variance: Learning rules for spatiotemporal patterns",
abstract = "Recognizing specific spatiotemporal patterns of activity, which take place at timescales much larger than the synaptic transmission and membrane time constants, is a demand from the nervous system exemplified, for instance, by auditory processing. We consider the total synaptic input that a single readout neuron receives on presentation of spatiotemporal spiking input patterns. Relying on the monotonic relation between the mean and the variance of a neuron's input current and its spiking output, we derive learning rules that increase the variance of the input current evoked by learned patterns relative to that obtained from random background patterns. We demonstrate that the model can successfully recognize a large number of patterns and exhibits a slow deterioration in performance with increasing number of learned patterns. In addition, robustness to time warping of the input patterns is revealed to be an emergent property of the model. Using a leaky integrate-and-fire realization of the readout neuron, we demonstrate that the above results also apply when considering spiking output.",
author = "Omri Barak and Misha Tsodyks",
note = "Funding Information: We thank Ofer Melamed, Barak Blumenfeld, Alex Loebel, and Alik Moke-ichev for critical reading of the manuscript. We thank two anonymous reviewers for constructive comments on the previous version of the manuscript. The study is supported by the Israeli Science Foundation and Irving B. Harris Foundation. M. T. is the incumbent to the Gerald and Hedy Oliven Professorial Chair in Brain Research.",
year = "2006",
month = oct,
doi = "10.1162/neco.2006.18.10.2343",
language = "English",
volume = "18",
pages = "2343--2358",
journal = "Neural Computation",
issn = "0899-7667",
publisher = "MIT Press Journals",
number = "10",

}
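
A schematic reconstruction of the idea in the abstract above: the readout's summed input current I(t) = sum_i w_i * s_i(t) should have higher temporal variance for learned spatiotemporal patterns than for random background patterns, so one simple rule in this spirit is gradient ascent on that variance with respect to the weights. The pattern format, normalization, and learning rate below are assumptions for illustration, not the paper's exact rule.

import numpy as np

rng = np.random.default_rng(7)

n_inputs, T = 100, 300
# Hypothetical spatiotemporal pattern: filtered presynaptic activity, inputs x time.
def random_pattern():
    return rng.exponential(scale=1.0, size=(n_inputs, T))

learned = [random_pattern() for _ in range(3)]

def input_variance(w, s):
    """Temporal variance of the summed synaptic current I(t) = w . s(:, t)."""
    return np.var(w @ s)

# Gradient-ascent rule: increase the current variance evoked by learned patterns.
# dVar/dw_i = (2/T) * sum_t (I_t - mean(I)) * (s_i(t) - mean_t(s_i)).
w = rng.normal(scale=0.1, size=n_inputs)
eta = 5e-3
for _ in range(300):
    for s in learned:
        I = w @ s
        grad = 2.0 * (s - s.mean(axis=1, keepdims=True)) @ (I - I.mean()) / T
        w += eta * grad
    w /= np.linalg.norm(w)                    # keep the weight vector bounded

learned_var = np.mean([input_variance(w, s) for s in learned])
background_var = np.mean([input_variance(w, random_pattern()) for _ in range(20)])
print(learned_var, background_var)            # learned patterns should typically evoke higher variance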

Responses of trigeminal ganglion neurons to the radial distance of contact during active vibrissal touch

Szwed M, Bagdasarian K, Blumenfeld B, Barak O, Derdikman D, Ahissar E. Responses of trigeminal ganglion neurons to the radial distance of contact during active vibrissal touch. Journal of Neurophysiology. 2006 Feb;95(2):791-802. https://doi.org/10.1152/jn.00571.2005
 

Rats explore their environment by actively moving their whiskers. Recently, we described how object location in the horizontal (front-back) axis is encoded by first-order neurons in the trigeminal ganglion (TG) by spike timing. Here we show how TG neurons encode object location along the radial coordinate, i.e., from the snout outward. Using extracellular recordings from urethane-anesthetized rats and electrically induced whisking, we found that TG neurons encode radial distance primarily by the number of spikes fired. When an object was positioned closer to the whisker root, all touch-selective neurons recorded fired more spikes. Some of these cells responded exclusively to objects located near the base of whiskers, signaling proximal touch by an identity (labeled-line) code. A number of tonic touch-selective neurons also decreased delays from touch to the first spike and decreased interspike intervals for closer object positions. Information theory analysis revealed that near-certainty discrimination between two objects separated by 30% of the length of whiskers was possible for some single cells. However, encoding reliability was usually lower as a result of large trial-by-trial response variability. Our current findings, together with the identity coding suggested by anatomy for the vertical dimension and the temporal coding of the horizontal dimension, suggest that object location is encoded by separate neuronal variables along the three spatial dimensions: temporal for the horizontal, spatial for the vertical, and spike rate for the radial dimension.

@article{8581cf7930d040edb11a1f264ac26c49,
title = "Responses of trigeminal ganglion neurons to the radial distance of contact during active vibrissal touch",
abstract = "Rats explore their environment by actively moving their whiskers. Recently, we described how object location in the horizontal (front-back) axis is encoded by first-order neurons in the trigeminal ganglion (TG) by spike timing. Here we show how TG neurons encode object location along the radial coordinate, i.e., from the snout outward. Using extracellular recordings from urethane- anesthetized rats and electrically induced whisking, we found that TG neurons encode radial distance primarily by the number of spikes fired. When an object was positioned closer to the whisker root, all touch-selective neurons recorded fired more spikes. Some of these cells responded exclusively to objects located near the base of whiskers, signaling proximal touch by an identity (labeled-line) code. A number of tonic touch-selective neurons also decreased delays from touch to the first spike and decreased interspike intervals for closer object positions. Information theory analysis revealed that near-certainty discrimination between two objects separated by 30% of the length of whiskers was possible for some single cells. However, encoding reliability was usually lower as a result of large trial-by-trial response variability. Our current findings, together with the identity coding suggested by anatomy for the vertical dimension and the temporal coding of the horizontal dimension, suggest that object location is encoded by separate neuronal variables along the three spatial dimensions: temporal for the horizontal, spatial for the vertical, and spike rate for the radial dimension.",
author = "Marcin Szwed and Knarik Bagdasarian and Barak Blumenfeld and Omri Barak and Dori Derdikman and Ehud Ahissar",
year = "2006",
month = feb,
doi = "10.1152/jn.00571.2005",
language = "English",
volume = "95",
pages = "791--802",
journal = "Journal of Neurophysiology",
issn = "0022-3077",
publisher = "American Physiological Society",
number = "2",

}
