Teaching AI the Rules of the Brain

Tufts neuroscientist Michael Halassa on generating data from neurons to inform artificial intelligence

The 2024 Nobel Prizes in physics and chemistry were seen as a sweep for artificial intelligence (AI) tools which, at their conception, were inspired by neuroscience. By imitating the behavior of human brain cells, machine-learning algorithms are accelerating our understanding of basic biology, with technologies such as Google DeepMind’s AlphaFold 3 making it possible to predict the structure of proteins or how they might interact with potential drugs.

As scientists across every field grapple with what AI will mean for their work, physician-scientist Michael Halassa, an associate professor of neuroscience at Tufts University School of Medicine, is focused on how it could transform the study of cognitive processing, mental illness, and psychiatric medicine.

Halassa’s lab has spent the past several years measuring the way brain cells talk to one another as subjects solve complex tasks. Beyond the basic aim of understanding how the brain reasons about the world, his work has a practical application: creating "disease-relevant, brain-based models," measurements that translate brain pathology into forecasting models that predict treatment response.

Halassa says that if appropriate machine-learning architectures—the blueprint for how an AI program evaluates and processes information—are built and trained on neurological data, these tools could model complex diseases like schizophrenia and be used to track a patient's response to treatment. His goal is to use such computations to inspire the next generation of AI models for psychiatry.

Tufts Now: The mission of your lab is to connect neural circuits to cognition. Could you talk about what that means to you and your team?

Michael Halassa: We started as a mouse lab trying to better understand a part of the brain called the thalamus, which is involved in gating sensory information and helping you to decide where to focus your attention. For example, if you're at a crowded party, it's your thalamus that allows you to block out all the music or voices around you to eavesdrop on a conversation happening in another room.

As we dug deeper into our research, a major surprise was the realization that most of the thalamus isn't getting inputs from the senses, but from the cerebral cortex. We later learned that part of its job is to function like the brain's voting system: taking in information, looking for trends, deciding what is reliable, and helping us make the best possible decisions.

The human brain over the course of evolution has developed this hardwired mechanism to look for misinformation, and our lab has been exploring how this breaks down in schizophrenia. Our work is also providing insights into how to build AI architectures that can make sense of conflicting information so they can compute trends and combine input in the most optimal ways.
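
One standard way to formalize that "voting" intuition is inverse-variance weighting, in which each input counts in proportion to its reliability. The short Python sketch below is a generic statistical illustration of this principle, not a model taken from the lab's papers; the function name and the numbers are invented for the example.

```python
import numpy as np

def combine_estimates(means, variances):
    """Fuse noisy estimates of the same quantity by inverse-variance weighting.

    Each source is weighted by 1/variance, so the most reliable 'votes'
    dominate the consensus, echoing the voting-system analogy above.
    """
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = (1.0 / variances) / (1.0 / variances).sum()
    combined_mean = weights @ means
    combined_variance = 1.0 / (1.0 / variances).sum()
    return combined_mean, combined_variance

# Three conflicting reports of the same quantity with different noise levels:
# the low-variance (most reliable) report pulls the consensus toward itself.
print(combine_estimates([0.9, 1.4, 1.1], [0.1, 1.0, 0.25]))
```

Notably, the fused estimate always has lower variance than any single input, which is one reason combining conflicting inputs optimally pays off.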

What are some examples of research projects that could help inform AI models?

By recording the activity of the thalamus and the prefrontal cortex, the part of the brain that drives executive functions such as making complex decisions, while animals solve various tasks, we are generating models of how the brain works that can then be replicated or applied to AI. For example, in 2021, a postdoctoral fellow in my lab, Arghya Mukherjee, published a paper in the journal Nature showing that the mouse thalamus can track conflicting sensory inputs while making decisions, slowing down decision dynamics in the prefrontal cortex in a manner commensurate with the reliability of the incoming information.
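
To see qualitatively how reliability can pace a decision, consider a toy drift-diffusion simulation: a textbook model of evidence accumulation, not the circuit model from the paper. The drift rate scales with input reliability, and all parameters here are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def decision_time(reliability, threshold=1.0, dt=1e-3, noise=1.0):
    """Accumulate noisy evidence until a decision bound is crossed.

    Drift is proportional to reliability, so less reliable input yields
    slower decision dynamics, qualitatively like the slowing described above.
    """
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += reliability * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t

for r in (0.2, 1.0, 5.0):
    times = [decision_time(r) for _ in range(200)]
    print(f"reliability {r}: mean decision time ~ {np.mean(times):.2f} s")
```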

More recently, postdoctoral associate Norman Lam led a study published in Nature involving tree shrews. These animals can arbitrate between errors due to their own perceptual misjudgments and errors that are truly due to environmental shifts, and we found that the thalamus is critical for this process. By keeping track of these different sources of uncertainty (perceptual vs. environmental), a subject can appropriately shift its behavior in a manner that better matches true environmental changes. This is what neuroscientists refer to as a hierarchical decision.

As humans, we face this type of issue constantly; we can quickly infer that a traffic light is broken on a clear day, but it may take longer to reach that conclusion on a foggy day. But these inferences can sometimes lead to misjudgments. For example, if you notice a coworker is not smiling one morning, you may assume they are upset with you when they could just be preoccupied. This "jumping to conclusions" phenomenon is exacerbated in schizophrenia and may be related to disruptions in the circuits outlined above. We are continuing to test these hypotheses through task designs that isolate these processes.
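
The clear-day/foggy-day intuition can be captured in a few lines of Bayesian bookkeeping. The sketch below is a minimal construction of my own, not one of the lab's task models: it tracks a hazard rate for environmental switches and a noise level for perceptual misreads, and shows how the same surprising evidence moves belief quickly when perception is trustworthy and slowly when it is not.

```python
import numpy as np

def update_belief(p_state, observation, perceptual_noise, hazard):
    """One step of Bayesian inference over a hidden binary context.

    p_state: current probability that the context is 'A'.
    perceptual_noise: probability the observation is misread (perceptual error).
    hazard: probability the context itself switched (environmental change).
    """
    # Account for a possible environmental switch since the last step.
    p_prior = p_state * (1 - hazard) + (1 - p_state) * hazard
    # Likelihood of the observation under each context, given sensor noise.
    like_a = (1 - perceptual_noise) if observation == "A" else perceptual_noise
    like_b = perceptual_noise if observation == "A" else (1 - perceptual_noise)
    return like_a * p_prior / (like_a * p_prior + like_b * (1 - p_prior))

# A run of surprising 'B' observations: on a 'clear day' (low noise) belief
# flips quickly; on a 'foggy day' (high noise) the same evidence moves it slowly.
for noise, label in [(0.05, "clear"), (0.35, "foggy")]:
    p = 0.95  # start confident the context is 'A'
    for _ in range(3):
        p = update_belief(p, "B", perceptual_noise=noise, hazard=0.05)
    print(f"{label}: P(context = A) after three 'B' cues = {p:.3f}")
```

In this toy setting the "foggy" observer is behaving rationally: when perceptual uncertainty is high, a run of odd observations is more likely to be a misread than a sign of a real change.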

Where do you see AI networks evolving because of research like yours?

Neural networks are usually good at learning only one thing: when they try to learn a new task, previously learned tasks get overwritten, a problem known as catastrophic forgetting. This has been a challenge to overcome because the architectures that can train AI to learn a language or interpret visual cues are often not interoperable.
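
That overwriting effect is easy to reproduce with nothing more than NumPy. In the sketch below, a single linear classifier is trained on one synthetic task and then on another; both tasks and every parameter are invented purely for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_task(direction, n=500):
    """Synthetic binary task: label = which side of a hyperplane a point is on."""
    X = rng.standard_normal((n, 2))
    y = (X @ direction > 0).astype(float)
    return X, y

def train(w, X, y, lr=0.5, epochs=200):
    """Plain full-batch gradient descent on logistic loss; nothing protects old tasks."""
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return np.mean(((X @ w) > 0) == y)

task_a = make_task(np.array([1.0, 0.0]))   # task A: sign of the first feature
task_b = make_task(np.array([0.0, 1.0]))   # task B: sign of the second feature

w = np.zeros(2)
w = train(w, *task_a)
print("accuracy on task A after learning A:", accuracy(w, *task_a))  # near 1.0
w = train(w, *task_b)
print("accuracy on task A after learning B:", accuracy(w, *task_a))  # degraded
```

Remedies studied in the continual-learning literature, such as replaying old data or penalizing changes to weights that matter for earlier tasks, are attempts to give networks something like the brain's ability to keep multiple skills intact.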

The human brain can multitask, and it does so with a fraction of the energy required for AI. You can literally run a human being on a cup of sugar water while they simultaneously drive a car and talk on the phone. To do the same with AI, you'd need enough electricity to power a village for a year. We think part of the solution to improving AI performance lies in understanding this ability that humans use all the time.

Humans are also better able to make decisions that are hierarchical in nature, partly because we are good at segmenting the environment into perceptual or mnemonic episodes we call "context." Making adaptive responses is reliant on keeping track of the context and knowing exactly when it has shifted.

AI programs are less adept at segmenting the environment in this adaptive manner, making it difficult for them to tell whether errors stem from perceptual misjudgments or from an environmental switch. We are hard at work trying to figure out how animal and human brains do this, and we hope to teach AI to do the same.

What could AI programs do with “disease-relevant models” and where would they come from?

Our work has primarily focused on schizophrenia, which is associated with a breakdown in communication between the thalamus and the cerebral cortex. We think this may reflect an impaired ability of the brain to deliberate and make decisions. We see this in mouse models relevant to the disorder, in which an increase in sensory noise makes the animals less able to change their behavior based on new evidence.

But traditional behavioral research on animals has not been very predictive of what happens in people, as behaviors have been poorly controlled and hard to interpret. What some of us in the field have done is try to put the brain into a specific state that exposes a particular computation. This means you know the exact inputs needed to generate a specific behavioral output, and these outputs can be translated into computational hypotheses in the form of circuit models. We are at the cusp of having the right data to do this in translational research using AI tools.

How do you envision AI models trained on these metrics being used in psychiatry?

First, AI-inspired disease models could help us better understand the underlying mechanisms of complex psychiatric disorders like schizophrenia. By mimicking the neural circuitry we observe in patients, we might be able to identify which neural pathways are most dysfunctional and can therefore be treated with certain medications or noninvasive neurostimulation techniques.

Second, if we can train standard AI tools (simply as data-analysis tools) to recognize the subtle patterns of brain activity and behavior associated with different psychiatric conditions, it could help clinicians make earlier and more accurate diagnoses. For example, as we learn more about schizophrenia, we will likely find that it's not a single disease. We'll need computational descriptions to define what a patient has and the best course of treatment.

Finally, these AI forecasting models could be used to predict and track treatment responses. By monitoring how a patient's brain activity changes over time based on the expected patterns, we might be able to personalize treatment plans more effectively and adjust them in real time based on how someone is responding.