Salk researchers use artificial intelligence to make strides in understanding the brain

From left, Salk Institute professors Terrence Sejnowski and Kay Tye and graduate student Ben Tsuda helped author a paper about the development of a computational model to mimic brain activity.

Researchers at the Salk Institute for Biological Studies in La Jolla have published a paper detailing their advances in artificial intelligence, further tying together the fields of AI and neuroscience.

Terrence Sejnowski, the Francis Crick chair at Salk and the head of its Computational Biology Laboratory, served as the lead author of the paper, published Nov. 24, about his team’s work to develop a computational model of brain activity.

Computational modeling uses computers to simulate and study complex systems using mathematics, physics and computer science. Thousands of computer experiments can identify a handful of laboratory experiments that are most likely to solve the problem being studied, according to the National Institutes of Health.

The motivation for the paper — co-authored by Kay Tye, a professor in Salk’s Systems Neurobiology Laboratory; Ben Tsuda, a Salk graduate student; and Hava Siegelmann of the University of Massachusetts at Amherst — came from knowing that “when someone has a brain injury in the prefrontal cortex, such as a stroke, they have certain deficits that are very subtle,” Sejnowski said.

The goal of the project, he said, “was to use tools that have become available very recently in artificial intelligence to try to understand how the prefrontal cortex of the brain functions.”

The prefrontal cortex, in the front of the brain, is the portion “known to be an important part for cognition, specifically for planning and making decisions. We wanted to be able to understand the neural mechanisms underlying that ability.”

“It points toward a really transformative impact on human mental disorders.”

Terrence Sejnowski, Salk Institute for Biological Studies

The Wisconsin Card Sorting Test is administered to patients with brain injuries, requiring them to switch among different plans or rules for sorting cards, Sejnowski said.

“Normal people have no trouble with this task,” he said, “but people with prefrontal damage inevitably perseverate [repeat the same action]. They’re following one rule, like match the color. When the rule suddenly changes to match the number … normal people would discover the new rule and go on with that. But people with prefrontal damage can’t switch. They end up doing the same thing over and over again.”
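The rule-switching dynamic Sejnowski describes can be simulated in a few lines. The toy sketch below is our illustration, not the paper's model: it scores a sorter that updates its believed rule on error feedback against one that perseverates after the rule silently changes.

```python
# Toy simulation of the Wisconsin Card Sorting Test (illustrative only).
# The sorter must infer the current rule ("color" or "number") from
# right/wrong feedback; the experimenter silently changes the rule midway.

def run_wcst(perseverate, n_trials=40, switch_at=20):
    true_rule, believed = "color", "color"
    n_correct = 0
    for t in range(n_trials):
        if t == switch_at:
            true_rule = "number"          # the rule suddenly changes
        correct = believed == true_rule   # card piles assumed to disambiguate the rules
        if correct:
            n_correct += 1
        elif not perseverate:
            believed = "number" if believed == "color" else "color"
        # A perseverating sorter ignores the error feedback entirely.
    return n_correct / n_trials

print(run_wcst(perseverate=False))  # 0.975: one error at the switch, then adapts
print(run_wcst(perseverate=True))   # 0.5: stuck applying the old rule
```

The healthy sorter misses only the single trial where the rule changes; the perseverating sorter keeps matching on color and fails every trial after the switch.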

In developing a computational model, “we asked ourselves how could we mimic that deficit,” Sejnowski said. “We had to develop a model of the prefrontal cortex that is able to do the Wisconsin Card Sorting Test.”

The scientists looked for a similar impact on the model’s performance after a “lesion … damaging the network in the same way the brain is damaged,” he said.

He said the computational model they built was based on “neural network literature which goes back to the 1980s.” Sejnowski said he was “a pioneer in developing this class of models that are now being used in artificial intelligence today.”

This particular model is called the “mixture of experts,” a combination of circuits “somewhere in your prefrontal cortex … that have mastered specific rules or specific expert performance and tasks,” he said. “The prefrontal cortex has to decide, ‘For this particular task, which expert should I use? Should I use this one in this circuit or should I use that one in that circuit?’ A decision has to be made.”

The decision-making process is called “gating,” he said, and the “gating network” was mimicked in the computational model.
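As a rough illustration of the idea (not the authors' trained network; the expert functions and context scores here are hypothetical), a mixture of experts pairs specialist modules with a gate that scores the task context and decides which expert acts:

```python
import math

# Minimal mixture-of-experts sketch (our illustration, not the network
# from the paper). Each expert has mastered one sorting rule; a gating
# function scores the task context and picks which expert to use.

def expert_color(card, target):
    return card["color"] == target["color"]

def expert_number(card, target):
    return card["number"] == target["number"]

EXPERTS = [expert_color, expert_number]

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def gate(context_scores):
    # In a real gating network these scores come from learned weights;
    # here they are given. Softmax turns them into selection probabilities,
    # and we take the most probable expert (hard gating).
    weights = softmax(context_scores)
    return max(range(len(weights)), key=weights.__getitem__)

card = {"color": "red", "number": 2}
target = {"color": "red", "number": 3}
chosen = gate([2.0, 0.5])             # context favors the color expert
print(EXPERTS[chosen](card, target))  # True: the colors match
```

Soft gating, in which the experts' outputs are blended by the softmax weights instead of one expert being selected outright, is also common.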

What made this different from the way it was done in the ’80s, Sejnowski said, is the transition from the “feed forward neural network,” in which “the input came in and the output came out,” to a “more powerful network — the recurrent neural network.”

The recurrent neural network, Sejnowski said, involves training the network. “They had to learn, starting from many examples, in order to get them to perform at a high level.”
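The difference Sejnowski points to shows up in a minimal forward pass (a generic sketch with random, untrained weights, not the paper's architecture): the recurrent term feeds the hidden state back into the next step, so the response depends on the whole history of inputs rather than only the current one.

```python
import numpy as np

# Minimal recurrent-network forward pass (generic sketch, untrained).
# A feed-forward network maps each input straight to an output; here the
# hidden state h carries a memory of earlier inputs across time steps.

rng = np.random.default_rng(0)
n_in, n_hidden = 4, 8
W_in = rng.normal(scale=0.5, size=(n_hidden, n_in))      # input weights
W_rec = rng.normal(scale=0.5, size=(n_hidden, n_hidden)) # recurrent weights

def run_sequence(xs):
    h = np.zeros(n_hidden)
    for x in xs:
        h = np.tanh(W_in @ x + W_rec @ h)  # new state mixes input and memory
    return h

seq = [rng.normal(size=n_in) for _ in range(5)]
final_state = run_sequence(seq)
print(final_state.shape)  # (8,)
```

Training, which this sketch omits, adjusts W_in and W_rec over many example sequences until the final state supports the right answer.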

These “learning algorithms” are “a recipe,” he said. “Every time you get a reward from getting it right, you improve the performance. It’s a gradual process.”
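That recipe can be sketched as a toy reward-driven update (a generic reinforcement-learning illustration, not the paper's algorithm): each reward nudges the chosen action's value estimate a small step upward, so the right choice gradually wins out.

```python
import random

# Toy reward-driven learning (generic sketch, not the paper's algorithm).
# Two actions; only action 1 is rewarded. Every reward improves the value
# estimate a little, so performance rises gradually over many trials.

def train(n_trials=200, lr=0.1, eps=0.2, seed=0):
    rng = random.Random(seed)
    values = [0.0, 0.0]                         # value estimates per action
    for _ in range(n_trials):
        if rng.random() < eps:                  # explore occasionally
            a = rng.randrange(2)
        else:                                   # else exploit the best so far
            a = 1 if values[1] > values[0] else 0
        reward = 1.0 if a == 1 else 0.0
        values[a] += lr * (reward - values[a])  # small step toward the reward
    return values

vals = train()
print(vals[1] > vals[0])  # True: the rewarded action's value has grown
```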

In the development of the computational model, “we tested everything,” Sejnowski said. “We cut some of the inputs, we added noise within the gating network. When we added noise to the recurrent connections, it began to perseverate,” indicating the gating network is in the prefrontal cortex, as hypothesized.
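Mechanically, an in-silico "lesion" of this kind can be as simple as perturbing the recurrent weight matrix and re-running the network. The sketch below is illustrative only; it shows the manipulation itself, not the perseveration result, by comparing an intact and a noise-damaged copy of the same small recurrent network.

```python
import numpy as np

# Sketch of an in-silico "lesion" (illustrative only). The same network
# is run twice: once intact, once with noise added to its recurrent
# connections, mimicking damage so the two behaviors can be compared.

rng = np.random.default_rng(1)
n_in, n_hidden = 4, 8
W_in = rng.normal(scale=0.5, size=(n_hidden, n_in))
W_rec = rng.normal(scale=0.5, size=(n_hidden, n_hidden))

def run(xs, W):
    h = np.zeros(n_hidden)
    for x in xs:
        h = np.tanh(W_in @ x + W @ h)
    return h

xs = [rng.normal(size=n_in) for _ in range(10)]
noise = rng.normal(scale=0.5, size=W_rec.shape)  # "damage" to the wiring
intact = run(xs, W_rec)
lesioned = run(xs, W_rec + noise)
print(float(np.abs(intact - lesioned).max()))  # how far behavior has shifted
```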

That confirmation means “we need to develop molecular tools that will allow us to go in and manipulate the actual synapses and connections,” Sejnowski said. “It points toward a really transformative impact on human mental disorders.”

“We’re very hopeful,” he added. “We have these tools and techniques that we’re developing, and I think it’s just a matter of time now before we’ll actually be able to help a lot of these people.”

Terrence Sejnowski, pictured discussing his 2018 book “The Deep Learning Revolution,” says the study and intersection of artificial and human intelligence is “a historical moment.”

The project is a further step in “a revolution that has occurred over the last 10 years in artificial intelligence and a revolution in neuroscience,” Sejnowski said. Until a decade ago, “artificial intelligence ignored the brain. These were engineers who just wanted to write a computer program. It was a much more difficult problem than anybody could have imagined.”

“Nature has solved the problem,” he said, noting that scientists looking to see how the brain works shifted the development of artificial intelligence from a computer engineering standpoint to one of neuroscience.

“The secret sauce that really broke open the revolution was the fact that the brain doesn’t have to be programmed. It learns through experience, and that means it learns through these learning algorithms,” Sejnowski said.

“The key is … [not] to write instructions for every possibility,” he said. “That’s impractical. What you do is you give the neural network a thousand examples [to] figure out how to come up with the right answer,” then ask the network to generalize based on information provided.
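A minimal version of "learn from a thousand examples, then generalize" (our toy; the threshold task is hypothetical): instead of programming the rule, the learner infers a decision threshold from labeled examples alone and then applies it to inputs it never saw.

```python
import random

# Tiny learn-then-generalize sketch (illustrative; the task is made up).
# The hidden rule is "x > 5", but the learner is never told this; it
# infers a threshold purely from a thousand labeled examples.

def fit_threshold(examples):
    pos = [x for x, label in examples if label]
    neg = [x for x, label in examples if not label]
    return (min(pos) + max(neg)) / 2   # midpoint between the two classes

rng = random.Random(0)
examples = [(x, x > 5.0) for x in (rng.uniform(0, 10) for _ in range(1000))]
thr = fit_threshold(examples)
print(round(thr, 2))         # close to 5.0, learned rather than written in
print(7.3 > thr, 2.1 > thr)  # generalizes to unseen inputs: True False
```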

This approach to artificial intelligence has worked for many areas of the field, including object recognition, language translation and speech recognition, he said. “We have networks that can make decisions, play games at world champion levels. This is basically a transformation.”

Sejnowski goes into further discussion of the intersection of artificial and human intelligence in his 2018 book “The Deep Learning Revolution.”

That intersection is “a historical moment,” he said. “From now on, we are going to be working together; these two fields are suddenly coalescing. As we learn more about the brain, that can inspire the next generation of AI, and as the tools get better and better for analyzing these networks, that’s going to illuminate how the brain works.” ◆