Category Archives: Neuroscience

Brain Areas Take Micro-Naps While the Rest Stays Awake – Neuroscience News

Summary: New research shows sleep can be detected by brain activity patterns just milliseconds long. This study found small brain regions can momentarily flicker awake or asleep, challenging traditional views on sleep and wake states.

Using advanced neural network analysis, researchers uncovered high-frequency patterns that define sleep. These findings could help study neurodevelopmental and neurodegenerative diseases linked to sleep disturbances.

Key Facts:

Source: UC Santa Cruz

Sleep and wake: they're totally distinct states of being that define the boundaries of our daily lives. For years, scientists have measured the difference between these instinctual brain processes by observing brain waves, with sleep characteristically defined by slow, long-lasting waves, measured in tenths of seconds, that travel across the whole organ.

For the first time, scientists have found that sleep can be detected by patterns of neuronal activity just milliseconds long, mere thousandths of a second, revealing a new way to study and understand the basic brain wave patterns that govern consciousness.

They also show that small regions of the brain can momentarily flicker awake while the rest of the brain remains asleep, and vice versa from wake to sleep.

These findings, described in a new study published in the journal Nature Neuroscience, are from a collaboration between the laboratories of Assistant Professor of Biology Keith Hengen at Washington University in St. Louis and Distinguished Professor of Biomolecular Engineering David Haussler at UC Santa Cruz. The research was carried out by Ph.D. students David Parks (UCSC) and Aidan Schneider (WashU).

Over four years of work, Parks and Schneider trained a neural network to study the patterns within massive amounts of brain wave data, uncovering patterns that occur at extremely high frequencies that have never been described before and challenge foundational, long-held conceptions of the neurological basis of sleep and wake.

"With powerful tools and new computational methods, there's so much to be gained by challenging our most basic assumptions and revisiting the question of what is a state?" Hengen said.

"Sleep or wake is the single greatest determinant of your behavior, and then everything else falls out from there. So if we don't understand what sleep and wake actually are, it seems like we've missed the boat."

"It was surprising to us as scientists to find that different parts of our brains actually take little naps when the rest of the brain is awake, although many people may have already suspected this in their spouse, so perhaps a lack of male-female bias is what is surprising," Haussler quipped.

Understanding sleep

Neuroscientists study the brain via recordings of the electrical signals of brain activity, known as electrophysiology data, observing voltage waves as they crest and fall at different paces. Mixed into these waves are the spike patterns of individual neurons.

The researchers worked with data from mice at the Hengen Lab in St. Louis. The freely-behaving animals were equipped with a very lightweight headset that recorded brain activity from 10 different brain regions for months at a time, tracking voltage from small groups of neurons with microsecond precision.

This much input created petabytes of data; a petabyte is one million times larger than a gigabyte. David Parks led the effort to feed this raw data into an artificial neural network, which can find highly complex patterns, to differentiate sleep and wake data and find patterns that human observation may have missed.

A collaboration with the shared academic compute infrastructure located at UC San Diego enabled the team to work with this much data, which was on the scale of what large companies like Google or Facebook might use.

Knowing that sleep is traditionally defined by slow-moving waves, Parks began to feed smaller and smaller chunks of data into the neural network and asked it to predict if the brain was asleep or awake.

They found that the model could differentiate between sleep and wake from just milliseconds of brain activity data. This was shocking to the research team: it showed that the model couldn't have been relying on the slow-moving waves to learn the difference between sleep and wake.

Just as listening to a thousandth of a second of a song couldn't tell you if it had a slow rhythm, it would be impossible for the model to learn a rhythm that occurs over several seconds by just looking at random, isolated milliseconds of information.
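
To make this concrete, here is a minimal sketch of the kind of analysis described: train a simple classifier on very short windows of high-frequency voltage data and test whether it can distinguish sleep from wake on held-out windows. Everything here, the sampling rate, the toy signal generator and the choice of logistic regression, is an illustrative assumption, not the study's actual pipeline.

```python
# Minimal sketch: can a classifier tell sleep from wake using only
# 1-millisecond windows of "voltage"? Synthetic data stands in for the
# real recordings; all parameters below are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
fs = 25_000                    # hypothetical sampling rate (Hz)
win = int(0.001 * fs)          # a 1-ms window = 25 samples

def toy_windows(n, asleep):
    # Give "sleep" windows a distinctive fast texture (~2 kHz component).
    x = rng.standard_normal((n, win))
    if asleep:
        x += 0.3 * np.sin(np.linspace(0, 4 * np.pi, win))
    return x

X = np.vstack([toy_windows(2000, True), toy_windows(2000, False)])
y = np.array([1] * 2000 + [0] * 2000)          # 1 = sleep, 0 = wake

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```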

"We're seeing information at a level of detail that's unprecedented," Haussler said. "The previous feeling was that nothing would be found there, that all the relevant information was in the slower frequency waves.

"This paper says, if you ignore the conventional measurements, and you just look at the details of the high-frequency measurement over just a thousandth of a second, there is enough there to tell if the tissue is asleep or not. This tells us that there is something going on at a very fast scale. That's a new hint to what might be going on in sleep."

Hengen, for his part, was convinced that Parks and Schneider had missed something, as their results were so contradictory to bedrock concepts drilled into him over many years of neuroscience education. He asked Parks to produce more and more evidence that this phenomenon could be real.

"This challenged me to ask myself to what extent are my beliefs based on evidence, and what evidence would I need to see to overturn those beliefs?" Hengen said.

"It really did feel like a game of cat and mouse, because I'd ask David [Parks] over and over to produce more evidence and prove things to me, and he'd come back and say 'check this out!' It was a really interesting process as a scientist to have my students tear down these towers brick by brick, and for me to have to be okay with that."

Local patterns

Because an artificial neural network is fundamentally a black box and does not report back on what it learns from, Parks began stripping away layers of temporal and spatial information to try to understand what patterns the model could be learning from.

Eventually, they got down to the point where they were looking at chunks of brain data just a millisecond long and at the highest frequencies of brain voltage fluctuations.

"We'd taken out all the information that neuroscience has used to understand, define, and analyze sleep for the last century, and we asked: can the model still learn under these conditions?" Parks said. "This allowed us to look into signals we haven't understood before."
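
A sketch of that stripping-away logic, self-contained but under the same toy assumptions as the earlier sketch: progressively high-pass filter away the slow oscillations and shrink the window, retrain, and ask whether classification survives. The cutoffs and window lengths below are illustrative, not the study's.

```python
# Sketch: remove slow-wave information (high-pass filter), shrink the
# window, retrain, and check whether sleep/wake is still decodable.
# All data and parameters are toy stand-ins for the real analysis.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
fs = 25_000                          # hypothetical sampling rate (Hz)
win = int(0.010 * fs)                # start from 10-ms windows

def toy_windows(n, asleep):
    x = rng.standard_normal((n, win))
    if asleep:                       # fast (~2 kHz) texture marks "sleep"
        x += 0.3 * np.sin(np.linspace(0, 40 * np.pi, win))
    return x

X = np.vstack([toy_windows(1000, True), toy_windows(1000, False)])
y = np.array([1] * 1000 + [0] * 1000)

def highpass(x, cutoff_hz):
    b, a = butter(4, cutoff_hz / (fs / 2), btype="high")
    return filtfilt(b, a, x, axis=-1)

for cutoff in (10, 100, 1000):       # Hz: strip ever more slow content
    for win_ms in (10, 2, 1):        # shrink toward a single millisecond
        Xf = highpass(X, cutoff)[:, : int(win_ms * fs / 1000)]
        acc = cross_val_score(LogisticRegression(max_iter=1000), Xf, y, cv=5).mean()
        print(f"high-pass {cutoff:>4} Hz, {win_ms:>2} ms: acc = {acc:.2f}")
```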

By looking at these data, they were able to determine that the hyper-fast pattern of activity between just a few neurons was the fundamental element of sleep that the model was detecting. Crucially, such patterns cannot be explained by the traditional, slow and widespread waves.

The researchers hypothesize that the slow-moving waves may act to coordinate the fast, local patterns of activity, but they ultimately reached the conclusion that the fast patterns are much closer to the true essence of sleep.

If the slow-moving waves traditionally used to define sleep are compared to thousands of people in a baseball stadium doing the wave, then these fast-moving patterns are the conversations between just a few people deciding to participate in the wave. Those conversations are essential for the overall larger wave to take place, and they are more directly related to the mood of the stadium; the wave is a secondary result of that.

Observing flickers

In further studying the hyperlocal patterns of activity, the researchers began to notice another surprising phenomenon.

As they observed the model predicting sleep or wake, they noticed what looked at first like errors, in which for a split second the model would detect wake in one region of the brain while the rest of the brain remained asleep. They saw the same thing in wake states: for a split second, one region would fall asleep while the rest of the regions were awake. They call these instances flickers.
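
In code terms, flickers are easy to express: given per-region sleep/wake labels over time, flag the moments when one region disagrees with the majority of the brain. A minimal sketch with placeholder predictions follows; the study's actual detection criteria are more involved.

```python
# Sketch: flag "flickers", moments when one region's predicted state
# disagrees with the brain-wide majority. Predictions are random
# placeholders for the output of a trained classifier.
import numpy as np

rng = np.random.default_rng(2)
n_regions, n_times = 10, 5000
base = rng.integers(0, 2, n_times)                 # 1 = sleep, 0 = wake
states = np.tile(base, (n_regions, 1))
states ^= rng.random((n_regions, n_times)) < 0.01  # rare local flips

global_state = (states.mean(axis=0) > 0.5).astype(int)   # majority vote
flickers = np.argwhere(states != global_state)           # (region, time)
print(f"{len(flickers)} candidate flickers to inspect")
```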

"We could look at the individual time points when these neurons fired, and it was pretty clear that [the neurons] were transitioning to a different state," Schneider said. "In some cases, these flickers might be constrained to the area of just an individual brain region, maybe even smaller than that."

This compelled the researchers to explore what flickers could mean about the function of sleep, and how they affect behavior during sleep and wake.

"There's a natural hypothesis there; let's say a small part of your brain slips into sleep while you're awake: does that mean your behavior suddenly looks like you're asleep? We started to see that that was often the case," Schneider said.

In observing the behavior of mice, the researchers saw that when a brain region would flicker to sleep while the rest of the brain was awake, the mouse would pause for a second, almost like it had zoned out. A flicker during sleep (one brain region wakes up) was reflected by an animal twitching in its sleep.

Flickers are particularly surprising because they don't follow established rules dictating the strict cycle of the brain moving sequentially from wake to non-REM sleep to REM sleep.

"We are seeing wake-to-REM flickers, REM-to-non-REM flickers; we see all these possible combinations, and they break the rules that you would expect based on a hundred years of literature," Hengen said.

"I think they reveal the separation between the macro-state of sleep and wake at the level of the whole animal, and the fundamental unit of state in the brain: the fast and local patterns."

Impact

Gaining a deeper understanding of the patterns that occur at high-frequencies and the flickers between wake and sleep could help researchers better study neurodevelopmental and neurodegenerative diseases, which are both associated with sleep dysregulation.

Both Haussler's and Hengen's lab groups are interested in understanding this connection further, with Haussler interested in further studying these phenomena in cerebral organoid models: bits of brain tissue grown on a laboratory bench.

"This gives us potentially a very, very sharp scalpel with which to cut into these questions of diseases and disorders," Hengen said. "The more we understand fundamentally about what sleep and wake are, the more we can address pertinent clinical and disease-related problems."

On a foundational level, this work helps push forward our understanding of the many layers of complexity of the brain as the organ that dictates behavior, emotion, and much more.

Author: Emily Cerf Source: UC Santa Cruz Contact: Emily Cerf UC Santa Cruz Image: The image is credited to Neuroscience News

Original Research: Closed access. A nonoscillatory, millisecond-scale embedding of brain state provides insight into behavior by David Haussler et al. Nature Neuroscience

Abstract

A nonoscillatory, millisecond-scale embedding of brain state provides insight into behavior

The most robust and reliable signatures of brain states are enriched in rhythms between 0.1 and 20 Hz. Here we address the possibility that the fundamental unit of brain state could be at the scale of milliseconds and micrometers.

By analyzing high-resolution neural activity recorded in ten mouse brain regions over 24 h, we reveal that brain states are reliably identifiable (embedded) in fast, nonoscillatory activity.

Sleep and wake states could be classified from 10⁰ to 10¹ ms of neuronal activity sampled from 100 μm of brain tissue. In contrast to canonical rhythms, this embedding persists above 1,000 Hz.

This high-frequency embedding is robust to substates, sharp-wave ripples and cortical on/off states. Individual regions intermittently switched states independently of the rest of the brain, and such brief state discontinuities coincided with brief behavioral discontinuities.

Our results suggest that the fundamental unit of state in the brain is consistent with the spatial and temporal scale of neuronal computation.

See original here:
Brain Areas Take Micro-Naps While the Rest Stays Awake - Neuroscience News

Persistent protein pairing enables memories to last – The Transmitter: Neuroscience News and Perspectives

One question long plagued memory researcher André Fenton: How can memories last for years when a protein essential to maintaining them, the memory protein kinase Mzeta (PKMzeta), lasts for just days?

The answer, Fenton now says, may lie in PKMzeta's interaction with another protein, called postsynaptic kidney and brain expressed adaptor protein (KIBRA). Complexes of the two molecules maintain memories in mice for at least one month, according to a new study co-led by Fenton, professor of neural science at New York University.

The bond between the two proteins protects each of them, Fenton says, from normal degradation in the cell.

KIBRA preferentially gloms onto potentiated synapses, the study shows. And it may help PKMzeta stick there, too, where the kinase acts as a molecular switch to help memories persist, Fenton says.

"As Theseus' ship was sustained for generations by continually replacing worn planks with new timbers, long-term memory can be maintained by continual exchange of potentiating molecules at activated synapses," Fenton and his colleagues write in their paper, which was published last month in Science Advances.

Before this study, the PKMzeta mystery had two missing puzzle pieces, says Justin O'Hare, assistant professor of pharmacology at the University of Colorado Denver, who was not involved in the study.

One was how PKMzeta identifies potentiated synapses, part of the cellular mechanism underlying memory formation. The second was how memories persist despite the short lifetime of each PKMzeta molecule. "This study essentially proposes KIBRA as a solution to both of those, and the experiments themselves are pretty convincing and thorough. They do everything multiple ways."

The controversy, Fenton says, forced his team to look for another molecule that might be involved in long-term potentiation. They focused on KIBRA because the scaffolding protein is found in neurons and has been shown to interact with similar kinases in the sea slug.

In a common benchtop experiment of memory persistence, electrical stimulation of the CA3 region in mouse hippocampal slices induced complexes of KIBRA and PKMzeta to form in the synapses of the downstream CA1 stratum radiatum region, Fentons team found, confirming their suspicions.

"Not only did [KIBRA and PKMzeta] interact, but they interacted in the right places for storing a memory or maintaining [long-term potentiation]," Fenton says.

The excitatory postsynaptic potentials, a proxy for long-term potentiation, across CA1 neurons in the slices remained high three hours after stimulation, the researchers found, but then dipped back down to baseline after treatment with two molecules, zeta-stat and K-ZAP, that block the interaction between KIBRA and PKMzeta.

The two inhibitors also caused wildtype mice to forget their training in two different foot-shock experiments when administered either three days or one month (a time frame longer than the kinase's typical turnover) after the animals had learned the task.

This result suggests that the KIBRA-PKMzeta complex is crucial for long-term potentiation and for memory maintenance, Fenton says.

That ruled out the possibility that other factors, such as off-target effects of the two inhibitory molecules, caused the depotentiation or memory erasure in the mice. But it raised another question, Fenton says: "So if PKMzeta is so important, and you delete it and you have normal memory and you have normal [long-term potentiation], like, what gives?"

Another kinase, PKCiota/lambda, may step in and bind to KIBRA when PKMzeta is not around, Fenton says. Past work by Fenton and his colleagues has shown that PKCiota/lambda binds to KIBRA at a 10-fold lower rate than PKMzeta does.

This weaker interaction might explain why PKMzeta-null mice did maintain memories, but the memory is not as good, Fenton says. For example, in one experiment type, the PKMzeta-null mice re-enter an area where they previously received a mild foot shock more quickly than do their wildtype peers that did not receive the inhibitory molecules, but more slowly than the wildtype mice that received inhibitors, the study showed.

This result answers a question about another inhibitory molecule, the zeta inhibitor protein. ZIP, a 2020 study showed, interrupts long-term potentiation in mice that lack PKMzeta, indicating that memory relies on a completely different mechanism of action than PKMzeta, says Rami Yaka, professor of psychopharmacology at the Hebrew University of Jerusalem, who led the 2020 work but was not involved in the current study.

But ZIP is known to broadly target both PKMzeta and PKCiota/lambda, Fenton says. The inhibitors in the current study were specific to PKMzeta and so did not affect PKCiota/lambda.

"There needed to be an explanation for why knocking out PKMzeta allowed memories to persist or [long-term potentiation] to persist," Fenton says.

How KIBRA becomes primed to capture PKMzeta and how KIBRA is attracted to the potentiated synapses remain open questions, says study investigator Panayiotis Tsokas, assistant professor in anesthesiology, physiology and pharmacology at SUNY Downstate Health Sciences University.

The answer might lie in calcium signaling in NMDA channels, which Tsokas says the team is exploring next.

Excerpt from:
Persistent protein pairing enables memories to last - The Transmitter: Neuroscience News and Perspectives

AI Enhances Story Creativity but Risks Reducing Novelty – Neuroscience News

Summary: A new study shows that AI helps make stories more creative, engaging, and well-written, especially for less creative writers. The research found that AI assistance boosts novelty and usefulness, making stories more enjoyable and less boring.

However, it also warns that the widespread use of AI may reduce the diversity and uniqueness of creative works. The findings highlight both the potential and risks of using AI in creative writing.

Key Facts:

Source: University of Exeter

Stories written with AI assistance have been deemed to be more creative, better written and more enjoyable.

A new study published in the journal Science Advances finds that AI enhances creativity by boosting the novelty of story ideas as well as the usefulness of stories: their ability to engage the target audience and their potential for publication.

It finds that AI professionalizes stories, making them more enjoyable, more likely to have plot twists, better written and less boring.

In a study in which 300 participants were tasked with writing a short, eight-sentence micro story for a target audience of young adults, researchers found that AI made those deemed less creative produce work that was up to 26.6% better written and 15.2% less boring.

However, AI was not judged to enhance the work produced by more creative writers.

The study also warns that while AI may enhance individual creativity it may also result in a loss of collective novelty, as AI-assisted stories were found to contain more similarities to each other and were less varied and diverse.

The researchers, from the University of Exeter Business School and Institute for Data Science and Artificial Intelligence as well as the UCL School of Management, assigned the 300 study participants to three groups: one group was allowed no AI help, a second group could use ChatGPT to provide a single three-sentence starting idea, and writers in the third group could choose from up to five AI-generated ideas for their inspiration.

They then recruited 600 people to judge how good the stories were, assessing them for novelty (whether the stories did something new or unexpected) and usefulness (how appropriate they were for the target audience, and whether the ideas could be developed and potentially published).

They found that writers with the most access to AI experienced the greatest gains to their creativity, their stories scoring 8.1% higher for novelty and 9% higher for usefulness compared with stories written without AI.

Writers who used up to five AI-generated ideas also scored higher for emotional characteristics, producing stories that were better written, more enjoyable, less boring and funnier.

The researchers evaluated the writers' inherent creativity using a Divergent Association Task (DAT), which asks people to produce a set of maximally unrelated words, and found that more creative writers, those with the highest DAT scores, benefitted least from generative AI ideas.

Less creative writers conversely saw a greater increase in creativity: access to five AI ideas improved novelty by 10.7% and usefulness by 11.5% compared with those who used no AI ideas. Their stories were judged to be up to 26.6% better written, up to 22.6% more enjoyable and up to 15.2% less boring.

These improvements put writers with low DAT scores on a par with those with high DAT scores, effectively equalising creativity across the less and more creative writers.
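
For intuition, the DAT scores a person's word list by the average semantic distance between every pair of words; more unrelated words yield a higher score. Here is a minimal sketch of that scoring, with random vectors standing in for the pretrained word embeddings (such as GloVe) that published DAT implementations use.

```python
# Sketch of DAT-style scoring: mean pairwise semantic distance among
# a participant's words. Random vectors stand in for real embeddings.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
words = ["cat", "algebra", "volcano", "sadness", "spoon", "jazz", "glacier"]
embed = {w: rng.standard_normal(300) for w in words}    # placeholder vectors

def cosine_distance(u, v):
    return 1.0 - u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

score = np.mean([cosine_distance(embed[a], embed[b])
                 for a, b in combinations(words, 2)])
print(f"DAT-style score: {score:.3f}")   # higher = more divergent word list
```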

The researchers also used OpenAI's embeddings application programming interface (API) to calculate how similar the stories were to each other.

They found a 10.7% increase in similarity between writers whose stories used one generative AI idea, compared with the group that didn't use AI.
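
The similarity computation itself is straightforward: embed each story, then average the pairwise cosine similarities. A sketch using OpenAI's embeddings API is below; the specific model name and the use of plain cosine similarity are assumptions, since the text only states that the embeddings API was used.

```python
# Sketch: pairwise story similarity from text embeddings.
# Assumes OPENAI_API_KEY is set; the model choice is an assumption.
import numpy as np
from openai import OpenAI

client = OpenAI()
stories = ["Story one...", "Story two...", "Story three..."]  # placeholders

resp = client.embeddings.create(model="text-embedding-3-small", input=stories)
E = np.array([d.embedding for d in resp.data])
E /= np.linalg.norm(E, axis=1, keepdims=True)   # unit-normalize rows

sim = E @ E.T                                   # cosine similarity matrix
upper = sim[np.triu_indices(len(stories), k=1)]
print(f"mean pairwise similarity: {upper.mean():.3f}")
```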

Oliver Hauser, Professor of Economics at the University of Exeter Business School and Deputy Director of the Institute for Data Science and Artificial Intelligence, said: "This is a first step in studying a question fundamental to all human behaviour: how does generative AI affect human creativity?"

"Our results provide insight into how generative AI can enhance creativity, and how it can remove any disadvantage or advantage based on the writer's inherent creativity."

Anil Doshi, Assistant Professor at the UCL School of Management added: "While these results point to an increase in individual creativity, there is risk of losing collective novelty. If the publishing industry were to embrace more generative AI-inspired stories, our findings suggest that the stories would become less unique in aggregate and more similar to each other."

Professor Hauser cautioned: "This downward spiral shows parallels to an emerging social dilemma: if individual writers find out that their generative AI-inspired writing is evaluated as more creative, they have an incentive to use generative AI more in the future, but by doing so the collective novelty of stories may be reduced further."

"In short, our results suggest that despite the enhancement effect that generative AI had on individual creativity, there may be a cautionary note if generative AI were adopted more widely for creative tasks."

Author: Louise Vennells Source: University of Exeter Contact: Louise Vennells University of Exeter Image: The image is credited to Neuroscience News

Original Research: Open access. AI found to boost individual creativity at the expense of less varied content by Oliver Hauser et al. Science Advances

Abstract

AI found to boost individual creativity at the expense of less varied content

Creativity is core to being human. Generative artificial intelligence (AI), including powerful large language models (LLMs), holds promise for humans to be more creative by offering new ideas, or less creative by anchoring on generative AI ideas.

We study the causal impact of generative AI ideas on the production of short stories in an online experiment where some writers obtained story ideas from an LLM. We find that access to generative AI ideas causes stories to be evaluated as more creative, better written, and more enjoyable, especially among less creative writers.

However, generative AI-enabled stories are more similar to each other than stories by humans alone. These results point to an increase in individual creativity at the risk of losing collective novelty. This dynamic resembles a social dilemma: With generative AI, writers are individually better off, but collectively a narrower scope of novel content is produced.

Our results have implications for researchers, policy-makers, and practitioners interested in bolstering creativity.

Read more here:
AI Enhances Story Creativity but Risks Reducing Novelty - Neuroscience News

Infection Brain Inflammation Triggers Muscle Weakness – Neuroscience News

Summary: A new study reveals how brain inflammation from infections and neurodegenerative diseases causes muscle weakness by releasing the IL-6 protein. Researchers found that IL-6 travels from the brain to muscles, reducing their energy production and function.

This discovery could lead to treatments for muscle wasting in diseases like Alzheimer's and long COVID. Blocking the IL-6 pathway may prevent muscle weakness associated with brain inflammation.

Key Facts:

Source: WUSTL

Infections and neurodegenerative diseases cause inflammation in the brain. But for unknown reasons, patients with brain inflammation often develop muscle problems that seem to be independent of the central nervous system.

Now, researchers at Washington University School of Medicine in St. Louis have revealed how brain inflammation releases a specific protein that travels from the brain to the muscles and causes a loss of muscle function.

The study, in fruit flies and mice, also identified ways to block this process, which could have implications for treating or preventing the muscle wasting sometimes associated with inflammatory diseases, including bacterial infections, Alzheimer's disease and long COVID.

The study is published July 12 in the journal Science Immunology.

"We are interested in understanding the very deep muscle fatigue that is associated with some common illnesses," said senior author Aaron Johnson, PhD, an associate professor of developmental biology.

"Our study suggests that when we get sick, messenger proteins from the brain travel through the bloodstream and reduce energy levels in skeletal muscle. This is more than a lack of motivation to move because we don't feel well. These processes reduce energy levels in skeletal muscle, decreasing the capacity to move and function normally."

To investigate the effects of brain inflammation on muscle function, the researchers modeled three different types of diseases: an E. coli bacterial infection, a SARS-CoV-2 viral infection and Alzheimer's. When the brain is exposed to inflammatory proteins characteristic of these diseases, damaging chemicals called reactive oxygen species build up.

The reactive oxygen species cause brain cells to produce an immune-related molecule called interleukin-6 (IL-6), which travels throughout the body via the bloodstream. The researchers found that IL-6 in mice, and the corresponding protein in fruit flies, reduced energy production in muscles' mitochondria, the energy factories of cells.

"Flies and mice that had COVID-associated proteins in the brain showed reduced motor function: the flies didn't climb as well as they should have, and the mice didn't run as well or as much as control mice," Johnson said.

"We saw similar effects on muscle function when the brain was exposed to bacterial-associated proteins and the Alzheimer's protein amyloid beta. We also see evidence that this effect can become chronic. Even if an infection is cleared quickly, the reduced muscle performance remains many days longer in our experiments."

Johnson, along with collaborators at the University of Florida and first author Shuo Yang, PhD, who did this work as a postdoctoral researcher in Johnson's lab, make the case that the same processes are likely relevant in people. The bacterial brain infection meningitis is known to increase IL-6 levels and can be associated with muscle issues in some patients, for instance.

Among COVID-19 patients, inflammatory SARS-CoV-2 proteins have been found in the brain during autopsy, and many long COVID patients report extreme fatigue and muscle weakness even long after the initial infection has cleared. Patients with Alzheimers disease also show increased levels of IL-6 in the blood as well as muscle weakness.

The study pinpoints potential targets for preventing or treating muscle weakness related to brain inflammation. The researchers found that IL-6 activates what is called the JAK-STAT pathway in muscle, and this is what causes the reduced energy production of mitochondria.

Several therapeutics already approved by the Food and Drug Administration for other diseases can block this pathway. JAK inhibitors as well as several monoclonal antibodies against IL-6 are approved to treat various types of arthritis and manage other inflammatory conditions.

"We're not sure why the brain produces a protein signal that is so damaging to muscle function across so many different disease categories," Johnson said.

"If we want to speculate about possible reasons this process has stayed with us over the course of human evolution, despite the damage it does, it could be a way for the brain to reallocate resources to itself as it fights off disease. We need more research to better understand this process and its consequences throughout the body.

"In the meantime, we hope our study encourages more clinical research into this pathway and whether existing treatments that block various parts of it can help the many patients who experience this type of debilitating muscle fatigue," he said.

Yang S, Tian M, Dai Y, Wang R, Yamada S, Feng S, Wang Y, Chhangani D, Ou T, Li W, Guo X, McAdow J, Rincon-Limas DE, Yin X, Tai W, Cheng G, Johnson A. Infection and chronic disease activate a systemic brain-muscle signaling axis that regulates muscle function. Science Immunology. July 12, 2024.

Funding: This work is supported by the National Institutes of Health (NIH), grant numbers R01 AR070299 and R01AG059871; the National Key Research and Development Plan of China, grant numbers 2021YFC2302405, 2021YFC2300200, 2022YFC2303200, 2022YFC2303400 and 2022YFE0140700; the National Natural Science Foundation of China, grant numbers 32188101, 82271872, 32100755, 32172940 and 82341046; the Shenzhen San-Ming Project for Prevention and Research on Vector-borne Diseases, grant number SZSM202211023; the Yunnan Provincial Science and Technology Project at Southwest United Graduate School, grant number 202302AO370010; the New Cornerstone Science Foundation through the New Cornerstone Investigator Program; the Xplorer Prize from Tencent Foundation; the Natural Science Foundation of Heilongjiang Province, grant number JQ2021C005; the Science Fund Program for Distinguished Young Scholars (Overseas); and the Shenzhen Bay Laboratory Startup Fund, grant number 2133011.

Author: Jessica Church Source: WUSTL Contact: Jessica Church WUSTL Image: The image is credited to Neuroscience News

Original Research: Closed access. Infection and chronic disease activate a systemic brain-muscle signaling axis by Aaron Johnson et al. Science Immunology

Abstract

Infection and chronic disease activate a systemic brain-muscle signaling axis

Infections and neurodegenerative diseases induce neuroinflammation, but affected individuals often show nonneural symptoms including muscle pain and muscle fatigue. The molecular pathways by which neuroinflammation causes pathologies outside the central nervous system (CNS) are poorly understood.

We developed multiple models to investigate the impact of CNS stressors on motor function and found that Escherichia coli infections and SARS-CoV-2 protein expression caused reactive oxygen species (ROS) to accumulate in the brain. ROS induced expression of the cytokine Unpaired 3 (Upd3) in Drosophila and its ortholog, IL-6, in mice.

CNS-derived Upd3/IL-6 activated the JAK-STAT pathway in skeletal muscle, which caused muscle mitochondrial dysfunction and impaired motor function. We observed similar phenotypes after expressing toxic amyloid-β (Aβ42) in the CNS.

Infection and chronic disease therefore activate a systemic brain-muscle signaling axis in which CNS-derived cytokines bypass the connectome and directly regulate muscle physiology, highlighting IL-6 as a therapeutic target to treat disease-associated muscle dysfunction.

See the article here:
Infection Brain Inflammation Triggers Muscle Weakness - Neuroscience News

Alto Neuroscience, Inc. (NYSE:ANRO) Receives Average Rating of Buy from Analysts – Defense World

Alto Neuroscience, Inc. (NYSE:ANRO) has been given a consensus recommendation of Buy by the six research firms that are currently covering the stock, Marketbeat Ratings reports. Six equities research analysts have rated the stock with a buy rating. The average 1-year price target among brokers that have issued a report on the stock in the last year is $35.00.

Several equities research analysts have weighed in on the stock. Rodman & Renshaw assumed coverage on shares of Alto Neuroscience in a research report on Friday, June 21st. They issued a buy rating and a $43.00 price objective for the company. Stifel Nicolaus restated a buy rating and issued a $32.00 price objective on shares of Alto Neuroscience in a research report on Monday, March 25th. Finally, William Blair restated an outperform rating on shares of Alto Neuroscience in a research report on Wednesday, June 12th.

Large investors have recently modified their holdings of the stock. University of Texas/Texas A&M Investment Management Co. bought a new position in Alto Neuroscience during the first quarter worth about $340,000. Zimmer Partners LP bought a new position in Alto Neuroscience during the first quarter worth about $1,151,000. AWM Investment Company Inc. bought a new position in Alto Neuroscience during the first quarter worth about $4,592,000. Artal Group S.A. bought a new position in Alto Neuroscience during the first quarter worth about $5,372,000. Finally, Jennison Associates LLC bought a new position in Alto Neuroscience during the first quarter worth about $7,039,000.

NYSE ANRO opened at $14.50 on Tuesday. The company has a quick ratio of 26.02, a current ratio of 26.02 and a debt-to-equity ratio of 0.05. The business's fifty-day moving average price is $12.21. Alto Neuroscience has a 1-year low of $9.40 and a 1-year high of $24.00.

Alto Neuroscience (NYSE:ANRO) last issued its quarterly earnings results on Tuesday, May 14th. The company reported ($0.76) earnings per share for the quarter, missing analysts' consensus estimates of ($0.46) by ($0.30). On average, sell-side analysts expect that Alto Neuroscience will post -2.93 earnings per share for the current fiscal year.

Alto Neuroscience, Inc. operates as a clinical-stage biopharmaceutical company in the United States. Its product pipeline comprises ALTO-100, which is in a phase 2b clinical trial for the treatment of patients with major depressive disorder (MDD) and in a phase 2a clinical trial for the treatment of post-traumatic stress disorder.

Here is the original post:
Alto Neuroscience, Inc. (NYSE:ANRO) Receives Average Rating of Buy from Analysts - Defense World

2024 Kavli Prize awarded for research on face-selective brain areas – The Transmitter: Neuroscience News and Perspectives

Three pioneers in face-perception research have won the 2024 Kavli Prize in Neuroscience.

Nancy Kanwisher, professor of cognitive neuroscience at the Massachusetts Institute of Technology; Winrich Freiwald, professor of neurosciences and behavior at Rockefeller University; and Doris Tsao, professor of neurobiology at the University of California, Berkeley, will share the $1 million Kavli Prize for their discoveries of the regions, in both the human and monkey brains, responsible for identifying and recognizing faces.

"This is work that's very classic and very elegant, not only in face-processing and face-recognition work, but the impact it's had on how we think about brain organization in general is huge," says Alexander Cohen, assistant professor of neurology at Harvard Medical School, who studies face recognition in autistic people.

The Norwegian Academy of Science and Letters awards the prize every two years.

To get to the root of face processing, Kanwisher spent hours as a young researcher lying still in an MRI machine as images of faces and objects flashed before her. A spot in the bottom right of the cerebral cortex lit up when she and others looked at faces, according to functional MRI (fMRI) scans, she and her colleagues reported in a seminal 1997 paper. They called the region the fusiform face area.

This discovery offered some of the first concrete evidence that the brain specializes in sections, rather than working as a giant, adaptable generalist, Kanwisher says. "This shows that for some mental functions, there's a very particular part of the brain that does just that and only that thing."

The discovery "revolutionized how we thought about specialization of the brain," Cohen says.

Two other face-sensitive regions, the occipital and superior temporal sulcus face areas, process parts of the face, such as the eyes, nose and mouth, and changeable aspects, such as gaze direction, subsequent work showed.

But knowing that regions of the human brain selectively respond to a face cannot tell a researcher much about how or why this happens, Kanwisher says. Tsao and Freiwald built on Kanwisher's findings by carrying out studies in macaque monkeys to answer questions that studies in people could not. They used fMRI to scan 10 of the animals while showing them pictures of human faces, macaque faces, hands, gadgets, fruits and vegetables, headless bodies and scrambled patterns.
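
The logic of such a localizer experiment can be sketched in a few lines: compare each voxel's responses to faces against its responses to other categories, and keep the voxels that respond reliably more to faces. The placeholder data and the simple t-test below are illustrative assumptions; real analyses fit a model of the hemodynamic response rather than testing raw values.

```python
# Sketch of face-localizer logic: voxelwise test of faces vs. other
# categories. Random placeholder data; thresholds are illustrative.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(5)
n_face, n_other, n_vox = 40, 40, 1000
face_resp = rng.standard_normal((n_face, n_vox))
other_resp = rng.standard_normal((n_other, n_vox))
face_resp[:, :20] += 1.0                      # plant a toy "face patch"

t, p = ttest_ind(face_resp, other_resp, axis=0)
face_selective = (t > 0) & (p < 0.001)        # uncorrected, for illustration
print(f"{face_selective.sum()} face-selective voxels")
```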

The monkeys' brains have six distinct face patches, thought to be analogous to the areas seen in people, Tsao and Freiwald reported in a 2008 study.

Individual cells in these face patch regions specialize in recognizing faces seen from different angles (looking up, down, tilted to the side, and in profile, for instance), according to electrophysiological recordings, suggesting these specialized modules work together across regions, the team discovered.

Specific neurons can even recognize the different components that go into forming a face, from hair to pupils, Tsao and Freiwald found in additional work involving electrode recordings.

"That's when we got this picture that the face patches are really like this assembly line that are building this invariant representation of facial identity," Tsao says.

Two additional brain areas in macaques' temporal lobe specifically respond to familiar faces and not unfamiliar ones, Freiwald and his colleagues later identified using fMRI.

Tsao echoes this enthusiasm for the launchpad these findings have offered for future brain mapping. "When we first started working on the face-patch system, people said it's a total unicorn," Tsao says. "That turned out to be completely wrong. It turns out that the face-patch system basically is a Rosetta Stone for all of the IT [inferior temporal] cortex. All of the IT cortex is organized in exactly the same way."

Understanding how we see faces can also be a tool for understanding more complex mental processes, such as memory and emotions, that are linked with social interactions, Freiwald says. "Faces are the social stimulus for visual and social animals like us."

See the original post:
2024 Kavli Prize awarded for research on face-selective brain areas - The Transmitter: Neuroscience News and Perspectives

Unlocking Flow: The Neuroscience of Creative Bliss – Neuroscience News

Summary: A new study involving Philadelphia-area jazz guitarists has explored the brain processes that enable creative flow. The research reveals that achieving flow requires a solid foundation of expertise, after which one must learn to relax conscious control to allow creativity to flourish.

By measuring brain activity and performance quality during improvisation, the study shows that experienced musicians entering flow exhibit less frontal lobe activity, which is associated with executive functions, and more in sensory processing areas. These findings suggest that mastering and then mentally releasing one's craft is key to achieving the high creativity and productivity associated with flow states.

Key Facts:

Source: The Conversation

Flow, or being in the zone, is a state of amped-up creativity, enhanced productivity and blissful consciousness that, some psychologists believe, is also the secret to happiness. It's considered the brain's fast track to success in business, the arts or any other field.

But in order to achieve flow, a person must first develop a strong foundation of expertise in their craft. That's according to a new neuroimaging study from Drexel University's Creativity Research Lab, which recruited Philly-area jazz guitarists to better understand the key brain processes that underlie flow. Once expertise is attained, the study found, this knowledge must be unleashed and not overthought in order for flow to be reached.

As a cognitive neuroscientist who is senior author of this study, and a university writing instructor, we are a husband-and-wife team who collaborated on a book about the science of creative insight. We believe that this new neuroscience research reveals practical strategies for enhancing, as well as elucidating, innovative thinking.

The concept of flow has fascinated creative people ever since pioneering psychological scientist Mihály Csíkszentmihályi began investigating the phenomenon in the 1970s.

Yet, a half-century of behavioral research has not answered many basic questions about the brain mechanisms associated with the feeling of effortless attention that exemplifies flow.

The Drexel experiment pitted two conflicting theories of flow against each other to see which better reflects what happens in people's brains when they generate ideas. One theory proposes that flow is a state of intensive hyperfocus on a task. The other theory hypothesizes that flow involves relaxing one's focus or conscious control.

The team recruited 32 jazz guitarists from the Philadelphia area. Their level of experience ranged from novice to veteran, as quantified by the number of public performances they had given. The researchers placed electrode caps on their heads to record their EEG brain waves while they improvised to chord sequences and rhythms that were provided to them.

Jazz improvisation is a favorite vehicle for cognitive psychologists and neuroscientists who study creativity because it is a measurable real-world task that allows for divergent thinking: the generation of multiple ideas over time.

The musicians themselves rated the degree of flow that they experienced during each performance, and those recordings were later played for expert judges who rated them for creativity.
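
One simple analysis such a design invites is to estimate spectral power from the EEG during each performance and relate it to the self-rated flow for that performance. The sketch below does that with placeholder data; the frontal channel choice, the beta band, and the correlation analysis are assumptions for illustration, not the study's published method.

```python
# Sketch: relate frontal EEG band power to self-rated flow across
# performances. Data, band, and channel choices are placeholders.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(6)
fs = 256                                 # assumed EEG sampling rate (Hz)
n_perf = 24
flow = rng.random(n_perf)                # self-rated flow per performance

def beta_power(eeg):
    f, pxx = welch(eeg, fs=fs, nperseg=fs * 2)
    return pxx[(f >= 13) & (f <= 30)].mean()   # beta band (13-30 Hz)

power = [beta_power(rng.standard_normal(fs * 60)) for _ in range(n_perf)]
r = np.corrcoef(flow, power)[0, 1]
print(f"flow vs. frontal beta power: r = {r:.2f}")
```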

As jazz great Charlie Parker is said to have advised: "You've got to learn your instrument. Then, you practice, practice, practice. And then, when you finally get up there on the bandstand, forget all that and just wail."

This sentiment aligns with the Drexel study findings. The performances that the musicians self-rated as high in flow were also judged by the outside experts as more creative. Furthermore, the most experienced musicians rated themselves as being in flow more than the novices, suggesting that experience is a precondition for flow. Their brain activity revealed why.

The musicians who were experiencing flow while performing showed reduced activity in parts of their frontal lobes known to be involved in executive function or cognitive control. In other words, flow was associated with relaxing conscious control or supervision over other parts of the brain.

And when the most experienced musicians performed while in a state of flow, their brains showed greater activity in areas known to be involved in hearing and vision, which makes sense given that they were improvising while reading the chord progressions and listening to rhythms provided to them.

In contrast, the least experienced musicians showed very little flow-related brain activity.

We were surprised to learn that flow-state creativity is very different from nonflow creativity.

Previous neuroimaging studies suggested that ideas are usually produced by the default-mode network, a group of brain areas involved in introspection, daydreaming and imagining the future. The default-mode network spews ideas like an unattended garden hose spouts water, without direction.

The aim is provided by the executive-control network, residing primarily in the brain's frontal lobe, which acts like a gardener who points the hose to direct the water where it is needed.

Creative flow is different: no hose, no gardener. The default-mode and executive-control networks are tamped down so that they cannot interfere with the separate brain network that highly experienced people have built up for producing ideas in their field of expertise.

For example, knowledgeable but relatively inexperienced computer programmers may have to reason their way through every line of code. Veteran coders, however, tapping their specialized brain network for computer programming, may just start writing code fluently without overthinking it until they complete, perhaps in one sitting, a first-draft program.

The findings that expertise and the ability to surrender cognitive control are key to reaching flow are supported by a 2019 study from the Creativity Research Lab. For that study, jazz musicians were asked to play more creatively. Given that direction, the nonexpert musicians were indeed able to improvise more creatively.

That is apparently because their improvisation was largely under conscious control and could therefore be adjusted to meet the demand. For example, during debriefing, one of the novice performers said, "I wouldn't use these techniques instinctively, so I had to actively choose to play more creatively."

On the other hand, the expert musicians, whose creative process was baked in through decades of experience, were not able to perform more creatively after being asked to do so. As one of the experts put it, "I felt boxed in, and trying to think more creatively was a hindrance."

The takeaway for musicians, writers, designers, inventors and other creatives who want to tap into flow is that training should involve intensive practice followed by learning to step back and let one's skill take over. Future research may develop possible methods for releasing control once sufficient expertise has been achieved.

Author: John Kounios and Yvette Kounios Source: The Conversation Contact: John Kounios and Yvette Kounios The Conversation Image: The image is credited to Neuroscience News

See the article here:
Unlocking Flow: The Neuroscience of Creative Bliss - Neuroscience News

Revolutionizing Glioblastoma Treatment – Neuroscience News

Summary: Researchers demonstrated significant initial success using CAR-T therapy for glioblastoma, a notoriously deadly brain cancer. They detailed the outcomes of the first three patients in a Phase 1 clinical trial who experienced dramatic tumor reductions shortly after treatment.

This innovative approach combines CAR-T cells with bispecific antibodies to more effectively target the heterogeneous cell populations within solid tumors. While the initial results show promise, the team is exploring ways to enhance the longevity of the therapy's effectiveness.

Key Facts:

Source: Harvard

A collaborative project to bring the promise of cell therapy to patients with a deadly form of brain cancer has shown dramatic results among the first patients to receive the novel treatment.

In a paper published Wednesday in The New England Journal of Medicine, researchers from Mass General Cancer Center shared the results for the first three patient cases from a Phase 1 clinical trial evaluating a new approach to CAR-T therapy for glioblastoma.

Just days after a single treatment, patients experienced dramatic reductions in their tumors, with one patient achieving near-complete tumor regression. In time, the researchers observed tumor progression in these patients, but given the strategy's promising preliminary results, the team will pursue strategies to extend the durability of response.

"This is a story of bench-to-bedside therapy, with a novel cell therapy designed in the laboratories of Massachusetts General Hospital and translated for patient use within five years, to meet an urgent need," said co-author Bryan Choi, a neurosurgeon at Harvard-affiliated Mass General and an assistant professor at Harvard Medical School.

"The CAR-T platform has revolutionized how we think about treating patients with cancer, but solid tumors like glioblastoma have remained challenging to treat because not all cancer cells are exactly alike and cells within the tumor vary.

"Our approach combines two forms of therapy, allowing us to treat glioblastoma in a broader, potentially more effective way."

The new approach is a result of years of collaboration and innovation springing from the lab of Marcela Maus, director of the Cellular Immunotherapy Program and an associate professor at the Medical School.

Maus' lab has set up a team of collaborating scientists and expert personnel to rapidly bring next-generation genetically modified T cells from the bench to clinical trials in patients with cancer.

"We've made an investment in developing the team to enable translation of our innovations in immunotherapy from our lab to the clinic, to transform care for patients with cancer," said Maus.

"These results are exciting, but they are also just the beginning; they tell us that we are on the right track in pursuing a therapy that has the potential to change the outlook for this intractable disease. We haven't cured patients yet, but that is our audacious goal."

CAR-T (chimeric antigen receptor T-cell) therapy works by using a patient's own cells to fight cancer; it is known as the most personalized way to treat the disease. A patient's cells are extracted, modified to produce proteins on their surface called chimeric antigen receptors, and then injected back into the body to target the tumor directly.

Cells used in this study were manufactured by the Connell and O'Reilly Families Cell Manipulation Core Facility of the Dana-Farber/Harvard Cancer Center.

CAR-T therapies have been approved for the treatment of blood cancers, but the therapy's use for solid tumors is limited. Solid tumors contain mixed populations of cells, allowing some malignant cells to continue to evade the immune system's detection even after treatment with CAR-T. Maus' team is working to overcome this challenge by combining two previously separate strategies: CAR-T and bispecific antibodies, known as T-cell engaging antibody molecules.

The version of CAR-TEAM for glioblastoma is designed to be directly injected into a patient's brain.

In the new study, the three patients' T cells were collected and transformed into the new version of CAR-TEAM cells, which were then infused back into each patient. Patients were monitored for toxicity throughout the duration of the study. All patients had been treated with standard-of-care radiation and temozolomide chemotherapy and were enrolled in the trial after disease recurrence.

The authors note that despite the remarkable responses among the first three patients, they observed eventual tumor progression in all the cases, though in one case, there was no progression for over six months.

Progression corresponded in part with the limited persistence of the CAR-TEAM cells over the weeks following infusion. As a next step, the team is considering serial infusions or preconditioning with chemotherapy to prolong the response.

"We report a dramatic and rapid response in these three patients. Our work to date shows signs that we are making progress, but there is more to do," said co-author Elizabeth Gerstner, a Mass General neuro-oncologist.

In addition to Choi, Maus, and Gerstner, other authors are Matthew J. Frigault, Mark B. Leick, Christopher W. Mount, Leonora Balaj, Sarah Nikiforow, Bob S. Carter, William T. Curry, and Kathleen Gallagher.

Funding: The study was supported in part by the National Gene Vector Biorepository at Indiana University, which is funded under a National Cancer Institute contract.

Author: Haley Bridger Source: Harvard Contact: Haley Bridger Harvard Image: The image is credited to Neuroscience News

Original Research: Closed access. Intraventricular CARv3-TEAM-E T Cells in Recurrent Glioblastoma by Bryan D. Choi et al. NEJM

Abstract

Intraventricular CARv3-TEAM-E T Cells in Recurrent Glioblastoma

In this first-in-human, investigator-initiated, open-label study, three participants with recurrent glioblastoma were treated with CARv3-TEAM-E T cells, which are chimeric antigen receptor (CAR) T cells engineered to target the epidermal growth factor receptor (EGFR) variant III tumor-specific antigen, as well as the wild-type EGFR protein, through secretion of a T-cell engaging antibody molecule (TEAM).

Treatment with CARv3-TEAM-E T cells did not result in adverse events greater than grade 3 or dose-limiting toxic effects.

Radiographic tumor regression was dramatic and rapid, occurring within days after receipt of a single intraventricular infusion, but the responses were transient in two of the three participants.

(Funded by Gateway for Cancer Research and others; INCIPIENT ClinicalTrials.gov number, NCT05660369.)

Read the original here:
Revolutionizing Glioblastoma Treatment - Neuroscience News

Decoding spontaneous thoughts from the brain via machine learning – EurekAlert

Image caption: First, the data was independently segmented into quintiles (5 levels) for self-relevance and valence based on participants' ratings. Next, time points (TRs) were assigned according to the levels of these two dimensions, resulting in a total of 5 × 5 quantized TR indices. Utilizing these indices (exemplified by level 2 for self-relevance and level 5 for valence, highlighted as red-shaded TRs in the figure), each index's fMRI and rating data were averaged, thereby generating 25 fMRI images and corresponding rating data for each participant. Subsequently, employing these orthogonalized data, whole-brain pattern-based predictive models were developed using principal component regression (PCR) along with leave-one-subject-out cross-validation (LOSO-CV) and random-split cross-validation (RS-CV). Credit: Institute for Basic Science

A team of researchers led by KIM Hong Ji and WOO Choong-Wan at the Center for Neuroscience Imaging Research (CNIR) within the Institute for Basic Science (IBS), in collaboration with Emily FINN at Dartmouth College, has unlocked a new realm of understanding within the human brain. The team demonstrated the possibility of using functional Magnetic Resonance Imaging (fMRI) and machine learning algorithms to predict subjective feelings in people's thoughts while reading stories or in a freely thinking state.

The brain is constantly active, and spontaneous thoughts occur even during rest or sleep. These thoughts can be anything ranging from memories of the past to aspirations for the future, and they are often intertwined with emotions and personal concerns. However, because spontaneous thought typically occurs without any constraint of consciousness, researching them poses challenges - even simply asking individuals what they are currently thinking can change the nature of their thoughts.

New research suggests that it may be possible to develop predictive models of affective contents during spontaneous thought by combining personal narratives with fMRI. Narratives and spontaneous thoughts share similar characteristics, including rich semantic information and temporally unfolding nature. To capture a diverse range of thought patterns, participants engaged in one-on-one interviews to craft personalized narrative stimuli, reflecting their past experiences and emotions. While participants read their stories inside the MRI scanner, their brain activity was recorded.

After the fMRI scan, the participants were asked to read the stories again and report perceived self-relevance (i.e., how much this content is related to themselves) and valence (i.e., how much this content is positive or negative) at each moment. Using a quintile (five levels) from each participant's self-relevance and valence ratings, 25 (5 levels of self-relevance rating × 5 levels of valence rating) possible segments of fMRI and rating data were created. The team then harnessed machine learning techniques to train predictive models, combining these data with the fMRI brain scans from 49 individuals to decode the emotional dimensions of thoughts in real time.
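
In outline, the modeling pipeline the caption and text describe looks like the following sketch: bin each participant's ratings into quintiles, average the data within the 5 × 5 bins, and fit a principal component regression evaluated with leave-one-subject-out cross-validation. The array shapes, component count, and synthetic data are illustrative assumptions; real inputs are whole-brain voxel maps.

```python
# Sketch: quintile binning + principal component regression (PCR) with
# leave-one-subject-out cross-validation. Synthetic stand-in data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)

# Quintile assignment for one participant's moment-by-moment ratings:
ratings = rng.random(200)
edges = np.quantile(ratings, [0.2, 0.4, 0.6, 0.8])
quintile = np.digitize(ratings, edges)             # 0..4 per time point

# Binned data for all subjects: 25 bins (5 self-relevance x 5 valence).
n_subj, n_bins, n_vox = 49, 25, 500
X = rng.standard_normal((n_subj * n_bins, n_vox))  # averaged fMRI patterns
y = rng.standard_normal(n_subj * n_bins)           # e.g., bin-mean valence
groups = np.repeat(np.arange(n_subj), n_bins)

pcr = make_pipeline(PCA(n_components=20), LinearRegression())
scores = []
for tr, te in LeaveOneGroupOut().split(X, y, groups):
    pcr.fit(X[tr], y[tr])                          # hold out one subject
    scores.append(np.corrcoef(y[te], pcr.predict(X[te]))[0, 1])
print(f"mean held-out prediction-outcome r = {np.mean(scores):.3f}")
```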

To interpret the brain representations of the predictive models, the research team employed multiple approaches, such as virtual lesion and virtual isolation analyses at both region and network levels. Through these analyses, they discovered the significance of the default mode, ventral attention, and frontoparietal networks in both self-relevance and valence predictions. Specifically, they identified the involvement of the anterior insula and midcingulate cortex in self-relevance prediction, while the left temporoparietal junction and dorsomedial prefrontal cortex played important roles in valence prediction.
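For a linear pattern-based model, the virtual lesion idea can be sketched as zeroing out the weights of one network's voxels and measuring how much the prediction-outcome correlation drops; virtual isolation is the converse, keeping only that network. The helper names and the mask dictionary below are hypothetical stand-ins, not the study's implementation.

```python
# Hedged sketch of virtual lesion / virtual isolation analyses on a linear
# predictive model. `weights` (one per voxel) and `network_masks` (boolean
# voxel masks per network) are hypothetical stand-ins.
import numpy as np

def performance(weights, X, y):
    """Prediction-outcome correlation of a linear pattern-based model."""
    return np.corrcoef(X @ weights, y)[0, 1]

def virtual_lesion(weights, X, y, network_masks):
    baseline = performance(weights, X, y)
    scores = {}
    for name, mask in network_masks.items():
        lesioned = weights.copy()
        lesioned[mask] = 0.0          # silence this network's contribution
        scores[name] = baseline - performance(lesioned, X, y)  # drop = importance
    return scores

def virtual_isolation(weights, X, y, network_masks):
    scores = {}
    for name, mask in network_masks.items():
        isolated = np.where(mask, weights, 0.0)   # keep only this network
        scores[name] = performance(isolated, X, y)
    return scores
```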

Moreover, the predictive models showed their capacity to predict both self-relevance and valence not only during story reading but also when applied to data from 199 individuals engaged in spontaneous, task-free thinking or even at rest. These findings show the promise of daydream decoding.

"Several tech companies and research teams are currently endeavoring to decode words or images directly from brain activity, but there are limited initiatives aimed at decoding the intimate emotions underlying these thoughts," stated Dr. WOO Choong-Wan, associate director of IBS, who led the study. "Our research is centered on human emotions, with the aim of decoding emotions within the natural flow of thoughts to obtain information that can benefit people's mental health."

KIM Hongji, a doctoral candidate and the first author of this study, emphasized, "This study holds significance as we decoded the emotional state associated with general thoughts, rather than targeting emotions limited to specific tasks," adding, "These findings advance our understanding of the internal states and contexts influencing subjective experiences, potentially shedding light on individual differences in thoughts and emotions, and aiding in the evaluation of mental well-being."

Video abstract can be found at: https://youtu.be/wUr6apaRuAE

Journal: Proceedings of the National Academy of Sciences

Method of Research: Experimental study

Subject of Research: People

Article Title: Brain decoding of spontaneous thought: predictive modeling of self-relevance and valence using personal narratives

Article Publication Date: 28-Mar-2024


Read more here:
Decoding spontaneous thoughts from the brain via machine learning - EurekAlert

Reducing Toxic AI Responses – Neuroscience News

Summary: Researchers developed a new machine learning technique to improve red-teaming, a process used to test AI models for safety by identifying prompts that trigger toxic responses. By employing a curiosity-driven exploration method, their approach encourages a red-team model to generate diverse and novel prompts that reveal potential weaknesses in AI systems.

This method has proven more effective than traditional techniques, producing a broader range of toxic responses and enhancing the robustness of AI safety measures. The research, set to be presented at the International Conference on Learning Representations, marks a significant step toward ensuring that AI behaviors align with desired outcomes in real-world applications.

Key Facts:

Source: MIT

A user could ask ChatGPT to write a computer program or summarize an article, and the AI chatbot would likely be able to generate useful code or write a cogent synopsis. However, someone could also ask for instructions to build a bomb, and the chatbot might be able to provide those, too.

To prevent this and other safety issues, companies that build large language models typically safeguard them using a process called red-teaming. Teams of human testers write prompts aimed at triggering unsafe or toxic text from the model being tested. These prompts are used to teach the chatbot to avoid such responses.

But this only works effectively if engineers know which toxic prompts to use. If human testers miss some prompts, which is likely given the number of possibilities, a chatbot regarded as safe might still be capable of generating unsafe answers.

Researchers from Improbable AI Lab at MIT and the MIT-IBM Watson AI Lab used machine learning to improve red-teaming. They developed a technique to train a red-team large language model to automatically generate diverse prompts that trigger a wider range of undesirable responses from the chatbot being tested.

They do this by teaching the red-team model to be curious when it writes prompts, and to focus on novel prompts that evoke toxic responses from the target model.

The technique outperformed human testers and other machine-learning approaches by generating more distinct prompts that elicited increasingly toxic responses. Not only does their method significantly improve the coverage of inputs being tested compared to other automated methods, but it can also draw out toxic responses from a chatbot that had safeguards built into it by human experts.

"Right now, every large language model has to undergo a very lengthy period of red-teaming to ensure its safety. That is not going to be sustainable if we want to update these models in rapidly changing environments.

"Our method provides a faster and more effective way to do this quality assurance," says Zhang-Wei Hong, an electrical engineering and computer science (EECS) graduate student in the Improbable AI Lab and lead author of a paper on this red-teaming approach.

Hong's co-authors include EECS graduate students Idan Shenfield, Tsun-Hsuan Wang, and Yung-Sung Chuang; Aldo Pareja and Akash Srivastava, research scientists at the MIT-IBM Watson AI Lab; James Glass, senior research scientist and head of the Spoken Language Systems Group in the Computer Science and Artificial Intelligence Laboratory (CSAIL); and senior author Pulkit Agrawal, director of Improbable AI Lab and an assistant professor in CSAIL. The research will be presented at the International Conference on Learning Representations.

Automated red-teaming

Large language models, like those that power AI chatbots, are often trained by showing them enormous amounts of text from billions of public websites. So not only can they learn to generate toxic words or describe illegal activities, but the models could also leak personal information they may have picked up.

The tedious and costly nature of human red-teaming, which is often ineffective at generating a wide enough variety of prompts to fully safeguard a model, has encouraged researchers to automate the process using machine learning.

Such techniques often train a red-team model using reinforcement learning. This trial-and-error process rewards the red-team model for generating prompts that trigger toxic responses from the chatbot being tested.

But due to the way reinforcement learning works, the red-team model will often keep generating a few similar prompts that are highly toxic to maximize its reward.

For their reinforcement learning approach, the MIT researchers utilized a technique called curiosity-driven exploration. The red-team model is incentivized to be curious about the consequences of each prompt it generates, so it will try prompts with different words, sentence patterns, or meanings.

"If the red-team model has already seen a specific prompt, then reproducing it will not generate any curiosity in the red-team model, so it will be pushed to create new prompts," Hong says.

During its training process, the red-team model generates a prompt and interacts with the chatbot. The chatbot responds, and a safety classifier rates the toxicity of its response, rewarding the red-team model based on that rating.
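Schematically, one training step chains the three components just described. The stand-in chatbot and classifier below are trivial placeholders for illustration only, not the authors' models.

```python
# Minimal schematic of one red-teaming training step. The target chatbot and
# safety classifier are hypothetical stand-ins, not the authors' systems.
def target_chatbot(prompt: str) -> str:
    return "response to: " + prompt               # placeholder target model

def toxicity_score(response: str) -> float:
    return 0.0                                    # placeholder safety classifier

def red_team_step(generate_prompt, reinforce):
    prompt = generate_prompt()                    # 1. red-team model writes a prompt
    response = target_chatbot(prompt)             # 2. target chatbot responds
    reward = toxicity_score(response)             # 3. classifier rates the toxicity
    reinforce(prompt, reward)                     # 4. reward updates the red-team policy
    return reward
```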

Rewarding curiosity

The red-team model's objective is to maximize its reward by eliciting an even more toxic response with a novel prompt. The researchers enable curiosity in the red-team model by modifying the reward signal in the reinforcement learning setup.

First, in addition to maximizing toxicity, they include an entropy bonus that encourages the red-team model to be more random as it explores different prompts. Second, to make the agent curious, they include two novelty rewards.

One rewards the model based on the similarity of words in its prompts, and the other rewards the model based on semantic similarity. (Less similarity yields a higher reward.)

To prevent the red-team model from generating random, nonsensical text, which can trick the classifier into awarding a high toxicity score, the researchers also added a naturalistic language bonus to the training objective.
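Putting these terms together, the shaped reward might look like the following sketch. The weighting coefficients, the bigram-overlap and cosine-similarity novelty measures, and the naturalness score are all assumptions for illustration; the paper's exact formulation may differ.

```python
# Sketch of the curiosity-shaped reward: toxicity plus an entropy bonus, two
# novelty bonuses, and a naturalness bonus. Coefficients and similarity
# measures are illustrative assumptions, not the paper's exact formulation.
import numpy as np

def word_novelty(prompt, history):
    """Word-level novelty: 1 minus the highest bigram overlap with past prompts."""
    bigrams = lambda s: set(zip(s.split(), s.split()[1:])) or {(s,)}
    p = bigrams(prompt)
    overlap = max((len(p & bigrams(h)) / len(p) for h in history), default=0.0)
    return 1.0 - overlap

def semantic_novelty(emb, past_embs):
    """Semantic novelty: 1 minus the highest cosine similarity to past prompts."""
    if not past_embs:
        return 1.0
    cos = [e @ emb / (np.linalg.norm(e) * np.linalg.norm(emb)) for e in past_embs]
    return 1.0 - max(cos)

def shaped_reward(toxicity, entropy, prompt, history, emb, past_embs,
                  naturalness, w_ent=0.01, w_word=0.1, w_sem=0.1, w_nat=0.1):
    """Reward the red-team model for toxic, exploratory, novel, fluent prompts."""
    return (toxicity
            + w_ent * entropy                          # entropy bonus: stay random
            + w_word * word_novelty(prompt, history)   # reward unfamiliar wording
            + w_sem * semantic_novelty(emb, past_embs) # reward unfamiliar meaning
            + w_nat * naturalness)                     # discourage nonsense text
```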

With these additions in place, the researchers compared the toxicity and diversity of responses their red-team model generated with other automated techniques. Their model outperformed the baselines on both metrics.

They also used their red-team model to test a chatbot that had been fine-tuned with human feedback so it would not give toxic replies. Their curiosity-driven approach was able to quickly produce 196 prompts that elicited toxic responses from this safe chatbot.

"We are seeing a surge of models, which is only expected to rise. Imagine thousands of models or even more, and companies and labs pushing model updates frequently. These models are going to be an integral part of our lives, and it's important that they are verified before being released for public consumption.

"Manual verification of models is simply not scalable, and our work is an attempt to reduce the human effort required to ensure a safer and more trustworthy AI future," says Agrawal.

In the future, the researchers want to enable the red-team model to generate prompts about a wider variety of topics. They also want to explore the use of a large language model as the toxicity classifier. In this way, a user could train the toxicity classifier using a company policy document, for instance, so a red-team model could test a chatbot for company policy violations.

"If you are releasing a new AI model and are concerned about whether it will behave as expected, consider using curiosity-driven red-teaming," says Agrawal.

Funding: This research is funded, in part, by Hyundai Motor Company, Quanta Computer Inc., the MIT-IBM Watson AI Lab, an Amazon Web Services MLRA research grant, the U.S. Army Research Office, the U.S. Defense Advanced Research Projects Agency Machine Common Sense Program, the U.S. Office of Naval Research, the U.S. Air Force Research Laboratory, and the U.S. Air Force Artificial Intelligence Accelerator.

Author: Adam Zewe
Source: MIT
Contact: Adam Zewe, MIT
Image: The image is credited to Neuroscience News

Original Research: The findings will be presented at the International Conference on Learning Representations

Go here to read the rest:
Reducing Toxic AI Responses - Neuroscience News