Category Archives: Neuroscience

Metabolic Markers of Depression Identified – Neuroscience News

Summary: Researchers revealed a crucial link between cellular metabolism and major depressive disorder, particularly in treatment-refractory cases and suicidal ideation. This research found specific blood metabolites that differ in people with depression, providing new biomarkers for risk assessment.

The study also highlights sex-based differences in depression's metabolic impact and suggests that mitochondrial dysfunction plays a role in suicidal ideation. These insights offer new avenues for personalized treatment and prevention strategies, potentially utilizing supplements such as folate and carnitine to address metabolic gaps.


Source: UCSD

Major depressive disorder affects 16.1 million adults in the United States and costs $210 billion annually. While the primary symptoms of depression are psychological, scientists and doctors have come to understand that depression is a complex disease with physical effects throughout the body.

For example, measuring markers of cellular metabolism has become an important approach to studying mental illnesses and developing new ways to diagnose, treat and prevent them.

Researchers at University of California San Diego School of Medicine have now advanced this line of work in a new study, revealing a connection between cellular metabolism and depression.

They found that people with depression and suicidal ideation had detectable compounds in their blood that could help identify individuals at higher risk of becoming suicidal. The researchers also found sex-based differences in how depression impacts cell metabolism.

The findings, published December 15, 2023, in Translational Psychiatry, could help personalize mental health care and potentially identify new targets for future drugs.

"Mental illnesses like depression have impacts and drivers well beyond the brain," said Robert Naviaux, MD, PhD, a professor in the Department of Medicine, Pediatrics and Pathology at UC San Diego School of Medicine.

"Prior to about ten years ago, it was difficult to study how the chemistry of the whole body influences our behavior and state of mind, but modern technologies like metabolomics are helping us listen in on cells' conversations in their native tongue, which is biochemistry."

While many people with depression experience improvement with psychotherapy and medication, some people's depression is treatment-refractory, meaning treatment has little to no impact. Suicidal thoughts are experienced by the majority of patients with treatment-refractory depression, and as many as 30% will attempt suicide at least once in their lifetime.

"We're seeing a significant rise in midlife mortality in the United States, and increased suicide incidence is one of many things driving that trend," said Naviaux. "Tools that could help us stratify people based on their risk of becoming suicidal could help us save lives."

The researchers analyzed the blood of 99 study participants with treatment-refractory depression and suicidal ideation, as well as an equal number of healthy controls.

Among the hundreds of different biochemicals circulating in the blood of these individuals, they found that five could be used as a biomarker to classify patients with treatment-refractory depression and suicidal ideation. However, which five could be used differed between men and women.

"If we have 100 people who either don't have depression or who have depression and suicidal ideation, we would be able to correctly identify 85 to 90 of those at greatest risk based on five metabolites in males and another five metabolites in females," said Naviaux.

"This could be important in terms of diagnostics, but it also opens up a broader conversation in the field about what's actually leading to these metabolic changes."

While there were clear differences in blood metabolism between males and females, some metabolic markers of suicidal ideation were consistent across both sexes. This included biomarkers for mitochondrial dysfunction, which occurs when the energy-producing structures of our cells malfunction.

"Mitochondria are some of the most important structures of our cells, and changed mitochondrial functions occur in a host of human diseases," added Naviaux.

Mitochondria produce ATP, the primary energy currency of all cells. ATP is also an important molecule for cell-to-cell communication, and the researchers hypothesize it is this function that is most dysregulated in people with suicidal ideation.

"When ATP is inside the cell, it acts like an energy source, but outside the cell it is a danger signal that activates dozens of protective pathways in response to some environmental stressor," said Naviaux.

"We hypothesize that suicide attempts may actually be part of a larger physiological impulse to stop a stress response that has become unbearable at the cellular level."

Because some of the metabolic deficiencies identified in the study were in compounds available as supplements, such as folate and carnitine, the researchers are interested in exploring whether depression treatment could be individualized with these compounds to fill in the metabolic gaps that stand in the way of recovery. Naviaux hastens to add that these supplements are not cures.

"None of these metabolites is a magic bullet that will completely reverse somebody's depression," said Naviaux.

"However, our results tell us that there may be things we can do to nudge the metabolism in the right direction to help patients respond better to treatment, and in the context of suicide, this could be just enough to prevent people from crossing that threshold."

In addition to suggesting a new approach to personalized medicine for depression, the research could help scientists discover new drugs that target mitochondrial dysfunction, which could have wide implications for human health in general.

"Many chronic diseases are comorbid with depression, because it can be extremely stressful to deal with an illness for years at a time," said Naviaux.

"If we can find ways to treat depression and suicidal ideation on a metabolic level, we may also help improve outcomes for the many diseases that lead to depression.

"Many chronic illnesses, such as post-traumatic stress disorder and chronic fatigue syndrome, are not lethal themselves unless they lead to suicidal thoughts and actions. If metabolomics can be used to identify the people at greatest risk, it could ultimately help us save more lives."

Co-authors include: Jane C. Naviaux, Lin Wang, Kefeng Li, Jonathan M. Monk and Sai Sachin Lingampelly at UC San Diego, Lisa A. Pan, Anna Maria Segreti, Kaitlyn Bloom, Jerry Vockley, David N. Finegold and David G. Peters at University of Pittsburgh School of Medicine, and Mark A. Tarnopolsky at McMaster University.

Author: Miles Martin
Source: UCSD
Contact: Miles Martin, UCSD
Image: The image is credited to Neuroscience News

Original Research: Open access. "Metabolic features of treatment-refractory major depressive disorder with suicidal ideation" by Robert Naviaux et al. Translational Psychiatry

Abstract

Metabolic features of treatment-refractory major depressive disorder with suicidal ideation

Peripheral blood metabolomics was used to gain chemical insight into the biology of treatment-refractory Major Depressive Disorder with suicidal ideation, and to identify individualized differences for personalized care.

The study cohort consisted of 99 patients with treatment-refractory major depressive disorder and suicidal ideation (trMDD-SI; n = 52 females and 47 males) and 94 age- and sex-matched healthy controls (n = 48 females and 46 males). The median age was 29 years (IQR 22–42). Targeted, broad-spectrum metabolomics measured 448 metabolites. Fibroblast growth factor 21 (FGF21) and growth differentiation factor 15 (GDF15) were measured as biomarkers of mitochondrial dysfunction.

The diagnostic accuracy of plasma metabolomics was over 90% (95% CI: 0.80–1.0) by area under the receiver operating characteristic (AUROC) curve analysis. Over 55% of the metabolic impact in males and 75% in females came from abnormalities in lipids.
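The AUROC statistic reported here has a concrete interpretation: it is the probability that a randomly chosen patient receives a higher classifier score than a randomly chosen control. As a minimal, self-contained sketch (with made-up illustrative scores, not the study's data), it can be computed directly from that definition:

```python
def auroc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U) identity:
    the fraction of positive/negative pairs where the positive outscores
    the negative (ties count as half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative only: 1 = trMDD-SI, 0 = healthy control; higher score = higher predicted risk
labels = [1, 1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.7, 0.4, 0.6, 0.3, 0.2, 0.1]
print(auroc(labels, scores))  # → 0.9375
```

An AUROC of 0.9375 here means that in 15 of the 16 patient/control pairings, the patient scores higher; a value above 0.9, as in the study, indicates strong separation between the groups.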

Modified purines and pyrimidines from tRNA, rRNA, and mRNA turnover were increased in the trMDD-SI group. FGF21 was increased in both males and females. Increased lactate, glutamate, and saccharopine, and decreased cystine provided evidence of reductive stress. Seventy-five percent of the metabolomic abnormalities found were individualized.

Personalized deficiencies in CoQ10, flavin adenine dinucleotide (FAD), citrulline, lutein, carnitine, or folate were found. Pathways regulated by mitochondrial function dominated the metabolic signature.

Peripheral blood metabolomics identified mitochondrial dysfunction and reductive stress as common denominators in suicidal ideation associated with treatment-refractory major depressive disorder.

Individualized metabolic differences were found that may help with personalized management.


Brain Imaging Reveals Altered Brain Connectivity in Autism – Neuroscience News

Summary: Researchers used advanced brain imaging and machine learning to uncover altered functional brain connectivity in individuals with Autism Spectrum Disorder (ASD), acknowledging the diversity within the disorder.

The research distinguishes between shared and individual-specific connectivity patterns in ASD, revealing both common and unique brain alterations. This approach marks a significant shift from group-based analysis to a more personalized understanding of ASD.

The findings open pathways for tailored treatments, addressing the unique needs of individuals with ASD.


Source: Elsevier

What happens in the brain to cause many neurodevelopmental disorders, including autism spectrum disorder (ASD), remains a mystery. A major limitation for researchers is the lack of biomarkers, or objective biological outputs, for these disorders, and in the case of ASD, for specific subtypes of disease.

Now, a new study uses brain imaging and machine learning to identify altered functional brain connectivity (FC) in people with ASD, importantly taking into consideration differences between individuals.

The study appears in Biological Psychiatry, published by Elsevier.

John Krystal, MD, Editor of Biological Psychiatry, said of the work, "ASD has long been known to be a highly heterogeneous condition. While genetic studies have provided some clues to different causes of the disorder in different groups of ASD patients, it has been challenging to separate subtypes of ASD using other types of biomarkers, such as brain imaging."

Brain imaging scans are also extremely heterogeneous, varying greatly from one individual to another, making such data difficult to use as a biomarker. Previous studies have identified both increased and decreased FC in people with ASD compared to healthy controls, but because those studies focused on groups of participants, they failed to appreciate heterogeneous autism-related atypical FC.

In the new study, the researchers showed that, despite this heterogeneity, brain imaging subtypes could be distinguished among participants with ASD.

Xujun Duan, PhD, senior author of the work at the University of Electronic Science and Technology of China, explained, "In this study, we used a technique to project altered FC of autism onto two subspaces: an individual-shared subspace, which represents the altered connectivity pattern shared across autism, and an individual-specific subspace, which represents the remaining individual characteristics after eliminating the individual-shared altered connectivity patterns."

The investigators found that individual-shared subspace altered FC of autism reflects differences at the group level, while individual-specific subspace altered FC represents individual variation in autistic traits. These findings suggest the need to move beyond group effects and to capture and capitalize on individual-specific brain features when dissecting clinical heterogeneity.
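The study's actual method (common orthogonal basis extraction) is more involved, but the core idea of splitting a subject's altered-connectivity vector into a shared-subspace projection plus an individual-specific residual can be sketched with plain linear algebra. This toy example (made-up numbers, and an assumed orthonormal shared basis) is not the paper's algorithm, only the decomposition it rests on:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def project(x, basis):
    """Project x onto the span of an orthonormal basis (a list of vectors):
    sum of (x . b_k) * b_k over the basis vectors."""
    coeffs = [dot(x, b) for b in basis]
    return [sum(c * b[i] for c, b in zip(coeffs, basis)) for i in range(len(x))]

# Toy altered-connectivity vector for one subject (4 "edges")
x = [2.0, 1.0, 0.0, 3.0]

# Hypothetical orthonormal basis spanning the "individual-shared" subspace
shared_basis = [[1.0, 0.0, 0.0, 0.0],
                [0.0, 1.0, 0.0, 0.0]]

shared = project(x, shared_basis)               # component shared across the group
specific = [a - b for a, b in zip(x, shared)]   # individual-specific residual
print(shared)    # → [2.0, 1.0, 0.0, 0.0]
print(specific)  # → [0.0, 0.0, 0.0, 3.0]
```

The residual (`specific`) is exactly what remains after the shared pattern is removed, which is the part the study found most predictive of individual symptom severity.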

Dr. Krystal added, "Part of the challenge of finding subtypes of ASD has been the enormous complexity of neuroimaging data. This study uses a sophisticated computational approach to identify aspects of brain circuit alterations that are common to ASD and others that are associated with particular ASD traits.

"This type of strategy may help to more effectively guide the development of personalized treatments for ASD, i.e., treatments that meet the specific needs of particular patients."

Author: Eileen Leahy
Source: Elsevier
Contact: Eileen Leahy, Elsevier
Image: The image is credited to Neuroscience News

Original Research: Open access. "Disentangling the Individual-Shared and Individual-Specific Subspace of Altered Brain Functional Connectivity in Autism Spectrum Disorder" by Xujun Duan et al. Biological Psychiatry

Abstract

Disentangling the Individual-Shared and Individual-Specific Subspace of Altered Brain Functional Connectivity in Autism Spectrum Disorder

Despite considerable effort toward understanding the neural basis of autism spectrum disorder (ASD) using case-control analyses of resting-state functional magnetic resonance imaging data, findings are often not reproducible, largely due to biological and clinical heterogeneity among individuals with ASD. Thus, exploring the individual-shared and individual-specific altered functional connectivity (AFC) in ASD is important to understand this complex, heterogeneous disorder.

We considered 254 individuals with ASD and 295 typically developing individuals from the Autism Brain Imaging Data Exchange to explore the individual-shared and individual-specific subspaces of AFC. First, we computed AFC matrices of individuals with ASD compared with typically developing individuals. Then, common orthogonal basis extraction was used to project AFC of ASD onto 2 subspaces: an individual-shared subspace, which represents altered connectivity patterns shared across ASD, and an individual-specific subspace, which represents the remaining individual characteristics after eliminating the individual-shared altered connectivity patterns.

Analysis yielded 3 common components spanning the individual-shared subspace. Common components were associated with differences of functional connectivity at the group level. AFC in the individual-specific subspace improved the prediction of clinical symptoms. The default mode network-related and cingulo-opercular network-related magnitudes of AFC in the individual-specific subspace were significantly correlated with symptom severity in social communication deficits and restricted, repetitive behaviors in ASD.

Our study decomposed AFC of ASD into individual-shared and individual-specific subspaces, highlighting the importance of capturing and capitalizing on individual-specific brain connectivity features for dissecting heterogeneity. Our analysis framework provides a blueprint for parsing heterogeneity in other prevalent neurodevelopmental conditions.


AI Revolutionizes Neuron Tracking in Moving Animals – Neuroscience News

Summary: Researchers developed an AI-based method to track neurons in moving and deforming animals, a significant advancement in neuroscience research. This convolutional neural network (CNN) method overcomes the challenge of tracking brain activity in organisms like worms, whose bodies constantly change shape.

By employing targeted augmentation, the AI significantly reduces the need for manual image annotation, streamlining the neuron identification process. Tested on the roundworm Caenorhabditis elegans, this technology has not only increased analysis efficiency but also deepened insights into complex neuronal behaviors.


Source: EPFL

Recent advances allow imaging of neurons inside freely moving animals. However, to decode circuit activity, these imaged neurons must be computationally identified and tracked. This becomes particularly challenging when the brain itself moves and deforms inside an organism's flexible body, e.g., in a worm. Until now, the scientific community has lacked the tools to address the problem.

Now, a team of scientists from EPFL and Harvard has developed a pioneering AI method to track neurons inside moving and deforming animals. The study, now published in Nature Methods, was led by Sahand Jamal Rahi at EPFL's School of Basic Sciences.

The new method is based on a convolutional neural network (CNN), a type of AI trained to recognize and understand patterns in images. This involves a process called convolution, which looks at small parts of the picture at a time, such as edges, colors, or shapes, and then combines all that information to make sense of it and to identify objects or patterns.
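The "small parts of the picture at a time" idea can be shown with a tiny, dependency-free sketch (not the study's network, just the basic operation a CNN layer performs): slide a small kernel over an image and sum the elementwise products at each position. A kernel whose left and right columns have opposite signs responds exactly where a vertical edge sits.

```python
def conv2d(image, kernel):
    """Valid-mode 2D convolution (strictly, cross-correlation, as in most
    CNN libraries): slide the kernel over the image and, at each position,
    sum the elementwise products."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(out_w)] for i in range(out_h)]

# A tiny image with a vertical edge: dark left half, bright right half
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]

# A vertical-edge kernel: responds where adjacent columns differ
kernel = [[-1, 1],
          [-1, 1]]

print(conv2d(image, kernel))  # → [[0, 2, 0], [0, 2, 0]]
```

The output is zero over the flat regions and large exactly at the edge; a CNN learns many such kernels and stacks their outputs into deeper layers.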

The problem is that, to identify and track neurons in a movie of an animal's brain, many images have to be labeled by hand, because the animal looks very different across time due to its many body deformations. Given the diversity of the animal's postures, generating enough annotations manually to train a CNN can be daunting.

To address this, the researchers developed an enhanced CNN featuring targeted augmentation. The innovative technique automatically synthesizes reliable annotations for reference out of only a limited set of manual annotations. The result is that the CNN effectively learns the internal deformations of the brain and then uses them to create annotations for new postures, drastically reducing the need for manual annotation and double-checking.

The new method is versatile, able to identify neurons whether they are represented in images as individual points or as 3D volumes. The researchers tested it on the roundworm Caenorhabditis elegans, whose 302 neurons have made it a popular model organism in neuroscience.

Using the enhanced CNN, the scientists measured activity in some of the worm's interneurons (neurons that bridge signals between other neurons). They found that the interneurons exhibit complex behaviors, for example changing their response patterns when exposed to different stimuli, such as periodic bursts of odors.

The team have made their CNN accessible, providing a user-friendly graphical user interface that integrates targeted augmentation, streamlining the process into a comprehensive pipeline, from manual annotation to final proofreading.

"By significantly reducing the manual effort required for neuron segmentation and tracking, the new method increases analysis throughput threefold compared to full manual annotation," says Sahand Jamal Rahi.

The breakthrough has the potential to accelerate research in brain imaging and deepen our understanding of neural circuits and behaviors.

Other contributors

Swiss Data Science Center

Author: Nik Papageorgiou
Source: EPFL
Contact: Nik Papageorgiou, EPFL
Image: The image is credited to Neuroscience News

Original Research: The findings will appear in Nature Methods


Mirror Insight: Mice Show Glimpses of Self-Recognition – Neuroscience News

Summary: Mice display behavior akin to self-recognition when viewing their reflections in mirrors. This behavior emerges under specific conditions: familiarity with mirrors, socialization with similar-looking mice, and visible markings on their fur.

The study also identifies a subset of neurons in the hippocampus that are crucial for this self-recognition-like behavior. These findings provide valuable insights into the neural mechanisms behind self-recognition, a previously enigmatic aspect of neurobehavioral research.


Source: Cell Press

Researchers report December 5 in the journal Neuron that mice display behavior that resembles self-recognition when they see themselves in the mirror. When the researchers marked the foreheads of black-furred mice with a spot of white ink, the mice spent more time grooming their heads in front of the mirror, presumably to try to wash away the ink spot.

However, the mice only showed this self-recognition-like behavior if they were already accustomed to mirrors, if they had socialized with other mice who looked like them, and if the ink spot was relatively large.

The team identified a subset of neurons in the hippocampus that are involved in developing and storing this visual self-image, providing a first glimpse of the neural mechanisms behind self-recognition, something that was previously a black box in neurobehavioral research.

"To form episodic memory, for example, of events in our daily life, brains form and store information about where, what, when, and who, and the most important component is self-information or status," says neuroscientist and senior author Takashi Kitamura of University of Texas Southwestern Medical Center.

"Researchers usually examine how the brain encodes or recognizes others, but the self-information aspect is unclear."

The researchers used a mirror test to investigate whether mice could detect a change in their own appearance, in this case, a dollop of ink on their foreheads. Because the ink also provided a tactile stimulus, the researchers tested the black-furred mice with both black and white ink.

Though the mirror test was originally developed to test consciousness in different species, the authors note that their experiments only show that mice can detect a change in their own appearance, but this does not necessarily mean that they are self-aware.

They found that mice could indeed detect changes to their appearance, but only under certain conditions. Mice who were familiar with mirrors spent significantly more time grooming their heads (but not other parts of their bodies) in front of the mirror when they were marked with dollops of white ink that were 0.6 cm² or 2 cm².

However, the mice did not engage in increased head grooming when the ink was black (the same color as their fur) or when the ink mark was small (0.2 cm²), even if the ink was white, and mice who were not habituated to mirrors before the ink test did not display increased head grooming in any scenario.

"The mice required significant external sensory cues to pass the mirror test; we have to put a lot of ink on their heads, and then the tactile stimulus coming from the ink somehow enables the animal to detect the ink on their heads via a mirror reflection," says first author Jun Yokose of University of Texas Southwestern Medical Center. "Chimps and humans don't need any of that extra sensory stimulus."

Using gene expression mapping, the researchers identified a subset of neurons in the ventral hippocampus that were activated when the mice recognized themselves in the mirror. When the researchers selectively rendered these neurons non-functional, the mice no longer displayed the mirror-and-ink-induced grooming behavior.

A subset of these self-responding neurons also became activated when the mice observed other mice of the same strain (and therefore similar physical appearance and fur color), but not when they observed a different strain of mouse that had white fur.

Because previous studies in chimpanzees have suggested that social experience is required for mirror self-recognition, the researchers also tested mice who had been socially isolated after weaning. These socially isolated mice did not display increased head grooming behavior during the ink test, and neither did black-furred mice that were reared alongside white-furred mice.

The gene expression analysis also showed that socially isolated mice did not develop self-responding neuron activity in the hippocampus, and neither did the black-furred mice that were reared by white-furred mice, suggesting that mice need to have social experiences alongside other similar-looking mice in order to develop the neural circuits required for self-recognition.

"A subset of these self-responding neurons was also reactivated when we exposed the mice to other individuals of the same strain," says Kitamura.

"This is consistent with previous human literature showing that some hippocampal cells fire not only when a person is looking at themselves, but also when they look at familiar people, like a parent."

Next, the researchers plan to disentangle the importance of visual and tactile stimuli by testing whether mice can recognize changes in their reflection in the absence of a tactile stimulus, perhaps by using technology similar to the filters on social media apps that allow people to give themselves puppy-dog faces or bunny ears.

They also plan to study other brain regions that might be involved in self-recognition and to investigate how the different regions communicate and integrate information.

"Now that we have this mouse model, we can manipulate or monitor neural activity to comprehensively investigate the neural circuit mechanisms behind how self-recognition-like behavior is induced in mice," says Yokose.

Funding: This research was supported by the Endowed Scholar Program, the Brain & Behavior Research Foundation, the Daiichi Sankyo Foundation of Life Science, and Uehara Memorial Foundation.

Author: Kristopher Benke
Source: Cell Press
Contact: Kristopher Benke, Cell Press
Image: The image is credited to Neuroscience News

Original Research: Open access. "Visuotactile integration facilitates mirror-induced self-directed behavior through activation of hippocampal neuronal ensembles in mice" by Takashi Kitamura et al. Neuron

Abstract

Visuotactile integration facilitates mirror-induced self-directed behavior through activation of hippocampal neuronal ensembles in mice

Remembering the visual features of oneself is critical for self-recognition. However, the neural mechanisms of how the visual self-image is developed remain unknown because of the limited availability of behavioral paradigms in experimental animals.

Here, we demonstrate a mirror-induced self-directed behavior (MSB) in mice, resembling visual self-recognition. Mice displayed increased mark-directed grooming to remove ink placed on their heads when an ink-induced visual-tactile stimulus contingency occurred. MSB required mirror habituation and social experience.

The chemogenetic inhibition of dorsal or ventral hippocampal CA1 (vCA1) neurons attenuated MSB. Especially, a subset of vCA1 neurons activated during the mirror exposure was significantly reactivated during re-exposure to the mirror and was necessary for MSB.

The self-responding vCA1 neurons were also reactivated when mice were exposed to a conspecific of the same strain.

These results suggest that visual self-image may be developed through social experience and mirror habituation and stored in a subset of vCA1 neurons.


AI Vulnerabilities Exposed: Adversarial Attacks More Common and Dangerous Than Expected – Neuroscience News

Summary: A new study reveals that artificial intelligence systems are more susceptible to adversarial attacks than previously believed, making them vulnerable to manipulation that can lead to incorrect decisions.

Researchers found that adversarial vulnerabilities are widespread in AI deep neural networks, raising concerns about their use in critical applications. To assess these vulnerabilities, the team developed QuadAttacK, a software that can test neural networks for susceptibility to adversarial attacks.

The findings highlight the need to enhance AI robustness against such attacks, particularly in applications with potential human life implications.


Source: North Carolina State University

Artificial intelligence tools hold promise for applications ranging from autonomous vehicles to the interpretation of medical images. However, a new study finds these AI tools are more vulnerable than previously thought to targeted attacks that effectively force AI systems to make bad decisions.

At issue are so-called adversarial attacks, in which someone manipulates the data being fed into an AI system in order to confuse it. For example, someone might know that putting a specific type of sticker at a specific spot on a stop sign could effectively make the stop sign invisible to an AI system. Or a hacker could install code on an X-ray machine that alters the image data in a way that causes an AI system to make inaccurate diagnoses.

"For the most part, you can make all sorts of changes to a stop sign, and an AI that has been trained to identify stop signs will still know it's a stop sign," says Tianfu Wu, co-author of a paper on the new work and an associate professor of electrical and computer engineering at North Carolina State University.

"However, if the AI has a vulnerability, and an attacker knows the vulnerability, the attacker could take advantage of it and cause an accident."

The new study from Wu and his collaborators focused on determining how common these sorts of adversarial vulnerabilities are in AI deep neural networks. They found that the vulnerabilities are much more common than previously thought.

"What's more, we found that attackers can take advantage of these vulnerabilities to force the AI to interpret the data to be whatever they want," Wu says.

"Using the stop sign example, you could make the AI system think the stop sign is a mailbox, or a speed limit sign, or a green light, and so on, simply by using slightly different stickers, or whatever the vulnerability is.

"This is incredibly important, because if an AI system is not robust against these sorts of attacks, you don't want to put the system into practical use, particularly for applications that can affect human lives."

To test the vulnerability of deep neural networks to these adversarial attacks, the researchers developed a piece of software called QuadAttacK. The software can be used to test any deep neural network for adversarial vulnerabilities.

Basically, if you have a trained AI system and you test it with clean data, the AI system will behave as predicted. QuadAttacK watches these operations and learns how the AI is making decisions related to the data. This allows QuadAttacK to determine how the data could be manipulated to fool the AI.

QuadAttacK then begins sending manipulated data to the AI system to see how the AI responds. If QuadAttacK has identified a vulnerability, it can quickly make the AI see whatever QuadAttacK wants it to see.
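QuadAttacK itself uses quadratic programming to craft ordered top-K attacks, but the underlying adversarial idea can be illustrated much more simply. The sketch below (a hypothetical two-class linear "classifier," not QuadAttacK or any real vision model) applies one signed-gradient step, in the spirit of the classic fast gradient sign method, so that a small input change flips the predicted class:

```python
def scores(x, W, b):
    """Class scores for a tiny linear classifier: s_k = W[k] . x + b[k]."""
    return [sum(wi * xi for wi, xi in zip(row, x)) + bk
            for row, bk in zip(W, b)]

def fgsm_step(x, W, b, true_class, target_class, eps):
    """One signed-gradient step pushing the score margin toward the target
    class. For a linear model, the gradient of (s_target - s_true) with
    respect to x is exactly W[target] - W[true]."""
    grad = [wt - ws for wt, ws in zip(W[target_class], W[true_class])]
    return [xi + eps * (1 if g > 0 else -1 if g < 0 else 0)
            for xi, g in zip(x, grad)]

# Hypothetical "stop sign (0) vs. speed-limit sign (1)" linear model
W = [[1.0, 0.0],
     [0.0, 1.0]]
b = [0.0, 0.0]

x = [1.0, 0.6]                        # clean input: classified as class 0
adv = fgsm_step(x, W, b, 0, 1, eps=0.5)

print(scores(x, W, b))    # class 0 wins on the clean input
print(scores(adv, W, b))  # class 1 wins after a small perturbation
```

Real attacks on deep networks iterate steps like this under a perturbation budget; the point is only that a small, targeted change in the input can reorder the class scores.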

In proof-of-concept testing, the researchers used QuadAttacK to test four deep neural networks: two convolutional neural networks (ResNet-50 and DenseNet-121) and two vision transformers (ViT-B and DEiT-S). These four networks were chosen because they are in widespread use in AI systems around the world.

"We were surprised to find that all four of these networks were very vulnerable to adversarial attacks," Wu says. "We were particularly surprised at the extent to which we could fine-tune the attacks to make the networks see what we wanted them to see."

The research team has made QuadAttacK publicly available so that the research community can use it to test neural networks for vulnerabilities. The program can be found here: https://thomaspaniagua.github.io/quadattack_web/.

"Now that we can better identify these vulnerabilities, the next step is to find ways to minimize them," Wu says. "We already have some potential solutions, but the results of that work are still forthcoming."

The paper, "QuadAttacK: A Quadratic Programming Approach to Learning Ordered Top-K Adversarial Attacks," will be presented Dec. 16 at the Thirty-seventh Conference on Neural Information Processing Systems (NeurIPS 2023), which is being held in New Orleans, La. First author of the paper is Thomas Paniagua, a Ph.D. student at NC State. The paper was co-authored by Ryan Grainger, a Ph.D. student at NC State.

Funding: The work was done with support from the U.S. Army Research Office, under grants W911NF1810295 and W911NF2210010; and from the National Science Foundation, under grants 1909644, 2024688 and 2013451.

Author: Matt Shipman
Source: North Carolina State University
Contact: Matt Shipman, North Carolina State University
Image: The image is credited to Neuroscience News

Original Research: The findings will be presented at the Thirty-seventh Conference on Neural Information Processing Systems (NeurIPS)

Continued here:
AI Vulnerabilities Exposed: Adversarial Attacks More Common and Dangerous Than Expected - Neuroscience News

Mapping Ketamine’s Impact on the Brain – Neuroscience News

Summary: A study reveals that repeated use of ketamine leads to structural changes in the brain's dopamine system, emphasizing the need for targeted ketamine therapies.

The research suggests that specific brain regions should be addressed to minimize unintended effects on other dopamine areas. Repeated ketamine exposure decreases dopamine neurons linked to mood regulation and increases dopamine neurons related to metabolism and basic functions.

These findings may explain ketamine's potential in treating eating disorders and the dissociative behavioral effects observed in users. The study paves the way for improved ketamine applications in clinical settings.

Key Facts:

Source: Columbia University

Ketamine, an anesthetic also known for its illicit use as a recreational drug, has undergone a thorough reputational rehabilitation in recent years as the medical establishment has begun to recognize its wide-ranging therapeutic effects.

The drug is increasingly used for a range of medical purposes, including as a painkiller alternative to opioids, and as a therapy for treatment-resistant depression.

In a new study published in the journal Cell Reports, Columbia biologists and biomedical engineers mapped ketamine's effects on the brains of mice, and found that repeated use over extended periods of time leads to widespread structural changes in the brain's dopamine system.

The findings bolster the case for developing ketamine therapies that target specific areas of the brain, rather than administering doses that wash the entire brain in ketamine.

"Instead of bathing the entire brain in ketamine, as most therapies now do, our whole-brain mapping data indicates that a safer approach would be to target specific parts of the brain with it, so as to minimize unintended effects on other dopamine regions of the brain," said Raju Tomer, the senior author of the paper.

The study found that repeated ketamine exposure leads to a decrease in dopamine neurons in regions of the midbrain that are linked to regulating mood, as well as an increase in dopamine neurons in the hypothalamus, which regulates the body's basic functions like metabolism and homeostasis.

The former finding, that ketamine decreases dopamine in the midbrain, may indicate why long-term abuse of ketamine could cause users to exhibit symptoms similar to those seen in schizophrenia.

The latter finding, that ketamine increases dopamine in the parts of the brain that regulate metabolism, on the other hand, may help explain why it shows promise in treating eating disorders.

The researchers' highly detailed data also enabled them to track how ketamine affects dopamine networks across the brain. They found that ketamine reduced the density of dopamine axons, or nerve fibers, in the areas of the brain responsible for our hearing and vision, while increasing dopamine axons in the brain's cognitive centers. These intriguing findings may help explain the dissociative behavioral effects observed in individuals exposed to ketamine.

"The restructuring of the brain's dopamine system that we see after repeated ketamine use may be linked to cognitive behavioral changes over time," said Malika Datta, a co-author of the paper.

Most studies of ketamine's effects on the brain to date have looked at acute exposure, that is, how one dose affects the brain in the immediate term. For this study, researchers examined repeated daily exposure over the course of up to ten days. Statistically significant alterations to the brain's dopamine makeup were only detectable after ten days of daily ketamine use.

The researchers assessed the effects of repeated exposure to the drug at two doses: one analogous to the dose used to model depression treatment in mice, and another closer to the dose that induces anesthesia. The drug's effects on the dopamine system were visible at both doses.

"The study is charting a new technological frontier in how to conduct high-resolution studies of the entire brain," said Yannan Chen, a co-author of the paper. It is the first successful attempt to map changes induced by chronic ketamine exposure at what is known as sub-cellular resolution, in other words, down to the level of seeing ketamine's effects on parts of individual cells.

Most sub-cellular studies of ketamine's effects conducted to date have been hypothesis-driven investigations of one area of the brain that researchers targeted because they believed it might play an important role in how the brain metabolizes the drug. This study is the first sub-cellular study to examine the entire brain without first forming such a hypothesis.

Bradley Miller, a Columbia psychiatrist and neuroscientist who focuses on depression, said: "Ketamine rapidly resolves depression in many patients with treatment-resistant depression, and it is being investigated for longer-term use to prevent the relapse of depression."

"This study reveals how ketamine rewires the brain with repeated use. This is an essential step for developing targeted treatments that effectively treat depression without some of the unwanted side effects of ketamine."

The research was supported by the National Institutes of Health (NIH) and the National Institute of Mental Health (NIMH). The paper's lead authors are Malika Datta and Yannan Chen, who completed their research in Raju Tomer's lab at Columbia. Datta is now a postdoctoral fellow at Yale.

"This study gives us a deeper brain-wide perspective of how ketamine functions that we hope will contribute to improved uses of this highly promising drug in various clinical settings, as well as help minimize its recreational abuse. More broadly, the study demonstrates that the same type of neurons located in different brain regions can be affected differently by the same drug," said Tomer.

Author: Christopher Shea
Source: Columbia University
Contact: Christopher Shea, Columbia University
Image: The image is credited to Neuroscience News

Original Research: The findings will be published in Cell Reports

See the rest here:
Mapping Ketamine's Impact on the Brain - Neuroscience News

Stickleback Fish Reveal Insights into Animal Decision-Making – Neuroscience News

Summary: A new study provides significant insights into how animals, specifically three-spined stickleback fish, make decisions under competing demands. The study explored the fish's behaviors during the breeding season, when males must simultaneously defend territory, court females, and care for offspring.

By exposing male sticklebacks to various stimuli and analyzing their behavioral responses and brain gene expression, the researchers discovered a complex interaction between territorial defense and courtship, with males generally prioritizing defense.

This research not only sheds light on animal decision-making processes but also suggests ancient mechanisms driving complex decision-making across many taxa.

Key Facts:

Source: University of Illinois

How do animals make decisions when faced with competing demands, and how have decision-making processes evolved over time?

In a recent publication in Biology Letters, Tina Barbasch, a postdoctoral researcher at the Carl R. Woese Institute for Genomic Biology, and Alison Bell (GNDP), a professor in the Department of Ecology, Evolution and Behavior, explored these questions using three-spined stickleback fish (Gasterosteus aculeatus).

Whether you are in school, working, raising children, managing a social life, or just trying to relax for a moment, managing multiple responsibilities at once can quickly become overwhelming. You may find yourself wondering how much simpler life would be if you were a fish floating along a river or a hawk soaring through the sky.

Yet, animals also face the burdens of multitasking, whether it be searching for their next meal while avoiding becoming someone else's next meal, or attracting a mate while defending their territory.

"During my PhD, I studied parental care in clownfish, and how they decide how much care to provide for their offspring," says Barbasch.

"This requires the integration of many sources of social and environmental information. Recently, I have become interested in understanding the mechanisms underlying how animals make decisions and integrate different sources of information."

Despite the importance of decision-making for an animal's fitness, the mechanisms that shape decision-making are not well understood. Stickleback are a powerful model for investigating these questions because of their complex life history and reproductive behavior.

During the breeding season, male sticklebacks establish territories to build nests to attract females. Males must simultaneously defend their territories from other males, court females that enter their territory with performative swimming motions, called zig-zags, and ultimately provide care for offspring if they can successfully court a female.

"This study was inspired by an experiment where we looked at brain gene expression in male three-spined stickleback during parental care or when defending their territory," explained Bell.

"We found that the same genes were involved in both experiments, but in opposite directions: genes turned on in one condition were turned off in the other. This idea that the brain might be using the same molecular machinery, but in opposite ways, could have major implications for the evolution of decision making."

To explore the underlying molecular mechanisms of decision making, Barbasch exposed male stickleback to one of three stimuli: a female stickleback (courtship treatment), another male stickleback (territorial intruder treatment), or both a male and a female stickleback (trade-off treatment).

Some male stickleback were left alone as a control. Aggressive behaviors (biting) and courtship behaviors (zig-zags) were quantified, and then the brains of the male stickleback were dissected to look at gene expression using RNA sequencing.

Barbasch found that, when faced with a trade-off, males generally prioritized territorial defense over courtship. There was also substantial variation across males in how they responded, suggesting that there might be different strategies that males employ when faced with a trade-off.

Furthermore, the gene expression results identified groups of genes that were differentially expressed across each of the experimental treatments relative to a control. Of particular interest are the genes that are only present in the trade-off treatment, because they suggest that males have a unique molecular response when faced with conflicting demands.
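The "genes only present in the trade-off treatment" fall out of simple set arithmetic over the differentially expressed (DE) gene lists from each treatment-versus-control comparison. A purely illustrative sketch (the gene names are made up, not from the study):

```python
# DE gene sets from each comparison against the control group (hypothetical)
de_courtship = {"geneA", "geneB", "geneC"}   # courtship treatment
de_intruder  = {"geneB", "geneD"}            # territorial intruder treatment
de_tradeoff  = {"geneB", "geneE", "geneF"}   # trade-off treatment

# Genes DE in the trade-off condition but in neither single-stimulus condition
unique_to_tradeoff = de_tradeoff - (de_courtship | de_intruder)
print(sorted(unique_to_tradeoff))   # ['geneE', 'geneF']
```

Genes in this unique set are the candidates for a distinct molecular response to conflicting demands.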

"We performed gene ontology analysis on these trade-off genes to look into what the identity and function of these genes might be," Barbasch explains. "Preliminary results suggest the trade-off genes may be related to the dopamine response pathway, which modulates reward and motivation in the brain, or neurogenesis, which is important for cognition."

Ultimately, these findings highlight the importance of exploring the molecular basis of animal behavior, as Bell outlines: "Animals are living really complicated lives, across many taxa. This suggests that the mechanisms that are driving complex decision-making are probably really ancient, and animals have been managing complex decisions for a long time."

Barbasch's study also sets the foundation for a wide range of exciting follow-up studies. She has already started to explore the behavioral and molecular responses of stickleback to other trade-offs, including those involving predation risk, foraging, and parental care.

She also plans on expanding her molecular toolkit by quantifying gene expression in finer detail using single-cell RNA sequencing and weighted gene co-expression network analysis, which helps capture gene function by identifying networks of genes with related patterns of expression.

So, the next time you notice an animal doing something, think a bit deeper about their day-to-day life, and how they are finding a way to manage all their responsibilities.

Author: Nicholas Vasi
Source: University of Illinois
Contact: Nicholas Vasi, University of Illinois
Image: The image is credited to Neuroscience News

Original Research: Open access. A distinct neurogenomic response to a trade-off between social challenge and opportunity in male sticklebacks (Gasterosteus aculeatus) by Alison Bell et al. Biology Letters

Abstract

A distinct neurogenomic response to a trade-off between social challenge and opportunity in male sticklebacks (Gasterosteus aculeatus)

Animals frequently make adaptive decisions about what to prioritize when faced with multiple, competing demands simultaneously.

However, the proximate mechanisms of decision-making in the face of competing demands are not well understood.

We explored this question using brain transcriptomics in a classic model system: threespined sticklebacks, where males face conflict between courtship and territorial defence. We characterized the behaviour and brain gene expression profiles of males confronted by a trade-off between courtship and territorial defence by comparing them to males not confronted by this trade-off.

When faced with the trade-off, males behaviourally prioritized defence over courtship, and this decision was reflected in their brain gene expression profiles. A distinct set of genes and biological processes was recruited in the brain when males faced a trade-off and these responses were largely non-overlapping across two brain regions.

Combined, these results raise new questions about the interplay between the neural and molecular mechanisms involved in decision-making.

See the original post:
Stickleback Fish Reveal Insights into Animal Decision-Making - Neuroscience News

The Growing Synergy of AI and Neuroscience in Decoding the Human Brain – Securities.io

Artificial intelligence (AI) has been the talk of the town lately, with chatbots like OpenAI's ChatGPT, Google's Bard, and Elon Musk's Grok gaining a lot of traction. However, AI isn't as new as these chatbots; interest in AI dates back to 1950, when scientist Alan Turing proposed a test of machine intelligence called "The Imitation Game" in his paper "Computing Machinery and Intelligence."

"Can machines think?" asks Turing in his paper, proposing the Turing Test, in which a human interrogator tries to distinguish between a computer's and a human's text responses.

Since then, advancements in technology have led to more sophisticated AI systems that have been used across different fields, including healthcare and the understanding and treatment of the most complex human organ, the brain.


Broadly speaking, AI systems reason, learn, and perform tasks commonly associated with human cognitive functions, such as identifying patterns and interpreting speech by processing massive amounts of data.

AI is basically a set of technologies that enable computers to perform a variety of advanced functions. The backbone of innovation in modern computing, AI encompasses different disciplines, including:

These AI models, which simulate cognitive processes and aid in complex cognitive tasks such as language translation and image recognition, are inspired by biological neural networks: complex systems of interconnected neurons that help 'train' machines to make sense of speech, images, and patterns.
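The artificial-neuron idea behind these networks fits in a few lines: weighted inputs, a bias, and a nonlinear activation. This is a generic textbook sketch, not any particular product's model:

```python
import math

# One artificial neuron: a weighted sum of inputs plus a bias,
# squashed through a sigmoid activation. Networks of many such
# units, loosely inspired by biological neurons, form neural networks.
def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# With hand-picked weights, a single neuron already behaves like an AND gate:
w, b = [10.0, 10.0], -15.0
print(round(neuron([1, 1], w, b)))   # 1 (fires only when both inputs are on)
print(round(neuron([1, 0], w, b)))   # 0
```

Training replaces the hand-picked weights with values learned from data, which is what makes these systems useful at scale.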

The intricate and intelligent human brain has long presented a challenge to scientists seeking to unlock possibilities for human augmentation. However, while AI has been harnessed to create the likes of Apple's Siri, Amazon's Alexa, and IBM's Watson, the truly transformative impact will only be achieved when artificial neural networks are augmented by native human intelligence, an outcome of centuries of survival.

Although computers still can't match the complete flexibility of humans, there are programs that manage to execute specific tasks, with the scope of AI's applications expanding daily. This technological progress, coupled with advancements in science, has notably led to the utilization of AI in medical diagnosis and treatment.

By analyzing large amounts of patient data from multiple sources, AI helps healthcare providers build a complete picture of a patient's health, enabling more accurate predictions and more informed decisions about patient care. This, in turn, helps detect potential health problems earlier, before they become life-threatening. Moreover, by using AI, healthcare providers can automate routine tasks, allowing them to focus on more complex patient care.


Groundbreaking research in neuroscience has led to the development of advanced brain imaging techniques, including:

Concurrently, AI algorithms, particularly in machine learning and deep learning, have become more sophisticated, resulting in an intersection of the two fields. Such synchronization is enabling scientists to analyze and understand brain data at an unprecedented scale.

The intersection of AI and neuroscience, the field focusing on the nervous system and brain, is particularly evident in the realm of data analysis. Presently, AI empowers scientists and researchers to map brain regions with unprecedented accuracy, made possible by advancements that allow intricate patterns in brain data to be classified and correlated. This collaboration has also paved the way for researchers to better comprehend neural pathways.

With the help of AI, medical diagnostics could be improved in prediction accuracy, speed, and efficiency. AI-powered brain-imaging studies have found subtle changes in brain structures that appear before clinical symptoms emerge, which holds enormous potential for early detection and intervention, potentially revolutionizing our approach to neurodegenerative disorders.

For instance, late last month, researchers leveraged AI to analyze specialized brain MRI scans of individuals with attention-deficit/hyperactivity disorder (ADHD). ADHD is a common disorder, with an estimated 5.7 million children and adolescents between the ages of 6 and 17 diagnosed with it in the US.

The disorder, which is becoming increasingly prevalent with the influx of smartphones, can have a huge impact on a patient's quality of life, as children with ADHD tend to have trouble paying attention and regulating activity. Early diagnosis and intervention are key to managing it, but ADHD, as study co-author Justin Huynh said, "is extremely difficult to diagnose."

The study used fractional anisotropy (FA) values as input for training a deep-learning AI model to diagnose ADHD in a quantitative, objective diagnostic framework.
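The pipeline described can be sketched as follows. The data here are synthetic, the FA means and spreads are assumed for illustration, and a simple nearest-centroid rule stands in for the study's deep-learning model; only the shape of the data flow (per-tract FA values in, diagnosis out) mirrors the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_tracts = 200, 30
fa_ctrl = rng.normal(0.45, 0.05, (n, n_tracts))   # synthetic control FA vectors
fa_adhd = rng.normal(0.42, 0.05, (n, n_tracts))   # assumed slightly lower FA
X = np.vstack([fa_ctrl, fa_adhd])
y = np.array([0] * n + [1] * n)                   # 0 = control, 1 = ADHD

# Classify each scan by its closer class centroid in FA-feature space
mu0, mu1 = fa_ctrl.mean(axis=0), fa_adhd.mean(axis=0)
pred = (np.linalg.norm(X - mu1, axis=1) < np.linalg.norm(X - mu0, axis=1)).astype(int)
acc = (pred == y).mean()
print(f"accuracy on synthetic FA data: {acc:.2f}")
```

Even this crude rule separates the synthetic groups well above chance, which is the quantitative, objective framing the study describes; the real work lies in the deep model and in validated clinical data.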

As we saw, when fed massive datasets of brain scans and patient histories, algorithms can distinguish subtle markers that humans may miss. This, in turn, increases diagnostic accuracy, resulting in earlier interventions and better patient outcomes.

Studying new brain-imaging technology to understand the secrets of brain science, and then linking it with AI to simulate the brain, is also a way to close the gap between AI and human intelligence. Already, there have been many advancements in brain-computer interfaces (BCIs) by companies like Neuralink. A BCI connects the brain directly to external devices, allowing disabled people to control prosthetics and interact with the world by thought alone, showcasing the technology's potential for many scientific and practical applications.

This merger of human intelligence and AI could ultimately create 'superhumans,' but it needs computing models that integrate visual and natural language processing, just as the human brain does, for comprehensive communication. In this context, virtual assistants can address both simple and complex tasks, but machines need to learn to understand richer contexts to achieve human-like communication skills.

In healthcare, diagnostics involves evaluating medical conditions or diseases by analyzing symptoms, medical history, and test results. Its goal is to use tests such as imaging and blood work to determine the cause of a medical problem and make an accurate diagnosis so that effective treatment can be provided. In addition, diagnostics can be used to monitor the progress of a condition and assess the effectiveness of treatment.

The potential of AI in treatment is pretty compelling. Artificial intelligence can provide an analysis of a person's brain characteristics as well as their medical history, genetics, lifestyle data, and other factors, based on which it can offer personalized medicine. This way, AI promises tailored treatment plans that take into account the unique intricacies of each patient's brain.

By identifying unique, unbiased patterns in data, AI can potentially also discover new biomarkers or intervention methods. AI-based systems are faster and more efficient than manual processes and significantly reduce human errors.

A team of researchers recently used AI to predict the optimal method for synthesizing drug molecules. This method, according to the paper's lead author David Nippa, has the potential to significantly reduce the number of required lab experiments, increasing both the efficiency and sustainability of chemical synthesis.

The AI model was trained on data from trustworthy scientific works and experiments from an automated lab and can successfully predict the position of borylation for any molecule and provide the optimal conditions for the chemical transformation. Already being used to identify positions in existing active ingredients where additional active groups can be introduced, this model will help in developing new and more effective variants of known drug active ingredients more quickly.

Now, let's take a look at some of the publicly traded companies in the medical sector that are making use of the technology.

This pharma giant has been investing in AI for biomedical data analysis and drug discovery and development. With a market cap of $223.48 bln, Novartis stocks are currently trading at $98.27, up 8.17% this year. The company's revenue trailing twelve months (TTM) has been $47.88 bln while having EPS (TTM) of 3.59, P/E (TTM) of 27.30, and ROE (TTM) of 14.94%. Meanwhile, the dividend yield has been 3.57%.

The company has been integrating AI across its operations, including analyzing vast datasets covering public health records, prescription data, internal data, and medical insurance claims to identify potential trial patients to optimize clinical trial design. Using the AI tool has made enrolling patients in trials faster, cheaper, and more efficient, according to Novartis.

This research-based biopharmaceutical company has a market cap of $163.238 bln and its shares are currently trading at $28.97, down 43.58% this year. The company's revenue trailing twelve months (TTM) has been $68.53 bln while having EPS (TTM) of 1.82, P/E (TTM) of 15.88, and ROE (TTM) of 11.05%. Meanwhile, the dividend yield has been 5.67%.

Pfizer has been showing a lot of interest in leveraging AI to enhance its drug discovery efforts. The company has partnered with many AI companies, such as CytoReason, Tempus, Gero, and Truveta. Meanwhile, to improve its oncology clinical trials, Pfizer signed a data-sharing agreement with oncology AI company Vysioneer, which also has an FDA-cleared AI-powered brain tumor auto-contouring solution called VBrain.

In addition to creating an ML research hub to build new predictive models and tools, Pfizer also partnered with Amazon Web Services, one of the largest cloud providers, to use cloud computing in drug discovery and manufacturing. This partnership proved particularly valuable during the COVID-19 pandemic across various aspects of the vaccine's development, from manufacturing to clinical trials.

This biopharmaceutical company has a market cap of $200.8 bln, and its shares are currently trading at $64.86, down 4.44% this year. The company's revenue trailing twelve months (TTM) has been almost $45 bln while having EPS (TTM) of 1.89, P/E (TTM) of 34.29, and ROE (TTM) of 16.30%. Meanwhile, the dividend yield has been 2.22%.

The Anglo-Swedish drugmaker has been investing in AI to analyze complex biological data for drug discovery and has been collaborating with AI companies to enhance their research capabilities. Most recently, AstraZeneca signed a deal worth up to $247 million with AI-based biologics drug discovery company Absci to design an antibody to fight cancer. The biologics firm makes use of generative AI to get optimal drug candidates based on traits such as affinity, manufacturability, and safety, among others.

Last month, AstraZeneca formed a health-technology unit dubbed Evinova to accelerate innovation and bring AI to clinical trials. The company has also gained early access to AI-driven 'digital twins' and signed an AI-powered drug discovery pact with Verge Genomics through its rare disease division, Alexion.

This AI-enabled drug discovery and development company has a market cap of $86.45 mln, and its shares are currently trading at $0.545, down 84.43% this year. The company's EPS (TTM) is 0.75, and P/E (TTM) is 0.72.

BenevolentAI is a clinical-stage company that aims to treat atopic dermatitis as well as potential treatments for chronic diseases and cancer. It uses predictive AI algorithms to analyze and extract the needed insights from the available data and scientific literature. Back in May this year, as part of a strategic plan to position itself for a new era in AI, the company shared that it would reduce spending and free up net cash to increase its financial flexibility.

The company has established partnerships with other big pharmaceutical companies such as GSK and Novartis, while its collaboration with AstraZeneca aims to develop drugs for fibrosis and chronic kidney disease. A few months ago, BenevolentAI also partnered with Merck KGaA to leverage its expertise in oncology and neuroinflammation and support the company's AI-driven drug discovery plans by focusing on finding viable small molecule candidates.

As we saw, AI has vast potential to enhance the diagnosis and treatment of brain diseases. It can even help predict brain disorders based on minor deviations from normal brain activity, leading to improved patient outcomes and a more efficient and effective healthcare system. However, it must be noted that this intersection of AI and the human brain is not without its ethical concerns and hence demands strict privacy safeguards.

Read the original post:
The Growing Synergy of AI and Neuroscience in Decoding the Human Brain - Securities.io

Cannabis and Alcohol Co-use Impacts Adolescent Brain and Behavior – Neuroscience News

Summary: Recent studies reveal the effects of cannabis and alcohol co-use in adolescent rats, using a model that mirrors voluntary human drug-taking.

Rats voluntarily consumed THC-infused treats and alcohol, allowing researchers to observe changes in brain structure and behavior. Notably, co-use led to reduced synaptic plasticity in the prefrontal cortex, with effects more pronounced in female rats.

The studies aim to understand cognitive disruptions caused by drug use in adolescence and develop treatment approaches.

Key Facts:

Source: University of Illinois

The increased legalization of cannabis over the past several years can potentially increase its co-use with alcohol. Concerningly, very few studies have looked at the effects of these two drugs when used in combination.

In a series of new studies, researchers at the University of Illinois Urbana-Champaign used rats to understand how brain structure and behavior can change when cannabis and alcohol are taken together.

Most researchers have studied the effects of either alcohol or THC (delta-9-tetrahydrocannabinol), the primary psychoactive drug in cannabis, alone. However, when people, especially adolescents, use these drugs, they often do so in tandem.

Even when researchers study the co-use of these drugs, it involves injecting the animals with the drugs, which does not mirror what happens in humans.

"It's rare that a person would have these drugs forced upon them. Also, other studies have shown that the effects of a drug are very different when an animal chooses to take it compared to when it is exposed against its will," said Lauren Carrica, a graduate student in the Gulley lab.

"Our study is unique because the rats have access to both these drugs and they choose to consume them."

The researchers used young male and female rats to mimic adolescence in humans. During feeding time, the animals were exposed to recreational doses (3 mg/kg to 10 mg/kg) of THC coated on Fudge Brownie Goldfish Grahams, along with a sweetened 10% ethanol solution. The control group of rats was fed just the cookies and sweetened water in addition to their regular food.

"Training them to eat the drug was simple. We mimicked the timing when humans are more likely to take the drugs, at the end of the day. We did not deprive them of food or water. They were given an alcohol bottle in place of their water bottle during the access period, and they preferred eating the cookies over their regular chow," said Nu-Chu Liang, an associate professor of psychology.

After 20 days of increasing THC doses, rats were drug-free as they grew into young adulthood. The researchers took blood samples from the rats and also tested their memories to see if the co-use of drugs had any effect.

Briefly, rats were required to remember the location of a target lever after a delay period that ranged from very short to very long. If they remembered the location, and pressed the target lever, they earned a food reward. If they responded on the wrong lever, no food was delivered.

"The effects were more pronounced in females, and they had higher levels of chemicals that are produced when THC is broken down. Even so, the influence of THC on memory was modest," Carrica said.

"These volitional, low-to-moderate doses of alcohol, THC, or both drugs did not induce long-lasting, serious cognitive deficits."

"The subtlety of these effects is not surprising because we have modeled how these drugs are taken in a social setting over a relatively short period of time," said Joshua Gulley (GNDP), a professor of psychology.

"Our results with the female rats are in agreement with other research that has shown that women who take edibles often have a different experience, which may be due to differences in how their bodies break down the drug."

In this first study, the researchers were unable to expose the rats to higher levels of THC because the rats would ignore the THC-laced cookies.

"When you gave them higher doses, some animals lost interest in the cookies, and it is unclear why. It's possible that they don't like the higher doses, or there is something about the taste or smell that becomes aversive," Gulley said.

Although there were modest differences in behavior, the group still wanted to check whether anything had changed in the signaling pathways in the brain, especially at higher levels of THC. In the second paper they did so by injecting alcohol-drinking or non-drinking adolescent rats with THC doses ranging from 3 mg/kg to 20 mg/kg.

Similar to the first study, the injections and alcohol drinking were then stopped and the rats were tested once they reached early adulthood.

Just like humans, rat brains undergo significant changes during adolescence, particularly in the prefrontal cortex, which helps them adapt to changing environments. The neurons in the prefrontal cortex modify their connections, a process referred to as synaptic plasticity, from the end of adolescence into young adulthood, according to Gulley.

The researchers wanted to test whether drug exposure during adolescence could change the ability of the brain to undergo synaptic plasticity as an adult. Therefore, they sacrificed the rats and measured the electrical signals generated in the brain.

"We found that alcohol and THC together significantly reduced, and in some cases prevented, the ability of the prefrontal cortex in drug-exposed rats to undergo plasticity in the same way that the brains from control animals can," said Linyuan Shi, a graduate student in the Gulley lab.

The effects were apparent in rats exposed to either drug alone and were most pronounced with co-exposure to both drugs. "We also found the impaired plasticity was likely due to changes in signaling by gamma-aminobutyric acid (GABA), a chemical messenger in the brain."

"When we used a chemical that enhances GABA, it could rescue the deficits we saw in the animals that had been exposed to the drugs."

The researchers are now interested in understanding which neurons are involved in the response to the drugs.

"From these studies, and the work our group has done with methamphetamine, we know that drug exposure during adolescence has the ability to disrupt cognitive functioning by altering the development of neuronal signaling in the prefrontal cortex.

"Although different drugs influence the brain in different ways, they might have the same effects on the brain, which can manifest as cognitive disruptions later in life," Gulley said.

"Our ultimate goal is to harness our knowledge of these changes to develop treatment approaches for reversing cognitive dysfunctions that are associated with long-term drug use and addiction."

Author: Nicholas Vasi Source: University of Illinois Contact: Nicholas Vasi University of Illinois Image: The image is credited to Neuroscience News

Original Research: Open access. Effects of combined use of alcohol and delta-9-tetrahydrocannabinol on working memory in Long Evans rats by Joshua Gulley et al. Behavioural Brain Research

Open access. Effects of combined exposure to ethanol and delta-9-tetrahydrocannabinol during adolescence on synaptic plasticity in the prefrontal cortex of Long Evans rats by Joshua Gulley et al. Neuropharmacology

Abstract

Effects of combined use of alcohol and delta-9-tetrahydrocannabinol on working memory in Long Evans rats

The increase in social acceptance and legalization of cannabis over the last several years is likely to increase the prevalence of its co-use with alcohol. In spite of this, the potential for effects unique to co-use of these drugs, especially in moderate doses, has been studied relatively infrequently.

We addressed this in the current study using a laboratory rat model of voluntary drug intake. Periadolescent male and female Long-Evans rats were allowed to orally self-administer ethanol, delta-9-tetrahydrocannabinol (THC), both drugs, or their vehicle controls from postnatal day (P) 30 to P47. They were subsequently trained and tested on an instrumental behavior task that assesses attention, working memory and behavioral flexibility.

Similar to previous work, consumption of THC reduced both ethanol and saccharin intake in both sexes.

Blood samples taken 14 h following the final self-administration session revealed that females had higher levels of the THC metabolite THC-COOH. There were modest effects of THC on our delayed matching to position (DMTP) task, with females exhibiting reduced performance compared to their control group or their male, drug-using counterparts.

However, there were no significant effects of co-use of ethanol or THC on DMTP performance, and drug effects were also not apparent in the reversal learning phase of the task when non-matching to position was required as the correct response.

These findings are consistent with other published studies in rodent models showing that use of these drugs in low to moderate doses does not significantly impact memory or behavioral flexibility following a protracted abstinence period.

Abstract

Effects of combined exposure to ethanol and delta-9-tetrahydrocannabinol during adolescence on synaptic plasticity in the prefrontal cortex of Long Evans rats

Significant exposure to alcohol or cannabis during adolescence can induce lasting disruptions of neuronal signaling in brain regions that are late to mature, such as the medial prefrontal cortex (mPFC). Considerably less is known about the effects of alcohol and cannabis co-use, despite its common occurrence.

Here, we used male and female Long-Evans rats to investigate the effects of early-life exposure to ethanol, delta-9-tetrahydrocannabinol (THC), or their combination on high frequency stimulation (HFS)-induced plasticity in the prelimbic region of the mPFC.

Animals were injected daily from postnatal days 30–45 with vehicle or THC (escalating doses, 3–20 mg/kg) and allowed to drink vehicle (0.1% saccharin) or 10% ethanol immediately after each injection. In vitro brain slice electrophysiology was then used to record population responses of layer V neurons following HFS in layer II/III after 3–4 weeks of abstinence.

We found that THC exposure reduced the body weight gains observed in ad libitum-fed rats, and reduced intake of saccharin and ethanol. Compared to controls, there was a significant reduction in HFS-induced long-term depression (LTD) in rats exposed to either drug alone, and an absence of LTD in rats exposed to the drug combination.

Bath application of indiplon or AR-A014418, which enhance GABA-A receptor function or inhibit glycogen synthase kinase 3 (GSK3), respectively, suggested the effects of ethanol, THC or their combination were due in part to lasting adaptations in GABA and GSK3 signaling.

These results suggest the potential for long-lasting adaptations in mPFC output following co-exposure to alcohol and THC.

Cannabis and Alcohol Co-use Impacts Adolescent Brain and Behavior - Neuroscience News

Dopamine’s Role in Learning from Rewards and Penalties – Neuroscience News

Summary: Dopamine, a neurotransmitter, plays a vital role in encoding both reward and punishment prediction errors in the human brain.

This study suggests that dopamine is essential for learning from both positive and negative experiences, enabling the brain to adapt behavior based on outcomes. Using electrochemical techniques and machine learning, scientists measured dopamine levels in real-time during a computer game involving rewards and penalties.

The findings shed light on the intricate role of dopamine in human behavior and could have implications for understanding psychiatric and neurological disorders.

Key Facts:

Source: Wake Forest Baptist Medical Center

What happens in the human brain when we learn from positive and negative experiences? To help answer that question and better understand decision-making and human behavior, scientists are studying dopamine.

Dopamine is a neurotransmitter produced in the brain that serves as a chemical messenger, facilitating communication between nerve cells in the brain and the body. It is involved in functions such as movement, cognition and learning. While dopamine is most known for its association with positive emotions, scientists are also exploring its role in negative experiences.

Now, a new study from researchers at Wake Forest University School of Medicine, published Dec. 1 in Science Advances, shows that dopamine release in the human brain plays a crucial role in encoding both reward and punishment prediction errors.

This means that dopamine is involved in the process of learning from both positive and negative experiences, allowing the brain to adjust and adapt its behavior based on the outcomes of these experiences.

"Previously, research has shown that dopamine plays an important role in how animals learn from rewarding (and possibly punishing) experiences. But little work has been done to directly assess what dopamine does on fast timescales in the human brain," said Kenneth T. Kishida, Ph.D., associate professor of physiology and pharmacology and neurosurgery at Wake Forest University School of Medicine.

"This is the first study in humans to examine how dopamine encodes rewards and punishments and whether dopamine reflects an optimal teaching signal that is used in today's most advanced artificial intelligence research."

For the study, researchers on Kishida's team utilized fast-scan cyclic voltammetry, an electrochemical technique, paired with machine learning to detect and measure dopamine levels in real time (i.e., 10 measurements per second). However, this method is challenging and can only be performed during invasive procedures such as deep-brain stimulation (DBS) surgery.

DBS is commonly employed to treat conditions such as Parkinson's disease, essential tremor, obsessive-compulsive disorder and epilepsy.

Kishida's team collaborated with Atrium Health Wake Forest Baptist neurosurgeons Stephen B. Tatter, M.D., and Adrian W. Laxton, M.D., both faculty members in the Department of Neurosurgery at Wake Forest University School of Medicine, to insert a carbon fiber microelectrode deep into the brains of three participants at Atrium Health Wake Forest Baptist Medical Center who were scheduled to receive DBS to treat essential tremor.

While the participants were awake in the operating room, they played a simple computer game. As they played the game, dopamine measurements were taken in the striatum, a part of the brain that is important for cognition, decision-making, and coordinated movements.

During the game, participants' choices were either rewarded or punished with real monetary gains or losses. The game was divided into three stages in which participants learned from positive or negative feedback to make choices that maximized rewards and minimized penalties. Dopamine levels were measured continuously, once every 100 milliseconds, throughout each of the three stages of the game.
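The "prediction error" that dopamine is thought to carry has a standard formulation in reinforcement learning: the difference between the outcome received and the outcome expected. A minimal Rescorla-Wagner-style sketch follows; this is illustrative only, as the study's actual model and parameters are not given here:

```python
def update_value(expected, outcome, learning_rate=0.1):
    """Update an expected value from a prediction error.

    outcome > expected  -> positive prediction error (better than expected)
    outcome < expected  -> negative prediction error (worse than expected)
    """
    prediction_error = outcome - expected
    new_expected = expected + learning_rate * prediction_error
    return new_expected, prediction_error

# A choice expected to be neutral (0) that yields a $1 gain...
v_gain, pe_gain = update_value(expected=0.0, outcome=1.0)
# ...and the same choice yielding a $1 loss instead.
v_loss, pe_loss = update_value(expected=0.0, outcome=-1.0)
```

Under this account, a single signed error term drives learning from both gains and losses; the study's suggestion of separate, slightly time-shifted pathways for reward and punishment would be a refinement of this simple picture.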

"We found that dopamine not only plays a role in signaling both positive and negative experiences in the brain, but it seems to do so in a way that is optimal when trying to learn from those outcomes. What was also interesting is that it seems like there may be independent pathways in the brain that separately engage the dopamine system for rewarding versus punishing experiences.

"Our results reveal the surprising finding that these two pathways may encode rewarding and punishing experiences on slightly shifted timescales, separated by only 200 to 400 milliseconds," Kishida said.

Kishida believes this level of detail may lead to a better understanding of how the dopamine system is affected in people with psychiatric and neurological disorders, and he said additional research is needed to determine how dopamine signaling is altered in those conditions.

"Traditionally, dopamine is often referred to as the 'pleasure neurotransmitter,'" Kishida said.

"However, our work provides evidence that this is not the way to think about dopamine. Instead, dopamine is a crucial part of a sophisticated system that teaches our brain and guides our behavior.

"That dopamine is also involved in teaching our brain about punishing experiences is an important discovery and may provide new directions in research to help us better understand the mechanisms underlying depression, addiction, and related psychiatric and neurological disorders."

Author: Kenneth T. Kishida Source: Wake Forest Baptist Medical Center Contact: Kenneth T. Kishida Wake Forest Baptist Medical Center Image: The image is credited to Neuroscience News

Original Research: Open access. Sub-second fluctuations in extracellular dopamine encode reward and punishment prediction errors in humans by Paul Sands et al. Science Advances

Abstract

Sub-second fluctuations in extracellular dopamine encode reward and punishment prediction errors in humans

In the mammalian brain, midbrain dopamine neuron activity is hypothesized to encode reward prediction errors that promote learning and guide behavior by causing rapid changes in dopamine levels in target brain regions.

This hypothesis (and alternatives regarding dopamine's role in punishment learning) has limited direct evidence in humans. We report intracranial, subsecond measurements of dopamine release in the human striatum, measured while volunteers (i.e., patients undergoing deep brain stimulation surgery) performed a probabilistic reward and punishment learning choice task designed to test whether dopamine release encodes only reward prediction errors or whether it may also encode adaptive punishment learning signals.

Results demonstrate that extracellular dopamine levels can encode both reward and punishment prediction errors within distinct time intervals via independent valence-specific pathways in the human brain.

Dopamine's Role in Learning from Rewards and Penalties - Neuroscience News