Category Archives: Neuroscience

The Growing Synergy of AI and Neuroscience in Decoding the Human Brain – Securities.io

Artificial intelligence (AI) has been the talk of the town lately, with chatbots like OpenAI's ChatGPT, Google's Bard, and Elon Musk's Grok gaining a lot of traction. However, AI isn't as new as these chatbots; interest in the field dates back to 1950, when the scientist Alan Turing proposed a test of machine intelligence called the Imitation Game in his paper Computing Machinery and Intelligence.

"Can machines think?" Turing asked in the paper, proposing what became known as the Turing Test, in which a human interrogator tries to distinguish a computer's text responses from a human's.

Since then, advancements in technology have led to more sophisticated AI systems that have been used across different fields, including healthcare and the understanding and treatment of the most complex human organ, the brain.

Broadly speaking, AI systems reason, learn, and perform tasks commonly associated with human cognitive functions, such as identifying patterns and interpreting speech by processing massive amounts of data.

AI is basically a set of technologies that enable computers to perform a variety of advanced functions. The backbone of innovation in modern computing, AI encompasses several disciplines, including machine learning, deep learning, and natural language processing.

These AI models, which simulate cognitive processes and aid in complex cognitive tasks such as language translation and image recognition, are based on biological neural networks, complex systems of interconnected neurons, and help train machines to make sense of speech, images, and patterns.

The intricate and intelligent human brain has long challenged scientists seeking to unlock possibilities for human augmentation. However, while AI has been harnessed to create the likes of Apple's Siri, Amazon's Alexa, and IBM's Watson, the truly transformative impact will only be achieved when artificial neural networks are augmented by native human intelligence, an outcome of centuries of survival.

Although computers still can't match the complete flexibility of humans, there are programs that manage to execute specific tasks, with the scope of AI's applications expanding daily. This technological progress, coupled with advancements in science, has notably led to the utilization of AI in medical diagnosis and treatment.

By analyzing large amounts of patient data from multiple sources, AI helps healthcare providers build a complete picture of a patient's health, enabling more accurate predictions and more informed decisions about patient care. This, in turn, helps detect potential health problems before they become life-threatening. Moreover, by using AI, healthcare providers can automate routine tasks, freeing them to focus on more complex patient care.

Groundbreaking research in neuroscience has led to the development of advanced brain imaging techniques.

Concurrently, AI algorithms, particularly in machine learning and deep learning, have become more sophisticated, bringing the two fields to an intersection. This convergence is enabling scientists to analyze and understand brain data at an unprecedented scale.

The intersection of AI and neuroscience, the field focused on the brain and nervous system, is particularly evident in data analysis. AI now empowers scientists and researchers to map brain regions with unprecedented accuracy, thanks to advances that allow intricate patterns in brain data to be classified and correlated. This collaboration has also paved the way for researchers to better comprehend neural pathways.

With the help of AI, medical diagnostics could be improved in prediction accuracy, speed, and efficiency. AI-powered brain imaging studies have detected subtle changes in brain structure that appear before clinical symptoms emerge. Such findings hold enormous potential for early detection and intervention, potentially revolutionizing our approach to neurodegenerative disorders.

For instance, late last month, researchers leveraged AI to analyze specialized brain MRI scans of individuals with attention-deficit/hyperactivity disorder (ADHD). ADHD is a common disorder: an estimated 5.7 million children and adolescents between the ages of 6 and 17 have been diagnosed with it in the US.

The disorder, which is becoming increasingly prevalent amid the influx of smartphones, can have a huge impact on a patient's quality of life, as children with ADHD tend to have trouble paying attention and regulating activity. Early diagnosis and intervention are key to managing it, but ADHD, as study co-author Justin Huynh noted:

"It is extremely difficult to diagnose."

The study used fractional anisotropy (FA) values as input for training a deep-learning AI model to diagnose ADHD in a quantitative, objective diagnostic framework.
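
To make that pipeline concrete, here is a minimal, hypothetical sketch of training a small neural network on per-tract FA values. The synthetic data, feature layout, and network size are illustrative assumptions, not details from the study.

```python
# Minimal sketch: classify ADHD vs. control from fractional anisotropy (FA)
# values. All data here is synthetic and for illustration only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Assume one FA value per white-matter tract for each subject (assumption).
n_subjects, n_tracts = 200, 30
X = rng.normal(loc=0.45, scale=0.05, size=(n_subjects, n_tracts))  # FA in ~0-1
y = rng.integers(0, 2, size=n_subjects)  # 1 = ADHD, 0 = typically developing

# Shift a few tracts slightly in the "ADHD" group so there is signal to learn.
X[y == 1, :5] -= 0.02

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# A small multilayer perceptron stands in for the study's deep-learning model.
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)

probs = clf.predict_proba(X_test)[:, 1]
print(f"ROC AUC on held-out subjects: {roc_auc_score(y_test, probs):.2f}")
```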

As we saw, by feeding algorithms massive datasets of brain scans and patient histories, researchers enable them to pick out subtle markers that humans may miss. This, in turn, increases diagnostic accuracy, leading to earlier interventions and better patient outcomes.

Studying new brain-imaging technology to uncover the workings of the brain and then linking it with AI to simulate the brain is another way to close the gap between AI and human intelligence. Already, companies like Neuralink have made significant advances in brain-computer interfaces (BCIs). BCIs connect the brain directly to external devices, allowing people with disabilities to control prosthetics and interact with the world by thought alone, showcasing the technology's potential for many scientific and practical applications.

This merger of human intelligence and AI could ultimately create 'superhumans,' but it requires computing models that integrate visual and natural language processing, just as the human brain does, for comprehensive communication. In this context, virtual assistants can already address both simple and complex tasks, but machines still need to learn richer contexts to achieve human-like communication skills.

In healthcare, diagnostics involves evaluating medical conditions or diseases by analyzing symptoms, medical history, and test results. Its goal is to use tests, such as imaging and blood tests, to determine the cause of a medical problem and make an accurate diagnosis so that effective treatment can be provided. Diagnostics can also be used to monitor the progress of a condition and assess the effectiveness of treatment.

The potential of AI in treatment is compelling as well. AI can analyze a person's brain characteristics alongside their medical history, genetics, lifestyle data, and other factors, and on that basis offer personalized medicine: tailored treatment plans that account for the unique intricacies of each patient's brain.

By identifying unique, unbiased patterns in data, AI can potentially also discover new biomarkers or intervention methods. AI-based systems are faster and more efficient than manual processes and significantly reduce human errors.

A team of researchers recently used AI to predict the optimal method for synthesizing drug molecules. According to the paper's lead author, David Nippa, the approach can significantly reduce the number of required lab experiments, increasing both the efficiency and sustainability of chemical synthesis.

The AI model was trained on data from trusted scientific literature and on experiments from an automated lab; it can predict the position of borylation for any molecule and propose the optimal conditions for the chemical transformation. Already used to identify positions in existing active ingredients where additional functional groups can be introduced, the model should help develop new and more effective variants of known drug ingredients more quickly.
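
The paper itself is the authoritative source for the method; purely as an illustration of the general idea, the hedged sketch below scores candidate aromatic C-H positions of a molecule with a classifier trained on local-environment fingerprints. The descriptors, the placeholder labels, and the model choice are assumptions for demonstration only, not the study's actual pipeline.

```python
# Illustrative sketch of per-atom reaction-site prediction, loosely in the
# spirit of the borylation model described above. Labels are random
# placeholders; a real model would be trained on measured reaction outcomes.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

def atom_environment_features(mol, atom_idx, radius=2, n_bits=512):
    """Morgan fingerprint restricted to one atom's local environment."""
    fp = AllChem.GetMorganFingerprintAsBitVect(
        mol, radius, nBits=n_bits, fromAtoms=[atom_idx])
    return np.array(fp)

def candidate_ch_positions(mol):
    """Aromatic carbons bearing at least one hydrogen (possible C-H sites)."""
    return [a.GetIdx() for a in mol.GetAtoms()
            if a.GetSymbol() == "C" and a.GetIsAromatic()
            and a.GetTotalNumHs() > 0]

# Toy training set: in practice each row would be a C-H position from a
# literature or lab-automation reaction with a measured outcome.
smiles = ["c1ccccc1", "c1ccncc1", "Cc1ccccc1"]
rng = np.random.default_rng(0)
X, y = [], []
for smi in smiles:
    mol = Chem.MolFromSmiles(smi)
    for idx in candidate_ch_positions(mol):
        X.append(atom_environment_features(mol, idx))
        y.append(int(rng.integers(0, 2)))  # placeholder "reacts here" label

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Rank the C-H positions of a new molecule by predicted reactivity.
query = Chem.MolFromSmiles("c1ccc2ccccc2c1")  # naphthalene
scores = {idx: model.predict_proba(
              [atom_environment_features(query, idx)])[0, 1]
          for idx in candidate_ch_positions(query)}
print(sorted(scores.items(), key=lambda kv: -kv[1]))
```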

Now, let's take a look at some of the publicly traded companies in the medical sector that are making use of the technology.

This pharma giant has been investing in AI for biomedical data analysis and drug discovery and development. With a market cap of $223.48 bln, Novartis stock currently trades at $98.27, up 8.17% this year. The company's trailing-twelve-month (TTM) revenue is $47.88 bln, with EPS (TTM) of $3.59, P/E (TTM) of 27.30, and ROE (TTM) of 14.94%. The dividend yield is 3.57%.

The company has been integrating AI across its operations, including analyzing vast datasets covering public health records, prescription data, internal data, and medical insurance claims to identify potential trial patients and optimize clinical trial design. Using the AI tool has made enrolling patients in trials faster, cheaper, and more efficient, according to Novartis.

This research-based biopharmaceutical company has a market cap of $163.24 bln, and its shares currently trade at $28.97, down 43.58% this year. Revenue (TTM) stands at $68.53 bln, with EPS (TTM) of $1.82, P/E (TTM) of 15.88, and ROE (TTM) of 11.05%. The dividend yield is 5.67%.

Pfizer has been showing a lot of interest in leveraging AI to enhance its drug discovery efforts. The company has partnered with many AI companies, such as CytoReason, Tempus, Gero, and Truveta. Meanwhile, to improve its oncology clinical trials, Pfizer signed a data-sharing agreement with oncology AI company Vysioneer, which also has an FDA-cleared AI-powered brain tumor auto-contouring solution called VBrain.

In addition to creating an ML research hub to develop new predictive models and tools, Pfizer has partnered with Amazon Web Services, one of the largest cloud providers, to use cloud computing in drug discovery and manufacturing. The partnership proved particularly valuable during the COVID-19 pandemic across various aspects of the company's vaccine development, from manufacturing to clinical trials.

This biopharmaceutical company has a market cap of $200.8 bln, and its shares currently trade at $64.86, down 4.44% this year. Revenue (TTM) stands at almost $45 bln, with EPS (TTM) of $1.89, P/E (TTM) of 34.29, and ROE (TTM) of 16.30%. The dividend yield is 2.22%.

The Anglo-Swedish drugmaker has been investing in AI to analyze complex biological data for drug discovery and has been collaborating with AI companies to enhance its research capabilities. Most recently, AstraZeneca signed a deal worth up to $247 million with Absci, an AI-based biologics drug discovery company, to design an antibody to fight cancer. Absci uses generative AI to find optimal drug candidates based on traits such as affinity, manufacturability, and safety.

Last month, AstraZeneca formed a health-technology unit dubbed Evinova to accelerate innovation and bring AI to clinical trials. The company has also gained early access to AI-driven 'digital twins' and signed an AI-powered drug discovery pact with Verge Genomics through its rare disease division, Alexion.

This AI-enabled drug discovery and development company has a market cap of $86.45 mln, and its shares are currently trading at $0.545, down 84.43% this year. The company's EPS (TTM) is 0.75, and its P/E (TTM) is 0.72.

BenevolentAI is a clinical-stage company developing a treatment for atopic dermatitis as well as potential treatments for chronic diseases and cancer. It uses predictive AI algorithms to analyze available data and scientific literature and extract the insights it needs. Back in May this year, as part of a strategic plan to position itself for a new era in AI, the company said it would reduce spending and free up net cash to increase its financial flexibility.

The company has established partnerships with big pharmaceutical companies such as GSK and Novartis, and it is collaborating with AstraZeneca to develop drugs for fibrosis and chronic kidney disease. A few months ago, BenevolentAI also partnered with Merck KGaA, leveraging its expertise in oncology and neuroinflammation to support Merck's AI-driven drug discovery plans by focusing on finding viable small-molecule candidates.

As we saw, AI has vast potential to enhance the diagnosis and treatment of brain diseases. It can even help predict brain disorders based on minor deviations from normal brain activity, leading to improved patient outcomes and a more efficient and effective healthcare system. However, it must be noted that this intersection of AI and the human brain is not without its ethical concerns and hence demands strict privacy safeguards.

Read the original post:
The Growing Synergy of AI and Neuroscience in Decoding the Human Brain - Securities.io

Cannabis and Alcohol Co-use Impacts Adolescent Brain and Behavior – Neuroscience News

Summary: Recent studies reveal the effects of cannabis and alcohol co-use on adolescent rats in a model that simulates human patterns of use.

Rats voluntarily consumed THC-infused treats and alcohol, allowing researchers to observe changes in brain structure and behavior. Notably, co-use led to reduced synaptic plasticity in the prefrontal cortex, with effects more pronounced in female rats.

The studies aim to understand cognitive disruptions caused by drug use in adolescence and develop treatment approaches.

Source: University of Illinois

The increased legalization of cannabis over the past several years can potentially increase its co-use with alcohol. Concerningly, very few studies have looked at the effects of these two drugs when used in combination.

In a series of new studies, researchers at the University of Illinois Urbana-Champaign used rats to understand how brain structure and behavior can change when cannabis and alcohol are taken together.

Most researchers have studied the effects of either alcohol or THC (delta-9-tetrahydrocannabinol), the primary psychoactive drug in cannabis, alone. However, when people, especially adolescents, use these drugs, they often do so in tandem.

Even when researchers study the co-use of these drugs, it involves injecting the animals with the drugs, which does not mirror what happens in humans.

"It's rare that a person would have these drugs forced upon them. Also, other studies have shown that the effects of a drug are very different when an animal chooses to take it compared to when it is exposed against its will," said Lauren Carrica, a graduate student in the Gulley lab.

"Our study is unique because the rats have access to both these drugs and they choose to consume them."

The researchers used young male and female rats to mimic adolescence in humans. During feeding time, the animals were exposed to recreational doses (3–10 mg/kg) of THC coated on Fudge Brownie Goldfish Grahams, along with a sweetened 10% ethanol solution. The control group of rats was fed just the cookies and sweetened water in addition to their regular food.

"Training them to eat the drug was simple. We mimicked the timing at which humans are more likely to take the drugs: at the end of the day. We did not deprive them of food or water. They were given an alcohol bottle in place of their water bottle during the access period, and they preferred eating the cookies over their regular chow," said Nu-Chu Liang, an associate professor of psychology.

After 20 days of increasing THC doses, rats were drug-free as they grew into young adulthood. The researchers took blood samples from the rats and also tested their memories to see if the co-use of drugs had any effect.

Briefly, rats were required to remember the location of a target lever after a delay period that ranged from very short to very long. If they remembered the location, and pressed the target lever, they earned a food reward. If they responded on the wrong lever, no food was delivered.
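
For readers unfamiliar with the paradigm, the toy simulation below captures the trial logic just described. The delay values and the forgetting model are invented for illustration, not taken from the study.

```python
# A toy rendition of the delayed matching-to-position (DMTP) trial logic
# described above, to make the task structure concrete.
import random

def run_trial(delay_s, forget_rate=0.05):
    """One DMTP trial: sample lever, delay, then choose; reward if matched."""
    target = random.choice(["left", "right"])
    # Longer delays make the remembered location more likely to be lost
    # (hypothetical memory model, purely for illustration).
    remembered = random.random() > forget_rate * delay_s
    choice = target if remembered else random.choice(["left", "right"])
    return choice == target  # True -> food reward delivered

random.seed(1)
for delay in (0, 2, 8, 16):  # seconds, short to long
    correct = sum(run_trial(delay) for _ in range(1000)) / 1000
    print(f"delay {delay:>2}s: {correct:.0%} rewarded")
```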

"The effects were more pronounced in females, and they had higher levels of the chemicals that are produced when THC is broken down. Even so, the influence of THC on memory was modest," Carrica said.

"These volitional, low-to-moderate doses of alcohol, THC, or both drugs did not induce long-lasting, serious cognitive deficits."

"The subtlety of these effects is not surprising because we have modeled how these drugs are taken in a social setting over a relatively short period of time," said Joshua Gulley, a professor of psychology.

"Our results with the female rats are in agreement with other research that has shown that women who take edibles often have a different experience, which may be due to differences in how their bodies break down the drug."

In this first study the researchers were unable to expose the rats to higher levels of THC because the rats would ignore the THC-laced cookies.

"When you gave them higher doses, some animals lost interest in the cookies, and it is unclear why. It's possible that they don't like the higher doses, or there is something about the taste or smell that becomes aversive," Gulley said.

Although there were modest differences in behavior, the group still wanted to check whether anything had changed in the signaling pathways in the brain, especially at higher levels of THC. In the second paper they did so by injecting alcohol-drinking or non-drinking adolescent rats with THC doses ranging from 3 mg/kg to 20 mg/kg.

Similar to the first study, the injections and alcohol drinking were then stopped and the rats were tested once they reached early adulthood.

Just like humans, rat brains undergo significant changes during adolescence, particularly in the prefrontal cortex, which helps them adapt to changing environments. The neurons in the prefrontal cortex modify their connections, a process referred to as synaptic plasticity, from the end of adolescence into young adulthood, according to Gulley.

The researchers wanted to test whether drug exposure during adolescence could change the ability of the brain to undergo synaptic plasticity as an adult. Therefore, they sacrificed the rats and measured the electrical signals generated in the brain.

"We found that alcohol and THC together significantly reduced, and in some cases prevented, the ability of the prefrontal cortex in drug-exposed rats to undergo plasticity in the same way that the brains from control animals can," said Linyuan Shi, a graduate student in the Gulley lab.

"The effects were apparent in rats exposed to either drug alone, and they were most pronounced with co-exposure to both drugs. We also found the impaired plasticity was likely due to changes in signaling caused by gamma-aminobutyric acid (GABA), a chemical messenger in the brain.

"When we used a chemical that enhances GABA, it could rescue the deficits we saw in the animals that had been exposed to the drugs."

The researchers are now interested in understanding which neurons are involved in the response to the drugs.

"From these studies, and the work our group has done with methamphetamine, we know that drug exposure during adolescence has the ability to disrupt cognitive functioning by altering the development of neuronal signaling in the prefrontal cortex.

"Although different drugs influence the brain in different ways, they might have the same effect on the brain that can manifest as cognitive disruptions later in life," Gulley said.

"Our ultimate goal is to harness our knowledge of these changes to develop treatment approaches for reversing cognitive dysfunctions that are associated with long-term drug use and addiction."

Author: Nicholas Vasi
Source: University of Illinois
Contact: Nicholas Vasi, University of Illinois
Image: The image is credited to Neuroscience News

Original Research: Open access. "Effects of combined use of alcohol and delta-9-tetrahydrocannabinol on working memory in Long Evans rats" by Joshua Gulley et al. Behavioural Brain Research

Open access. "Effects of combined exposure to ethanol and delta-9-tetrahydrocannabinol during adolescence on synaptic plasticity in the prefrontal cortex of Long Evans rats" by Joshua Gulley et al. Neuropharmacology

Abstract

Effects of combined use of alcohol and delta-9-tetrahydrocannabinol on working memory in Long Evans rats

The increase in social acceptance and legalization of cannabis over the last several years is likely to increase the prevalence of its co-use with alcohol. In spite of this, the potential for effects unique to co-use of these drugs, especially in moderate doses, has been studied relatively infrequently.

We addressed this in the current study using a laboratory rat model of voluntary drug intake. Periadolescent male and female Long-Evans rats were allowed to orally self-administer ethanol, Δ9-tetrahydrocannabinol (THC), both drugs, or their vehicle controls from postnatal day (P) 30 to P47. They were subsequently trained and tested on an instrumental behavior task that assesses attention, working memory, and behavioral flexibility.

Similar to previous work, consumption of THC reduced both ethanol and saccharin intake in both sexes.

Blood samples taken 14 h following the final self-administration session revealed that females had higher levels of the THC metabolite THC-COOH. There were modest effects of THC on our delayed matching to position (DMTP) task, with females exhibiting reduced performance compared to their control group or male, drug-using counterparts.

However, there were no significant effects of co-use of ethanol and THC on DMTP performance, and drug effects were also not apparent in the reversal learning phase of the task, when non-matching to position was required as the correct response.

These findings are consistent with other published studies in rodent models showing that use of these drugs in low to moderate doses does not significantly impact memory or behavioral flexibility following a protracted abstinence period.

Abstract

Effects of combined exposure to ethanol and delta-9-tetrahydrocannabinol during adolescence on synaptic plasticity in the prefrontal cortex of Long Evans rats

Significant exposure to alcohol or cannabis during adolescence can induce lasting disruptions of neuronal signaling in brain regions that are late to mature, such as the medial prefrontal cortex (mPFC). Considerably less is known about the effects of alcohol and cannabis co-use, despite its common occurrence.

Here, we used male and female Long-Evans rats to investigate the effects of early-life exposure to ethanol, delta-9-tetrahydrocannabinol (THC), or their combination on high frequency stimulation (HFS)-induced plasticity in the prelimbic region of the mPFC.

Animals were injected daily from postnatal days 30–45 with vehicle or THC (escalating doses, 3–20 mg/kg) and allowed to drink vehicle (0.1% saccharin) or 10% ethanol immediately after each injection. In vitro brain slice electrophysiology was then used to record population responses of layer V neurons following HFS in layer II/III after 3–4 weeks of abstinence.

We found that THC exposure reduced the body weight gains observed in ad libitum-fed rats, and reduced intake of saccharin and ethanol. Compared to controls, there was a significant reduction in HFS-induced long-term depression (LTD) in rats exposed to either drug alone, and an absence of LTD in rats exposed to the drug combination.

Bath application of indiplon or AR-A014418, which enhance GABA-A receptor function or inhibit glycogen synthase kinase 3 (GSK3), respectively, suggested the effects of ethanol, THC, or their combination were due in part to lasting adaptations in GABA and GSK3 signaling.

These results suggest the potential for long-lasting adaptations in mPFC output following co-exposure to alcohol and THC.

Go here to read the rest:
Cannabis and Alcohol Co-use Impacts Adolescent Brain and Behavior - Neuroscience News

Dopamine’s Role in Learning from Rewards and Penalties – Neuroscience News

Summary: Dopamine, a neurotransmitter, plays a vital role in encoding both reward and punishment prediction errors in the human brain.

This study suggests that dopamine is essential for learning from both positive and negative experiences, enabling the brain to adapt behavior based on outcomes. Using electrochemical techniques and machine learning, scientists measured dopamine levels in real-time during a computer game involving rewards and penalties.

The findings shed light on the intricate role of dopamine in human behavior and could have implications for understanding psychiatric and neurological disorders.

Source: Wake Forest Baptist Medical Center

What happens in the human brain when we learn from positive and negative experiences? To help answer that question and better understand decision-making and human behavior, scientists are studying dopamine.

Dopamine is a neurotransmitter produced in the brain that serves as a chemical messenger, facilitating communication between nerve cells in the brain and the body. It is involved in functions such as movement, cognition, and learning. While dopamine is most known for its association with positive emotions, scientists are also exploring its role in negative experiences.

Now, a new study from researchers at Wake Forest University School of Medicine, published Dec. 1 in Science Advances, shows that dopamine release in the human brain plays a crucial role in encoding both reward and punishment prediction errors.

This means that dopamine is involved in the process of learning from both positive and negative experiences, allowing the brain to adjust and adapt its behavior based on the outcomes of these experiences.
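
In reinforcement-learning terms, a prediction error is simply the gap between an outcome and what was expected. The sketch below illustrates the textbook update rule this framing builds on; the learning rate and outcome sequence are illustrative, and the labeling of errors by valence only loosely mirrors the independent pathways the authors describe later.

```python
# Minimal prediction-error sketch: an agent updates its value estimate for
# a choice from the difference between outcome and expectation. The
# prediction error itself is the dopamine-like "teaching signal".
def update(value, outcome, lr=0.1):
    prediction_error = outcome - value   # positive: better than expected
    return value + lr * prediction_error, prediction_error

value = 0.0
for outcome in [+1, +1, -1, +1, -1, -1]:   # monetary gains and losses
    value, pe = update(value, outcome)
    kind = "reward PE" if pe > 0 else "punishment PE"
    print(f"outcome {outcome:+d} -> {kind} {pe:+.2f}, new value {value:+.2f}")
```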

"Previously, research has shown that dopamine plays an important role in how animals learn from rewarding (and possibly punishing) experiences. But little work has been done to directly assess what dopamine does on fast timescales in the human brain," said Kenneth T. Kishida, Ph.D., associate professor of physiology and pharmacology and neurosurgery at Wake Forest University School of Medicine.

"This is the first study in humans to examine how dopamine encodes rewards and punishments and whether dopamine reflects an optimal teaching signal that is used in today's most advanced artificial intelligence research."

For the study, researchers on Kishida's team utilized fast-scan cyclic voltammetry, an electrochemical technique, paired with machine learning, to detect and measure dopamine levels in real time (i.e., 10 measurements per second). However, this method is challenging and can only be performed during invasive procedures such as deep-brain stimulation (DBS) brain surgery.

DBS is commonly employed to treat conditions such as Parkinson's disease, essential tremor, obsessive-compulsive disorder, and epilepsy.

Kishida's team collaborated with Atrium Health Wake Forest Baptist neurosurgeons Stephen B. Tatter, M.D., and Adrian W. Laxton, M.D., who are also both faculty members in the Department of Neurosurgery at Wake Forest University School of Medicine, to insert a carbon fiber microelectrode deep into the brain of three participants at Atrium Health Wake Forest Baptist Medical Center who were scheduled to receive DBS to treat essential tremor.

While the participants were awake in the operating room, they played a simple computer game. As they played the game, dopamine measurements were taken in the striatum, a part of the brain that is important for cognition, decision-making, and coordinated movements.

During the game, participants' choices were either rewarded or punished with real monetary gains or losses. The game was divided into three stages in which participants learned from positive or negative feedback to make choices that maximized rewards and minimized penalties. Dopamine levels were measured continuously, once every 100 milliseconds, throughout each of the three stages of the game.

"We found that dopamine not only plays a role in signaling both positive and negative experiences in the brain, but it seems to do so in a way that is optimal when trying to learn from those outcomes. What was also interesting is that it seems like there may be independent pathways in the brain that separately engage the dopamine system for rewarding versus punishing experiences.

"Our results reveal a surprising finding: these two pathways may encode rewarding and punishing experiences on slightly shifted timescales, separated by only 200 to 400 milliseconds," Kishida said.

Kishida believes that this level of understanding may lead to a better understanding of how the dopamine system is affected in humans with psychiatric and neurological disorders. Kishida said additional research is needed to understand how dopamine signaling is altered in psychiatric and neurological disorders.

"Traditionally, dopamine is often referred to as the pleasure neurotransmitter," Kishida said.

"However, our work provides evidence that this is not the way to think about dopamine. Instead, dopamine is a crucial part of a sophisticated system that teaches our brain and guides our behavior.

"That dopamine is also involved in teaching our brain about punishing experiences is an important discovery and may provide new directions in research to help us better understand the mechanisms underlying depression, addiction, and related psychiatric and neurological disorders."

Author: Kenneth T. Kishida
Source: Wake Forest Baptist Medical Center
Contact: Kenneth T. Kishida, Wake Forest Baptist Medical Center
Image: The image is credited to Neuroscience News

Original Research: Open access. "Sub-second fluctuations in extracellular dopamine encode reward and punishment prediction errors in humans" by Paul Sands et al. Science Advances

Abstract

Sub-second fluctuations in extracellular dopamine encode reward and punishment prediction errors in humans

In the mammalian brain, midbrain dopamine neuron activity is hypothesized to encode reward prediction errors that promote learning and guide behavior by causing rapid changes in dopamine levels in target brain regions.

This hypothesis (and alternatives regarding dopamine's role in punishment learning) has limited direct evidence in humans. We report intracranial, subsecond measurements of dopamine release in the human striatum, recorded while volunteers (i.e., patients undergoing deep brain stimulation surgery) performed a probabilistic reward and punishment learning choice task designed to test whether dopamine release encodes only reward prediction errors or whether dopamine release may also encode adaptive punishment learning signals.

Results demonstrate that extracellular dopamine levels can encode both reward and punishment prediction errors within distinct time intervals via independent valence-specific pathways in the human brain.

Read the original here:
Dopamine's Role in Learning from Rewards and Penalties - Neuroscience News

Implant Shows Promise in Restoring Cognitive Function After Brain Injury – Neuroscience News

Summary: A groundbreaking study successfully restored cognitive function in patients with lasting impairments from traumatic brain injuries using deep-brain-stimulation devices.

This innovative technique targets the central lateral nucleus in the thalamus to reactivate neural pathways associated with attention and arousal.

The study's participants, who had suffered moderate to severe brain injuries, showed remarkable improvements in mental processing speed, concentration, and daily life activities.

These findings offer new hope for individuals struggling with the long-term effects of traumatic brain injuries.

Source: Stanford

In 2001, Gina Arata was in her final semester of college, planning to apply to law school, when she suffered a traumatic brain injury in a car accident. The injury so compromised her ability to focus she struggled in a job sorting mail.

"I couldn't remember anything," said Arata, who lives in Modesto with her parents. "My left foot dropped, so I'd trip over things all the time. I was always in car accidents. And I had no filter; I'd get pissed off really easily."

Her parents learned about research being conducted at Stanford Medicine and reached out; Arata was accepted as a participant. In 2018, physicians surgically implanted a device deep inside her brain, then carefully calibrated the device's electrical activity to stimulate the networks the injury had subdued.

She noticed the difference immediately: When she was asked to list items in the produce aisle of a grocery store, she could rattle off fruits and vegetables. Then a researcher turned the device off, and she couldnt name any.

"Since the implant I haven't had any speeding tickets," Arata said. "I don't trip anymore. I can remember how much money is in my bank account. I wasn't able to read, but after the implant I bought a book, Where the Crawdads Sing, and loved it and remembered it. And I don't have that quick temper."

For Arata and four others, the experimental deep-brain-stimulation device restored, to different degrees, the cognitive abilities they had lost to brain injuries years before. The new technique, developed by Stanford Medicine researchers and collaborators from other institutions, is the first to show promise against the long-lasting impairments from moderate to severe traumatic brain injuries.

The results of the clinical trial will be published Dec. 4 in Nature Medicine.

Dimmed lights

More than 5 million Americans live with the lasting effects of moderate to severe traumatic brain injury: difficulty focusing, remembering, and making decisions. Though many recover enough to live independently, their impairments prevent them from returning to school or work and from resuming their social lives.

"In general, there's very little in the way of treatment for these patients," said Jaimie Henderson, MD, professor of neurosurgery and co-senior author of the study.

But the fact that these patients had emerged from comas and recovered a fair amount of cognitive function suggested that the brain systems supporting attention and arousal (the ability to stay awake, pay attention to a conversation, focus on a task) were relatively preserved.

These systems connect the thalamus, a relay station deep inside the brain, to points throughout the cortex, the brain's outer layer, which control higher cognitive functions.

"In these patients, those pathways are largely intact, but everything has been down-regulated," said Henderson, the John and Jene Blume-Robert and Ruth Halperin Professor. "It's as if the lights had been dimmed and there just wasn't enough electricity to turn them back up."

In particular, an area of the thalamus called the central lateral nucleus acts as a hub that regulates many aspects of consciousness.

"The central lateral nucleus is optimized to drive things broadly, but its vulnerability is that if you have a multifocal injury, it tends to take a greater hit because a hit can come from almost anywhere in the brain," said Nicholas Schiff, MD, a professor at Weill Cornell Medicine and co-senior author of the study.

The researchers hoped that precise electrical stimulation of the central lateral nucleus and its connections could reactivate these pathways, turning the lights back up.

Precise placement

In the trial, the researchers recruited five participants who had lasting cognitive impairments more than two years after moderate to severe traumatic brain injury. They were aged 22 to 60, with injuries sustained three to 18 years earlier.

The challenge was placing the stimulation device in exactly the right area, which varied from person to person. Each brain is shaped differently to begin with, and the injuries had led to further modifications.

"That's why we developed a number of tools to better define what that area was," Henderson said. The researchers created a virtual model of each brain that allowed them to pinpoint the location and level of stimulation that would activate the central lateral nucleus.

Guided by these models, Henderson surgically implanted the devices in the five participants.

"It's important to target the area precisely," he said. "If you're even a few millimeters off target, you're outside the effective zone."

A pioneering moment

After a two-week titration phase to optimize the stimulation, the participants spent 90 days with the device turned on for 12 hours a day.

Their progress was measured by a standard test of mental processing speed, called the trail-making test, which involves drawing lines connecting a jumble of letters and numbers.

"It's a very sensitive test of exactly the things that we're looking at: the ability to focus, concentrate and plan, and to do this in a way that is sensitive to time," Henderson said.

At the end of the 90-day treatment period, the participants had improved their speeds on the test, on average, by 32%, far exceeding the 10% the researchers had aimed for.

"The only surprising thing is it worked the way we predicted it would, which is not always a given," Henderson said.

For the participants and their families, the improvements were apparent in their daily lives. They resumed activities that had seemed impossible: reading books, watching TV shows, playing video games, or finishing a homework assignment. They felt less fatigued and could get through the day without napping.

The therapy was so effective the researchers had trouble completing the last part of their study. They had planned a blinded withdrawal phase, in which half the participants would be randomly selected to have their devices turned off.

Two of the patients declined, unwilling to take that chance. Of the three who participated in the withdrawal phase, one was randomized to have their device turned off. After three weeks without stimulation, that participant performed 34% slower on the trail-making test.

The clinical trial is the first to target this region of the brain in patients with moderate to severe traumatic brain injury, and it offers hope for many who have plateaued in their recovery.

"This is a pioneering moment," Schiff said. "Our goal now is to try to take the systematic steps to make this a therapy. This is enough of a signal for us to make every effort."

Researchers from Weill Cornell Medicine, Spaulding Rehabilitation Hospital in Boston, Harvard Medical School, the University of Utah, the University of Florida, Vanderbilt University, the University of Washington, the University of Bordeaux and the Cleveland Clinic also contributed to the study.

Funding: The study was supported by funding from the National Institutes of Health BRAIN Initiative and a grant from the Translational Science Center at Weill Cornell Medical College. Surgical implants were provided by Medtronic.

Author: Nina Bai
Source: Stanford
Contact: Nina Bai, Stanford
Image: The image is credited to Neuroscience News

Original Research: Closed access. "Thalamic deep brain stimulation in traumatic brain injury: a phase 1, randomized feasibility study" by Jaimie Henderson et al. Nature Medicine

Abstract

Thalamic deep brain stimulation in traumatic brain injury: a phase 1, randomized feasibility study

Converging evidence indicates that impairments in executive function and information-processing speed limit quality of life and social reentry after moderate-to-severe traumatic brain injury (msTBI). These deficits reflect dysfunction of frontostriatal networks for which the central lateral (CL) nucleus of the thalamus is a critical node. The primary objective of this feasibility study was to test the safety and efficacy of deep brain stimulation within the CL and the associated medial dorsal tegmental (CL/DTTm) tract.

Six participants with msTBI, who were between 3 and 18 years post-injury, underwent surgery with electrode placement guided by imaging and subject-specific biophysical modeling to predict activation of the CL/DTTm tract. The primary efficacy measure was improvement in executive control indexed by processing speed on part B of the trail-making test.

All six participants were safely implanted. Five participants completed the study and one was withdrawn for protocol non-compliance. Processing speed on part B of the trail-making test improved 15% to 52% from baseline, exceeding the 10% benchmark for improvement in all five cases.

CL/DTTm deep brain stimulation can be safely applied and may improve executive control in patients with msTBI who are in the chronic phase of recovery.

ClinicalTrials.gov identifier: NCT02881151.

Read this article:
Implant Shows Promise in Restoring Cognitive Function After Brain Injury - Neuroscience News

New neuroscience research upends traditional theories of early language learning in babies – PsyPost

New research suggests that babies primarily learn languages through rhythmic rather than phonetic information in their initial months. This finding challenges the conventional understanding of early language acquisition and emphasizes the significance of sing-song speech, like nursery rhymes, for babies. The study was published in Nature Communications.

Traditional theories have posited that phonetic information, the smallest sound elements of speech, forms the foundation of language learning. In language development, acquiring phonetic information means learning to produce and understand these different sounds, recognizing how they form words and convey meaning.

Infants were believed to learn these individual sound elements to construct words. However, recent findings from the University of Cambridge and Trinity College Dublin suggest a different approach to understanding how babies learn languages.

The new study was motivated by the desire to better understand how infants process speech in their first year of life, specifically focusing on the neural encoding of phonetic categories in continuous natural speech. Previous research in this field predominantly used behavioral methods and discrete stimuli, which limited insights into how infants perceive and process continuous speech. These traditional methods were often constrained to simple listening scenarios and few phonetic contrasts, which didn't fully represent natural speech conditions.

To address these gaps, the researchers used neural tracking measures to assess the neural encoding of the full phonetic feature inventory of continuous speech. This method allowed them to explore how infants brains process acoustic and phonetic information in a more naturalistic listening environment.

The study involved a group of 50 infants, monitored at four, seven, and eleven months of age. Each baby was full-term and without any diagnosed developmental disorders. The research team also included 22 adult participants for comparison, though data from five were later excluded.

In a carefully controlled environment, the infant participants were seated in a highchair, a meter away from their caregiver, inside a sound-proof chamber. The adults sat similarly in a normal chair. Each participant, whether infant or adult, was presented with eighteen nursery rhymes played via video recordings. These rhymes, sung or chanted by a native English speaker, were selected carefully to cover a range of phonetic features. The sounds were delivered at a consistent volume.

To capture how the infants' brains responded to these nursery rhymes, the researchers used a method called electroencephalography (EEG), which records patterns of brain activity. This technique is non-invasive and involved placing a soft cap with sensors on the infants' heads to measure their brainwaves.

The brainwave data was then analyzed using a sophisticated algorithm to decode the phonological information, giving the researchers a readout of how the infants' brains were processing the different sounds in the nursery rhymes. This approach is significant because it moves beyond the traditional method of comparing reactions to individual sounds or syllables, allowing a more comprehensive understanding of how continuous speech is processed.
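
As a rough illustration of what such "neural tracking" means computationally, the sketch below fits a linear decoder that reconstructs a phonetic-feature signal from multichannel EEG. All dimensions and the synthetic data are assumptions, and real analyses (for example, temporal response functions with time-lagged regressors) are considerably more involved.

```python
# Hedged sketch of a linear EEG decoder for a time-varying phonetic feature.
import numpy as np
from sklearn.linear_model import Ridge
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_samples, n_channels = 5000, 32          # ~50 s of EEG at 100 Hz (assumed)

phonetic_feature = rng.normal(size=n_samples)      # e.g., "nasal" activation
eeg = rng.normal(size=(n_samples, n_channels))
eeg[:, :8] += 0.3 * phonetic_feature[:, None]      # embed some tracking signal

# Train on the first half, evaluate decoding accuracy on the second half.
half = n_samples // 2
decoder = Ridge(alpha=1.0).fit(eeg[:half], phonetic_feature[:half])
predicted = decoder.predict(eeg[half:])
r, _ = pearsonr(predicted, phonetic_feature[half:])
print(f"decoding correlation r = {r:.2f}")
```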

Contrary to what was previously thought, the researchers found that infants do not process individual speech sounds reliably until they are about seven months old. Even at eleven months, when many babies start to say their first words, the processing of these sounds is still sparse.

Furthermore, the study discovered that phonetic encoding in babies emerged gradually over the first year. The readout of brain activity showed that infants' processing of speech sounds started with simpler sounds, like labial and nasal ones, and became more adult-like as they grew older.

"Our research shows that the individual sounds of speech are not processed reliably until around seven months, even though most infants can recognize familiar words like 'bottle' by this point," said study co-author Usha Goswami, a professor at the University of Cambridge. "From then, individual speech sounds are still added in very slowly, too slowly to form the basis of language."

The current study is part of the BabyRhythm project, which is led by Goswami.

First author Giovanni Di Liberto, a professor at Trinity College Dublin, added: "This is the first evidence we have of how brain activity relates to phonetic information changes over time in response to continuous speech."

The researchers propose that rhythmic speech, the pattern of stress and intonation in spoken language, is crucial for language learning in infants. They found that rhythmic speech information was processed by babies as early as two months old, and this processing predicted later language outcomes.

The findings challenge traditional theories of language acquisition that emphasize the rapid learning of phonetic elements. Instead, the study suggests that the individual sounds of speech are not processed reliably until around seven months, and the addition of these sounds into language is a gradual process.

The study underscores the importance of parents talking and singing to their babies, using rhythmic speech patterns such as those found in nursery rhymes. This could significantly influence language outcomes, as rhythmic information serves as a framework for adding phonetic information.

"We believe that speech rhythm information is the hidden glue underpinning the development of a well-functioning language system," said Goswami. "Infants can use rhythmic information like a scaffold or skeleton to add phonetic information on to. For example, they might learn that the rhythm pattern of English words is typically strong-weak, as in 'daddy' or 'mummy,' with the stress on the first syllable. They can use this rhythm pattern to guess where one word ends and another begins when listening to natural speech."

"Parents should talk and sing to their babies as much as possible or use infant-directed speech like nursery rhymes because it will make a difference to language outcome," she added.

While this study offers valuable insights into infant language development, it's important to recognize its limitations. The research focused on a specific demographic: full-term infants without developmental disorders, mainly from a monolingual English-speaking environment. Future research could look into how infants from different linguistic and cultural backgrounds, or those with developmental challenges, process speech.

Additionally, the study opens up new avenues for exploring how early speech processing relates to language disorders, such as dyslexia. This could be particularly significant in understanding and potentially intervening in these conditions early in life.

The study, "Emergence of the cortical encoding of phonetic features in the first year of life," was authored by Giovanni M. Di Liberto, Adam Attaheri, Giorgia Cantisani, Richard B. Reilly, Áine Ní Choisdealbha, Sinead Rocha, Perrine Brusini, and Usha Goswami.

Go here to see the original:
New neuroscience research upends traditional theories of early language learning in babies - PsyPost

Link Between Childhood Adversity and Muscle Dysmorphia in Youth – Neuroscience News

Summary: A new study reveals a significant association between adverse childhood experiences (ACEs) and symptoms of muscle dysmorphia in adolescents and young adults.

The research highlights how ACEs, such as domestic violence and emotional abuse, can lead to the pathological pursuit of muscularity as a coping mechanism. The study found that boys and young men who experienced five or more ACEs were particularly at risk for muscle dysmorphia symptoms.

The findings emphasize the importance of recognizing and addressing the impact of childhood trauma on mental health and body image.

Source: University of Toronto

A new study published in Clinical Social Work Journal found that adolescents and young adults who experienced adverse childhood experiences (ACEs) before the age of 18 were significantly more likely to experience symptoms of muscle dysmorphia.

With previous research showing that more than half of North American children and adolescents experience at least one adverse childhood experience in their lifetime, these new findings highlight the need for greater awareness of how adverse experiences in childhood (such as domestic violence, emotional abuse, and sexual abuse) and muscle dysmorphia (the pathological pursuit of muscularity) are linked.

"Those who experience adverse childhood experiences may engage in the pursuit of muscularity to compensate for experiences where they once felt inferior, small, and at risk, as well as to protect against future victimization," says lead author Kyle T. Ganson, PhD, MSW, an assistant professor at the University of Toronto's Factor-Inwentash Faculty of Social Work.

"The experience of adverse childhood experiences may also increase body dissatisfaction, specifically muscle dissatisfaction, which is a key feature of muscle dysmorphia."

Previous studies have shown that adverse experiences in childhood can lead to harmful health effects. While prior research has demonstrated that adverse childhood experiences are highly common in people with eating disorders and body dysmorphic disorder, few studies have looked at the association between adverse childhood experiences and muscle dysmorphia.

The study's researchers analyzed data from over 900 adolescents and young adults who participated in the Canadian Study of Adolescent Health Behaviors. In total, 16% of participants who experienced five or more adverse childhood experiences were at clinical risk for muscle dysmorphia, underscoring the significant traumatic effects that such experiences can have on mental health and well-being.

"Importantly, our study found that gender was an important factor in the relationship between adverse childhood experiences and muscle dysmorphia symptoms," says Ganson.

"Boys and young men in the study who had experienced five or more adverse childhood experiences had significantly greater muscle dysmorphia symptoms when compared to girls and young women."

The authors note that boys and young men who experience adverse childhood experiences may feel that their masculinity was threatened by these experiences. Therefore, they may engage in the pursuit of muscularity to demonstrate their adherence to masculine gender norms such as dominance, aggression, and power.

"It is important for health care professionals to assess for symptoms of muscle dysmorphia, including muscle dissatisfaction and functional impairment related to exercise routines and body image, among young people who have experienced adverse childhood experiences, particularly boys and young men," concludes Ganson.

Author: Dale Duncan
Source: University of Toronto
Contact: Dale Duncan, University of Toronto
Image: The image is credited to Neuroscience News

Original Research: Closed access. "Adverse Childhood Experiences and Muscle Dysmorphia Symptomatology: Findings from a Sample of Canadian Adolescents and Young Adults" by Kyle T. Ganson et al. Clinical Social Work Journal

Abstract

Adverse Childhood Experiences and Muscle Dysmorphia Symptomatology: Findings from a Sample of Canadian Adolescents and Young Adults

Adverse childhood experiences (ACEs) are relatively common among the general population and have been shown to be associated with eating disorders and body dysmorphic disorder. It remains relatively unknown whether ACEs are associated with muscle dysmorphia.

The aim of this study was to investigate the association between ACEs and muscle dysmorphia symptomatology among a sample of Canadian adolescents and young adults. A community sample of 912 adolescents and young adults ages 16–30 years across Canada participated in this study.

Participants completed a 15-item measure of ACEs (categorized as 0, 1, 2, 3, 4, or 5 or more) and the Muscle Dysmorphic Disorder Inventory. Multiple linear regression analyses were utilized to determine the association between the number of ACEs experienced and muscle dysmorphia symptomatology.

Participants who experienced five or more ACEs, compared to those who had experienced no ACEs, had more symptoms of muscle dysmorphia, as well as more symptoms related to Appearance Intolerance and Functional Impairment.

There was no association between ACEs and Drive for Size symptoms. Of participants who experienced five or more ACEs, 16.1% were at clinical risk for muscle dysmorphia, compared to 10.6% of those who experienced no ACEs (p = .018).

Experiencing ACEs, particularly five or more, was significantly associated with muscle dysmorphia symptomatology, expanding prior research on eating disorders and body dysmorphic disorder. Social workers should consider screening for symptoms of muscle dysmorphia among adolescents and young adults who experience ACEs.
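
Schematically, the multiple linear regression described in the abstract above could be set up as in the sketch below. The synthetic data and effect sizes are placeholders, not the study's numbers; only the sample size (912) and the ACE categories come from the abstract.

```python
# Sketch: regress muscle dysmorphia symptom scores on categorized ACE counts.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 912  # sample size reported above

categories = ["0", "1", "2", "3", "4", "5+"]
ace_category = rng.choice(categories, size=n)
# Assume a small bump in symptom scores for the highest-ACE group (invented).
mddi = rng.normal(30, 8, size=n) + np.where(ace_category == "5+", 5, 0)

df = pd.DataFrame({
    "ace": pd.Categorical(ace_category, categories=categories),
    "mddi": mddi,
})
model = smf.ols("mddi ~ C(ace)", data=df).fit()
print(model.summary().tables[1])  # coefficients relative to the 0-ACE group
```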

Link:
Link Between Childhood Adversity and Muscle Dysmorphia in Youth - Neuroscience News

The Role of Protein Misfolding in Neurodegenerative Diseases – Neuroscience News

Summary: Neurodegenerative diseases share a common factor: protein misfolding and deposits in the brain. Misfolded proteins can lead to toxic activity or the loss of the protein's physiological function, causing damage to neurons.

Recent research explores the cross-seeding phenomenon, where misfolded proteins in one disease can induce the aggregation of others. The study specifically focuses on the interaction between the prion protein and TDP-43, shedding light on how they collaborate to impact neurodegenerative diseases.

Source: RUB

The causes of neurodegenerative diseases such as Alzheimer's disease, Parkinson's disease, frontotemporal dementia, and prion diseases can be many and varied. But there is a common denominator, namely protein misfolding and the occurrence of protein deposits in the brain.

"Various approaches and models have shown that misfolded proteins play a crucial role in the disease process," says Jörg Tatzelt.

Still, there's an ongoing debate about the nature of the harmful protein species and how misfolded proteins selectively damage specific neurons.

Studies on genes associated with pathologies have revealed two basic mechanisms by which misfolded proteins can lead to neurodegeneration: Firstly, misfolding can cause the protein to acquire toxic activity. Secondly, the misfolding can lead to a loss of the physiological function of the protein, which impairs important physiological processes in the cell.

"The assumption used to be that every neurodegenerative disease was characterized by the misfolding of a specific protein," explains Jörg Tatzelt.

However, it has since been shown that misfolded proteins that are produced more frequently in one disease can also induce the aggregation of other proteins, a mechanism referred to as cross-seeding.

The prion protein and TDP-43

TDP-43 (TAR DNA-binding protein 43) is a protein that helps to translate genetic information into specific proteins. It thus helps to maintain the protein balance in nerve cells. The clumping of TDP-43 in the cell is a characteristic feature in the brains of patients suffering from amyotrophic lateral sclerosis or frontotemporal dementia.

Misfolding of the prion protein triggers prion diseases such as Creutzfeldt-Jakob disease. All research findings to date indicate that the misfolded prion protein acquires toxic activity. However, the exact mechanisms by which disease-associated prion proteins trigger the death of nerve cells are only partially understood.

TDP-43 loses its physiological function through PrP-mediated cross-seeding

Using in vitro and cell culture approaches, animal models and brain samples from patients with Creutzfeldt-Jakob disease, the researchers showed that misfolded prion proteins can trigger the clumping and inactivation of TDP-43.

The prion proteins interact with TDP-43 in vitro and in cells, thus inducing the formation of TDP-43 aggregates in the cell. As a result, TDP-43-dependent splicing activity in the cell nucleus is significantly reduced, leading to altered protein expression.
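To make the cross-seeding mechanism concrete, here is a toy kinetic sketch (a minimal illustration, not the authors' model): soluble TDP-43 converts into aggregates at a rate driven both by existing TDP-43 aggregates (self-seeding) and by misfolded PrP aggregates (cross-seeding). All rate constants and concentrations are hypothetical.

    # Toy cross-seeding kinetics (illustrative; all parameters hypothetical).
    def simulate(t_sol=1.0, t_agg=0.0, prp_agg=0.1,
                 k_self=0.5, k_cross=2.0, dt=0.01, steps=1000):
        for _ in range(steps):
            # conversion rate grows with both seed populations
            rate = (k_self * t_agg + k_cross * prp_agg) * t_sol
            t_sol -= rate * dt
            t_agg += rate * dt
        return t_agg

    with_prp = simulate(prp_agg=0.1)     # PrP seeds present
    without_prp = simulate(prp_agg=0.0)  # no cross-seeding
    print(f"aggregated TDP-43: {with_prp:.2f} with PrP seeds, "
          f"{without_prp:.2f} without")

In this toy model, with no PrP seeds and no pre-existing TDP-43 aggregates, nothing aggregates; adding even a small amount of misfolded PrP triggers near-complete conversion, which is the essence of cross-seeding.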

"Prion protein and TDP-43 are partners in crime in neurodegenerative diseases, so to speak," says Jörg Tatzelt.

An analysis of brain samples showed that in some Creutzfeldt-Jakob patients, TDP-43 aggregates were found alongside the prion protein deposits. The study has thus revealed a new mechanism by which disease-associated prion proteins can affect physiological signaling pathways through cross-seeding.

Author: Meike Driessen
Source: RUB
Contact: Meike Driessen, RUB
Image: The image is credited to Neuroscience News

Original Research: Closed access. "Cross-Seeding by Prion Protein Inactivates TDP-43" by Jörg Tatzelt et al. Brain

Abstract

Cross-Seeding by Prion Protein Inactivates TDP-43

A common pathological denominator of various neurodegenerative diseases is the accumulation of protein aggregates. Neurotoxic effects are caused by a loss of the physiological activity of the aggregating protein and/or a gain of toxic function of the misfolded protein conformers. In transmissible spongiform encephalopathies or prion diseases, neurodegeneration is caused by aberrantly folded isoforms of the prion protein (PrP).

However, it is poorly understood how pathogenic PrP conformers interfere with neuronal viability. Employing in vitro approaches, cell culture, animal models and patients' brain samples, we show that misfolded PrP can induce aggregation and inactivation of TAR DNA-binding protein-43 (TDP-43).

Purified PrP aggregates interact with TDP-43 in vitro and in cells and induce the conversion of soluble TDP-43 into non-dynamic protein assemblies. Similarly, mislocalized PrP conformers in the cytosol bind to and sequester TDP-43 in cytosolic aggregates.

As a consequence, TDP-43-dependent splicing activity in the nucleus is significantly decreased, leading to altered protein expression in cells with cytosolic PrP aggregates. Finally, we present evidence for cytosolic TDP-43 aggregates in neurons of transgenic flies expressing mammalian PrP and Creutzfeldt-Jakob disease patients.

Our study identified a novel mechanism of how aberrant PrP conformers impair physiological pathways by cross-seeding.

See the rest here:
The Role of Protein Misfolding in Neurodegenerative Diseases - Neuroscience News

Anthrobots: Tiny Biobots From Human Cells Heal Neurons – Neuroscience News

Summary: Researchers developed Anthrobots, microscopic biological robots made from human tracheal cells, demonstrating potential in healing and regenerative medicine.

These self-assembling multicellular robots, ranging from hair-width to pencil-point size, show remarkable healing effects, particularly in neuron growth across damaged areas in lab conditions.

Building on earlier Xenobot research, this study reveals that Anthrobots can be created from adult human cells without genetic modification, offering a new approach to patient-specific therapeutic tools.

Source: Tufts University

Researchers at Tufts University and Harvard University's Wyss Institute have created tiny biological robots, which they call Anthrobots, from human tracheal cells. The bots can move across a surface and have been found to encourage the growth of neurons across a region of damage in a lab dish.

The multicellular robots, ranging in size from the width of a human hair to the point of a sharpened pencil, were made to self-assemble and shown to have a remarkable healing effect on other cells. The discovery is a starting point for the researchers' vision of using patient-derived biobots as new therapeutic tools for regeneration, healing, and treatment of disease.

The work follows from earlier research in the laboratories of Michael Levin, Vannevar Bush Professor of Biology at Tufts University School of Arts & Sciences, and Josh Bongard at the University of Vermont, in which they created multicellular biological robots from frog embryo cells called Xenobots, capable of navigating passageways, collecting material, recording information, healing themselves from injury, and even replicating for a few cycles on their own.

At the time, researchers did not know if these capabilities were dependent on their being derived from an amphibian embryo, or if biobots could be constructed from cells of other species.

In the current study, published in Advanced Science, Levin, along with PhD student Gizem Gumuskaya, discovered that bots can in fact be created from adult human cells without any genetic modification, and that they demonstrate some capabilities beyond what was observed with the Xenobots.

The discovery starts to answer a broader question that the lab has posed: what are the rules that govern how cells assemble and work together in the body, and can the cells be taken out of their natural context and recombined into different body plans to carry out other functions by design?

In this case, researchers gave human cells, after decades of quiet life in the trachea, a chance to reboot and find ways of creating new structures and tasks.

"We wanted to probe what cells can do besides create default features in the body," said Gumuskaya, who earned a degree in architecture before coming into biology.

"By reprogramming interactions between cells, new multicellular structures can be created, analogous to the way stone and brick can be arranged into different structural elements like walls, archways or columns."

The researchers found that not only could the cells create new multicellular shapes, but they could move in different ways over a surface of human neurons grown in a lab dish and encourage new growth to fill in gaps caused by scratching the layer of cells.

Exactly how the Anthrobots encourage growth of neurons is not yet clear, but the researchers confirmed that neurons grew under the area covered by a clustered assembly of Anthrobots, which they called a superbot.

"The cellular assemblies we construct in the lab can have capabilities that go beyond what they do in the body," said Levin, who also serves as the director of the Allen Discovery Center at Tufts and is an associate faculty member of the Wyss Institute. "It is fascinating and completely unexpected that normal patient tracheal cells, without modifying their DNA, can move on their own and encourage neuron growth across a region of damage," said Levin.

"We're now looking at how the healing mechanism works, and asking what else these constructs can do."

The advantages of using human cells include the ability to construct bots from a patient's own cells to perform therapeutic work without the risk of triggering an immune response or requiring immunosuppressants. The bots only last a few weeks before breaking down, and so can easily be re-absorbed into the body after their work is done.

In addition, outside of the body, Anthrobots can only survive in very specific laboratory conditions, and there is no risk of exposure or unintended spread outside the lab. Likewise, they do not reproduce, and they have no genetic edits, additions or deletions, so there is no risk of their evolving beyond existing safeguards.

How Are Anthrobots Made?

Each Anthrobot starts out as a single cell, derived from an adult donor. The cells come from the surface of the trachea and are covered with hairlike projections called cilia that wave back and forth. The cilia help the tracheal cells push out tiny particles that find their way into air passages of the lung.

We all experience the work of ciliated cells when we take the final step of expelling the particles and excess fluid by coughing or clearing our throats. Earlier studies by others had shown that when the cells are grown in the lab, they spontaneously form tiny multicellular spheres called organoids.

The researchers developed growth conditions that encouraged the cilia to face outward on the organoids. Within a few days, the organoids started moving around, driven by the cilia acting like oars. The researchers noted different shapes and types of movement, the first important feature observed of the biorobotics platform.

Levin says that if other features could be added to the Anthrobots (for example, contributed by different cells), they could be designed to respond to their environment, and travel to and perform functions in the body, or help build engineered tissues in the lab.

The team, with the help of Simon Garnier at the New Jersey Institute of Technology, characterized the different types of Anthrobots that were produced. They observed that bots fell into a few discrete categories of shape and movement, ranging in size from 30 to 500 micrometers (from the thickness of a human hair to the point of a sharpened pencil), filling an important niche between nanotechnology and larger engineered devices.

Some were spherical and fully covered in cilia, and some were irregular or football shaped with more patchy coverage of cilia, or just covered with cilia on one side. They traveled in straight lines, moved in tight circles, combined those movements, or just sat around and wiggled. The spherical ones fully covered with cilia tended to be wigglers.

The Anthrobots with cilia distributed unevenly tended to move forward for longer stretches in straight or curved paths. They usually survived about 45-60 days in laboratory conditions before they naturally biodegraded.
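The categorization described above, sorting bots into discrete shape-and-movement types, is the kind of task a simple clustering pass can illustrate. The sketch below is not the team's actual pipeline; it clusters hypothetical per-bot feature vectors (cilia coverage, mean speed, path straightness) into three groups with a small hand-rolled k-means.

    # Illustrative clustering of hypothetical bot features into discrete
    # movement categories (not the study's actual analysis pipeline).
    import numpy as np

    rng = np.random.default_rng(0)
    # hypothetical (cilia coverage, mean speed, path straightness) per bot
    features = np.vstack([
        rng.normal([0.9, 0.1, 0.1], 0.05, (30, 3)),  # fully ciliated wigglers
        rng.normal([0.5, 0.6, 0.9], 0.05, (30, 3)),  # patchy cilia, straight movers
        rng.normal([0.5, 0.5, 0.3], 0.05, (30, 3)),  # patchy cilia, circlers
    ])

    def kmeans(x, k=3, iters=50):
        centers = x[rng.choice(len(x), k, replace=False)]
        for _ in range(iters):
            labels = np.argmin(((x[:, None] - centers) ** 2).sum(-1), axis=1)
            centers = np.array([x[labels == j].mean(axis=0)
                                if np.any(labels == j) else centers[j]
                                for j in range(k)])
        return labels

    labels = kmeans(features)
    print("bots per discovered category:", np.bincount(labels, minlength=3))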

"Anthrobots self-assemble in the lab dish," said Gumuskaya, who created the Anthrobots. "Unlike Xenobots, they don't require tweezers or scalpels to give them shape, and we can use adult cells, even cells from elderly patients, instead of embryonic cells. It's fully scalable; we can produce swarms of these bots in parallel, which is a good start for developing a therapeutic tool."

Little Healers

Because Levin and Gumuskaya ultimately plan to make Anthrobots with therapeutic applications, they created a lab test to see how the bots might heal wounds. The model involved growing a two-dimensional layer of human neurons, and simply by scratching the layer with a thin metal rod, they created an open wound devoid of cells.

To ensure the gap would be exposed to a dense concentration of Anthrobots, the researchers created superbots, clusters that naturally form when the Anthrobots are confined to a small space. The superbots were made up primarily of circlers and wigglers, so they would not wander too far from the open wound.

Although it might be expected that genetic modifications of Anthrobot cells would be needed to help the bots encourage neural growth, surprisingly the unmodified Anthrobots triggered substantial regrowth, creating a bridge of neurons as thick as the rest of the healthy cells on the plate.

Neurons did not grow in the wound where Anthrobots were absent. At least in the simplified 2D world of the lab dish, the Anthrobot assemblies encouraged efficient healing of live neural tissue.
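A scratch assay like this is typically scored by how much of the original wound region becomes re-occupied by cells. A minimal sketch of that kind of measurement follows, using synthetic binary masks as stand-ins for segmented microscopy images (hypothetical data, not the study's imaging pipeline).

    # Scoring scratch-wound closure from binary cell masks (illustrative;
    # the arrays are synthetic stand-ins for segmented microscopy images).
    import numpy as np

    def closure_fraction(wound_mask, cells_after):
        # wound_mask: True inside the original scratch
        # cells_after: True where cells are present at the later time point
        wound_area = wound_mask.sum()
        return (wound_mask & cells_after).sum() / wound_area if wound_area else 0.0

    field = np.ones((100, 100), dtype=bool)   # confluent neuron layer
    field[:, 40:60] = False                   # scratch: a cell-free gap
    wound = ~field
    healed = field.copy()                     # after treatment: ~80% regrowth
    healed[:, 40:60] = np.random.default_rng(1).random((100, 20)) < 0.8
    print(f"wound closure: {closure_fraction(wound, healed):.0%}")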

According to the researchers, further development of the bots could lead to other applications, including clearing plaque buildup in the arteries of atherosclerosis patients, repairing spinal cord or retinal nerve damage, recognizing bacteria or cancer cells, or delivering drugs to targeted tissues. The Anthrobots could in theory assist in healing tissues, while also laying down pro-regenerative drugs.

Making New Blueprints, Restoring Old Ones

Gumuskaya explained that cells have the innate ability to self-assemble into larger structures in certain fundamental ways.

"The cells can form layers, fold, make spheres, sort and separate themselves by type, fuse together, or even move," Gumuskaya said.

"Two important differences from inanimate bricks are that cells can communicate with each other and create these structures dynamically, and each cell is programmed with many functions, like movement, secretion of molecules, detection of signals and more. We are just figuring out how to combine these elements to create new biological body plans and functions, different from those found in nature."

Taking advantage of the inherently flexible rules of cellular assembly helps the scientists construct the bots, but it can also help them understand how natural body plans assemble, how the genome and environment work together to create tissues, organs, and limbs, and how to restore them with regenerative treatments.

Author: Mike Silver
Source: Tufts University
Contact: Mike Silver, Tufts University
Image: The image is credited to Gizem Gumuskaya, Tufts University

Original Research: Open access. "Motile Living Biobots Self-Construct from Adult Human Somatic Progenitor Seed Cells" by Michael Levin et al. Advanced Science

Abstract

Motile Living Biobots Self-Construct from Adult Human Somatic Progenitor Seed Cells

Fundamental knowledge gaps exist about the plasticity of cells from adult soma and the potential diversity of body shape and behavior in living constructs derived from genetically wild-type cells.

Here anthrobots are introduced, a spheroid-shaped multicellular biological robot (biobot) platform with diameters ranging from 30 to 500 microns and cilia-powered locomotive abilities.

Each Anthrobot begins as a single cell, derived from the adult human lung, and self-constructs into a multicellular motile biobot after being cultured in extracellular matrix for 2 weeks and transferred into a minimally viscous habitat.

Anthrobots exhibit diverse behaviors, with motility patterns ranging from tight loops to straight lines and speeds ranging from 5 to 50 microns per second. The anatomical investigations reveal that this behavioral diversity is significantly correlated with their morphological diversity.

Anthrobots can assume morphologies with fully polarized or wholly ciliated bodies and spherical or ellipsoidal shapes, each related to a distinct movement type. Anthrobots are found to be capable of traversing, and inducing rapid repair of scratches in, cultured human neural cell sheets in vitro.

By controlling microenvironmental cues in bulk, novel structures, with new and unexpected behavior and biomedically-relevant capabilities, can be discovered in morphogenetic processes without direct genetic editing or manual sculpting.

View original post here:
Anthrobots: Tiny Biobots From Human Cells Heal Neurons - Neuroscience News

Neuroscience and Neurology: New Insights into Neurodegenerative Diseases – Medriva

Recent findings in neuroscience and neurology have started to shed light on the intricate connections between personality traits, dementia diagnoses, Parkinson's disease, and multiple sclerosis. These insights not only contribute to the scientific community's growing knowledge of these complex conditions but also potentially pave the way for innovative treatment options.

A recent meta-analysis revealed that personality traits are strong predictors of dementia diagnoses. However, the association between these traits and neuropathology at autopsy was not consistently found. This suggests that while personality traits may help predict the risk of dementia, they may not directly correlate with the physical manifestations of the disease in the brain.

Another significant finding is that neuronally derived extracellular vesicle-associated alpha-synuclein in serum correctly identified 80% of at-risk individuals who phenoconverted to Parkinson's disease and related dementia. This suggests the biomarker could be instrumental in identifying at-risk individuals before the disease develops.

Groundbreaking treatment approaches are also being explored. High-dose nicotinamide riboside, a form of vitamin B3, showed promise in easing Parkinson's motor symptoms in a phase I trial. Additionally, a phase I study demonstrated the tolerability of injecting allogeneic neural stem cells into the brains of people with secondary progressive multiple sclerosis, suggesting potential new therapeutic approaches for these neurodegenerative diseases.

Another area of recent research has focused on the link between blood-based biomarkers of amyloid, tau, and neurodegeneration and domain-specific neuropsychological performance in women with and without HIV. The results could have significant implications for understanding cognitive impairment in both the general population and those living with HIV.

The role of the TREM2 protein in neurodegeneration has also been a focus of recent research. Specifically, a mutation in this protein may promote synapse loss in mice, contributing to cognitive decline. Furthermore, salty immune cells surrounding the brain were associated with hypertension-induced dementia in mice, suggesting a possible link between dietary salt intake, hypertension, and dementia.

Finally, a Norwegian study found a moderate association between objectively measured hearing impairment and dementia in people aged 70 to 85. This correlation underlines the importance of early detection and intervention in hearing impairment to potentially reduce the risk of dementia.

In conclusion, these developments in neuroscience and neurology are expanding our understanding of neurodegenerative diseases and offering new avenues for potential treatments. The ongoing research in this field continues to bring hope for those affected by these conditions and their families.

Read more here:
Neuroscience and Neurology: New Insights into Neurodegenerative Diseases - Medriva