Category Archives: Neuroscience

Autism May Be Linked to Different Perceptions of Movement in Infancy – Neuroscience News

Summary: At the age of five months, children who are later diagnosed with autism show different activity in the visual cortex when viewing certain types of movement. Findings reveal that those with ASD perceive their surroundings differently from a young age, and this may affect their learning and overall development.

Source: Uppsala University

A new study from researchers at Uppsala University and Karolinska Institutet shows that children who go on to develop symptoms of autism have different activity in their brain's visual cortex from as early as five months when looking at certain types of movement.

This finding may indicate that autistic people perceive their surroundings in a different way even from a very young age, which could affect their development and learning.

Autism is defined by challenges with social communication together with restricted and repetitive features in behavior and interests.

However, research shows that autistic people also have a different perception of and reaction to various stimuli. In particular, many studies have shown a connection between autism and difficulties in perceiving whole units in visual movement patterns, such as when a flock of birds forms a common movement in the sky.

Being able to integrate movement signals into an overall figure is important in terms of the ability to correctly perceive how objects and surfaces move in relation to the viewer.

The new study, published in Communications Biology, examined activity in the brains of five-month-old infants sitting on their parents' laps while viewing different types of visual information.

The researchers measured both how the brain reacted to simple visual changes in light (such as a line changing direction) and more complex patterns where the ability to see whole units was put to the test.

The assessment used EEG technology, which records weak electrical signals created naturally in the brain's cerebral cortex when processing information. The signals were measured using electrodes placed around the head on a specially adapted cap.

The infants who later on, at age three, exhibited many of the classic symptoms of autism had different brain activity when complex movement patterns were shown on the screen.

This suggests that the brains of autistic people process visual motion differently from early infancy. Simpler visual changes, on the other hand, produced a clear and similar response in all of the children's brains.

"Seeing this difference several years before the symptoms of autism develop is something completely new, and contributes to our understanding of what early development looks like in autism. Autism has a strong hereditary component, and it is likely that the differences we see in visual perception in infancy are connected to genetic differences," explains Terje Falck-Ytter, Professor at the Department of Psychology at Uppsala University and principal investigator of the study.

We can only guess at the infants' subjective experience of visual motion. However, given the results and previous studies of the relationship between brain activity and experience in adults with the diagnosis, it is plausible to believe that they experience it in a different way.

It is also possible that this finding is related to the perception of complex social movement, such as the interpretation of facial expressions. This is something we want to investigate in future studies.

The study is part of the larger research project EASE (Early Autism Sweden), a collaboration between Uppsala University and the Center of Neurodevelopmental Disorders at Karolinska Institutet (KIND).

When the children were three years old, a standardized play observation was carried out with a psychologist, and based on this, each child received a score corresponding to symptoms of autism.

The study also included a control group of over 400 infants, meaning the researchers had good knowledge of how children's brains usually react to these stimuli.

Autism cannot currently be diagnosed with good accuracy until around two to three years of age, but we hope that more knowledge about early development will enable us to make these assessments earlier in the future. This would make it easier for families to get support and hopefully individualized training sooner. It could also stimulate completely new research into early interventions.

"The results of this study showed statistically significant differences between groups, but it is important to emphasize that the accuracy of the EEG measurement was too low to be able to predict the development of individual children. It is therefore too early to tell whether this method will have clinical value for early detection, for example," concludes Falck-Ytter.

Author: Press Office. Source: Uppsala University. Contact: Press Office, Uppsala University. Image: The image is in the public domain.

Original Research: Open access. "Global motion processing in infants' visual cortex and the emergence of autism" by Irzam Hardiansyah et al. Communications Biology

Abstract

Global motion processing in infants' visual cortex and the emergence of autism

Autism is a heritable and common neurodevelopmental condition, with behavioural symptoms typically emerging around age 2 to 3 years. Differences in basic perceptual processes have been documented in autistic children and adults.

Specifically, data from many experiments suggest links between autism and alterations in global visual motion processing (i.e., when individual motion information is integrated to perceive an overall coherent pattern).

Yet, no study has investigated whether a distinctive organization of global motion processing precedes the emergence of autistic symptoms in early childhood.

Here, using a validated infant electroencephalography (EEG) experimental paradigm, we first establish the normative activation profiles for global form, global motion, local form, and local motion in the visual cortex based on data from two samples of 5-month-old infants (total n=473).

Further, in a sample of 5-month-olds at elevated likelihood of autism (n=52), we show that a different topographical organization of global motion processing is associated with autistic symptoms in toddlerhood.

These findings advance the understanding of neural organization of infants' basic visual processing, and its role in the development of autism.

Overwhelmed? Your Astrocytes Can Help With That – Neuroscience News

Summary: New research reveals a newly discovered brain circuit that involves astrocytes, a type of brain cell that tunes into and moderates the chatter between overactive neurons. This discovery could hold the key to treating attention disorders like ADHD, and sheds new light on how the brain processes information when overwhelmed.

Source: UCSF

A brimming inbox on Monday morning sets your head spinning. You take a moment to breathe and your mind clears enough to survey the emails one by one. This calming effect occurs thanks to a newly discovered brain circuit involving a lesser-known type of brain cell, the astrocyte.

According to new research from UC San Francisco, astrocytes tune into and moderate the chatter between overactive neurons.

This new brain circuit, described March 30, 2023, in Nature Neuroscience, plays a role in modulating attention and perception, and may hold a key to treating attention disorders like ADHD that are neither well understood nor well treated, despite an abundance of research on the role of neurons.

Scientists found that noradrenaline, a neurotransmitter that can be thought of as adrenaline for the brain, sends one chemical message to neurons to be more alert, while sending another to astrocytes to quiet down the over-active neurons.

"When you're startled or overwhelmed, there's so much activity going on in your brain that you can't take in any more information," said Kira Poskanzer, PhD, an assistant professor of biochemistry and biophysics and senior author of the study.

Until this study, it was assumed that brain activity just quieted down with time as the amount of noradrenaline in the brain dissipated.

"We've shown that, in fact, it's astrocytes pulling the handbrake and driving the brain to a more relaxed state," Poskanzer said.

A Missing Piece

Astrocytes are star-shaped cells woven between the brain's neurons in a grid-like pattern. Their many star arms connect a single astrocyte to thousands of synapses, which are the connections between neurons. This arrangement positions astrocytes to eavesdrop on neurons and regulate their signals.

These cells have traditionally been thought of as simple support cells for neurons, but new research in the last decade shows that astrocytes respond to a variety of neurotransmitters and may have pivotal roles in neurologic conditions like Alzheimer's disease.

Michael Reitman, PhD, first author of the paper who was a graduate student in Poskanzer's lab when he did the research, wanted to know whether astrocyte activity could explain how the brain recovers from a burst of noradrenaline.

"It seemed like there was a central piece missing in the explanation of how our brains recover from that acute stress," said Reitman. "There are these other cells right nearby which are sensitive to noradrenaline and might help coordinate what the neurons around them are doing."

Gatekeepers of Perception

The team focused on understanding perception, or how the brain processes sensory experiences, which can be quite different depending on what state a person (or any other animal) is in at the time.

For example, if you hear thunder while cozying up indoors, the sound may seem relaxing and your brain may even tune it out. But if you hear the same sound out on a hike, your brain may become more alert and focused on safety.

"These differences in our perception of a sensory stimulus happen because our brains are processing the information differently, based on the environment and state we're already in," said Poskanzer, who is also a member of the Kavli Institute for Fundamental Neuroscience.

"Our team is trying to understand how this processing looks different in the brain under these different circumstances," she said.

Completing the Puzzle

To do that, Poskanzer and Reitman looked at how mice responded when given a drug that stimulates the same receptors that respond to noradrenaline. They then measured how much the mice's pupils dilated and looked at brain signals in the visual cortex.

But what they found seemed counterintuitive: rather than exciting the mice, the drug relaxed them.

"This result really didn't make sense, given the models we have, and that led us down the path of thinking that another cell type could be important here," Poskanzer said.

It turns out that these two things are yoked together in a feedback circuit. Given how many neurons each astrocyte can talk to, this system makes them really important and nuanced regulators of our perception.

The researchers suspect that astrocytes may play a similar role for other neurotransmitters in the brain, since being able to transition smoothly from one brain state to another is essential for survival.

"We didn't expect the cycle to look like this, but it makes so much sense now," Poskanzer said. "It's so elegant."

Authors: Additional authors on the paper include Vincent Tse, Drew D. Willoughby, Alba Peinado, Bat-Erdene Myagmar, and Paul C. Simpson, Jr. of UCSF, Xuelong Mi and Guoqiang Yu of Virginia Polytechnic Institute and State University, and Alexander Aivazidis and Omer A. Bayraktar of the Wellcome Sanger Institute.

Funding: This work was supported by grants from the National Institutes of Health (R01NS099254, R01MH121446, R01MH110504) and the National Science Foundation (grant no. 1750931 and CAREER 1942360).

Author: Robin Marks. Source: UCSF. Contact: Robin Marks, UCSF. Image: The image is in the public domain.

Original Research: Closed access. "Norepinephrine links astrocytic activity to regulation of cortical state" by Kira Poskanzer et al. Nature Neuroscience

Abstract

Norepinephrine links astrocytic activity to regulation of cortical state

Cortical state, defined by population-level neuronal activity patterns, determines sensory perception. While arousal-associated neuromodulators, including norepinephrine (NE), reduce cortical synchrony, how the cortex resynchronizes remains unknown.

Furthermore, general mechanisms regulating cortical synchrony in the wake state are poorly understood. Using in vivo imaging and electrophysiology in mouse visual cortex, we describe a critical role for cortical astrocytes in circuit resynchronization.

We characterize astrocytes' calcium responses to changes in behavioral arousal and NE, and show that astrocytes signal when arousal-driven neuronal activity is reduced and bi-hemispheric cortical synchrony is increased. Using in vivo pharmacology, we uncover a paradoxical, synchronizing response to Adra1a receptor stimulation.

We reconcile these results by demonstrating that astrocyte-specific deletion of Adra1a enhances arousal-driven neuronal activity, while impairing arousal-related cortical synchrony.

Our findings demonstrate that astrocytic NE signaling acts as a distinct neuromodulatory pathway, regulating cortical state and linking arousal-associated desynchrony to cortical circuit resynchronization.

Hear, Hear! How Music and Sound Soothes and Connects Us – Neuroscience News

Summary: Researchers explore how sounds and music have the power to soothe, energize, and connect us to one another.

Source: USC

When Ludwig van Beethoven began losing his hearing as a young man in 1798, he blamed it on a fall, though modern researchers believe illness, lead poisoning or a middle ear deformity could have been factors.

Whatever the cause, the hearing impairment did nothing to sweeten the acclaimed composer's notoriously sour disposition, understandably contributing to his melancholy and ill temper.

Today, more than 200 years after the onset of Beethoven's hearing problems, we know far more about the nature of sound and the causes of hearing loss. We also better understand how the brain comprehends language, and the power of music to affect brain activity.

But while we now have the means to protect against certain diseases that affect hearing, addressing the most common cause of hearing loss, aging, has been more challenging. The effects of aging on hearing can be slowed or partially ameliorated without biomedical devices, but they cannot be reversed yet.

New hope for the deaf

USC Dornsife's Charles McKenna, professor of chemistry, believes he, along with scientists at Harvard Medical School's Massachusetts Eye and Ear Institute, may have discovered a drug to repair inner ear cells that are damaged not only from aging, but from prolonged exposure to noise. This drug has the potential to treat damaged areas without being washed away by the ear's natural fluid, a crucial breakthrough.

McKenna explains that neural sensors turn the vibrations we perceive as sounds into electrical impulses that the brain can register and decipher. When these sensors are damaged, hearing loss and other issues occur.

"A nerve can send a signal to the brain that lets the brain say, 'This is a Mozart composition' or 'This is someone speaking,'" McKenna says.

The theory is that if you could regenerate the neural sensors, you would restore hearing to those who have lost it. Though there are drugs that appear to have the ability to induce regeneration of these neural sensors, successfully deploying those drugs has been a tremendous challenge.

First, the cochlea, the part of the inner ear where damaged cells are located, is bony, making it difficult for drugs to adhere to it. Second, even if a compound is shown to attach to the structure, the inner ear's naturally occurring fluid tends to wash it away before it can work.

Based on encouraging findings from their latest study, McKenna says he and his colleagues are optimistic their compound will adhere to the cochlea long enough to be effective. With more research, they hope to prove its efficacy.

The Power of Music

While Beethoven struggled with hearing problems, his music, perhaps paradoxically, may help improve the brain functions of others.

Assal Habibi, head of the Brain & Music Lab at USC Dornsife's Brain and Creativity Institute and associate professor (research) of psychology, explores how music and song affect brain activity using data collected through electroencephalography and neuroimaging.

She and her colleagues have found that music can have several quantifiable benefits for the human brain, particularly in children. For example, playing music can help children hone their concentration skills.

"Music training helps with what is known as speech-in-noise perception, for example, when you're in a noisy environment and someone is calling your name or saying something you need to hear," Habibi says. "This is a crucial ability for children in a noisy classroom who need to be able to hear the teacher and tune out background noise."

Music training has also been shown to help some children reach developmental milestones faster. If ongoing research can establish the connection, music training might be able to prevent the onset of certain behavioral and learning issues and lead to new therapies for children who struggle with them.

"One hypothesis is that if music can assist children in reaching developmental milestones faster, for example if they develop language skills earlier, they will be able to better express their feelings and communicate more effectively," Habibi says.

The Science of Language

While music therapy can help individuals sharpen their ability to discern the signal from the noise, linguistics is the discipline that deals with how we create and process the signal: speech itself.

Linguists specialize in the building blocks of language, or how sounds combine to create a word that is understood by different people, despite the fact that no two people will speak a word completely identically.

Dani Byrd, professor of linguistics at USC Dornsife, examines how the vocal tract creates and combines these sounds in everyday speech, and how languages evolve to structure these sounds for encoding information.

"As a linguist I ask, 'What are the rules that languages use to build their structures, to build their words and phrases? How do they differ from language to language?' And I look at how and why we can understand these sounds as we do."

Byrd says our complicated and incredibly nuanced sense of hearing mirrors a corresponding complexity in how we shape our words and sounds to convey meaning.

"The sensory cells of the inner ear are the most sensitive mechanoreceptors of the body. They have movements on a nanometer scale," she says. "When air pressure fluctuations move your eardrum, that creates movement and an electrochemical cascade inside the inner ear."

Our sense of hearing has the power to move us in myriad ways. It also has the power to inspire wonder at its many as-yet-unsolved mysteries: Why is it that we understand a gasp as a signal of surprise, or possibly fear? Why does the key of D minor often provoke feelings of sadness in one listener but not another? And how is it that our brain can take these vibrations of air and transform them into words, emotions, or messages?

"Isn't it amazing," says Byrd, "that these tiny fluctuations in air pressure can make you laugh or cry, can convey urgency, can make you fall in love?"

Author: Meredith McGroarty. Source: USC. Contact: Meredith McGroarty, USC. Image: The image is in the public domain.

The Social Consequences of Using AI in Conversations – Neuroscience News

Summary: When using AI-enabled chat tools, people have more effective conversations, perceive each other more positively, and use more positive language.

Source: Cornell University

Cornell University researchers have found people have more efficient conversations, use more positive language and perceive each other more positively when using an artificial intelligence-enabled chat tool.

The study, published in Scientific Reports, examined how the use of AI in conversations impacts the way that people express themselves and view each other.

"Technology companies tend to emphasize the utility of AI tools to accomplish tasks faster and better, but they ignore the social dimension," said Malte Jung, associate professor of information science.

We do not live and work in isolation, and the systems we use impact our interactions with others.

However, in addition to greater efficiency and positivity, the group found that when participants think their partner is using more AI-suggested responses, they perceive that partner as less cooperative, and feel less affiliation toward them.

"I was surprised to find that people tend to evaluate you more negatively simply because they suspect that you're using AI to help you compose text, regardless of whether you actually are," said Jess Hohenstein, lead author and postdoctoral researcher. "This illustrates the persistent overall suspicion that people seem to have around AI."

For their first experiment, researchers developed a smart-reply platform the group called Moshi (Japanese for "hello"), patterned after the now-defunct Google Allo (French for "hello"), the first smart-reply platform, unveiled in 2016. Smart replies are generated by large language models (LLMs), which predict plausible next responses in chat-based interactions.

Participants were asked to talk about a policy issue and assigned to one of three conditions: both participants can use smart replies; only one participant can use smart replies; or neither participant can use smart replies.

Researchers found that using smart replies increased communication efficiency, positive emotional language and positive evaluations by communication partners. On average, smart replies accounted for 14.3% of sent messages (1 in 7).

But participants whose partners suspected them of responding with smart replies were evaluated more negatively than those who were thought to have typed their own responses, consistent with common assumptions about the negative implications of AI.

"While AI might be able to help you write," Hohenstein said, "it's altering your language in ways you might not expect, especially by making you sound more positive. This suggests that by using text-generating AI, you're sacrificing some of your own personal voice."

Said Jung: "What we observe in this study is the impact that AI has on social dynamics and some of the unintended consequences that could result from integrating AI in social contexts. This suggests that whoever is in control of the algorithm may have influence on people's interactions, language and perceptions of each other."

Funding: This work was supported by the National Science Foundation.

Author: Becka Bowyer. Source: Cornell University. Contact: Becka Bowyer, Cornell University. Image: The image is in the public domain.

Original Research: Open access. "Artificial intelligence in communication impacts language and social relationships" by Malte Jung et al. Scientific Reports

Abstract

Artificial intelligence in communication impacts language and social relationships

Artificial intelligence (AI) is already widely used in daily communication, but despite concerns about AI's negative effects on society, the social consequences of using it to communicate remain largely unexplored.

We investigate the social consequences of one of the most pervasive AI applications, algorithmic response suggestions (smart replies), which are used to send billions of messages each day.

Two randomized experiments provide evidence that these types of algorithmic recommender systems change how people interact with and perceive one another in both pro-social and anti-social ways.

We find that using algorithmic responses changes language and social relationships. More specifically, it increases communication speed, use of positive emotional language, and conversation partners evaluate each other as closer and more cooperative.

However, consistent with common assumptions about the adverse effects of AI, people are evaluated more negatively if they are suspected to be using algorithmic responses.

Thus, even though AI can increase the speed of communication and improve interpersonal perceptions, the prevailing anti-social connotations of AI undermine these potential benefits if used overtly.

warm nest by ark-shelter uses neuroscience to achieve comfort – Designboom

Warm nest provides a calming environment for recovery

Ark-shelter and ARCHEKTA showcase their expertise in creating calm environments with Warm Nest, a Maggie Center in Belgium. The healthcare facility is designed to provide a comfortable setting while patients receive cancer treatment and heal. Maggie Keswick Jencks conceptualized the Maggie Center after experiencing cancer diagnosis, treatment, remission and recurrence. Her insights were valuable in pioneering a new architectural approach to cancer care. In Warm Nest, each room is specifically designed to reflect the level of intimacy and the emotions that occur within.

Ark-shelter and ARCHEKTA showcase their expertise in creating calm environments with Warm Nest

images by BoysPlayNice | @boysplaynice

The design practice Ark-shelter specializes in prefabricated dwelling constructions with organic materials, dark tones and heavy glazing, exuding a sense of peace. The feelings evoked in these dwellings are what AZ Zeno wished to capture in the healing center. In the design process, Ark-shelter teamed up with ARCHEKTA and collaborated with a neuroscientist in order to better grasp the influence of space on human consciousness. The task was to carefully analyze the various emotional touchpoints that occur through cancer treatment and to construct brain-healthy spaces.

the healthcare facility is designed to provide a comfortable setting while patients heal

The concept for Warm Nest is a welcoming, non-intrusive space that focuses on calm gatherings, time to regain strength, and the journey to recovery. A soft ramp leads to the entrance, and almost every inch of the building has views to the outdoors. The light wood interiors, coupled with abundant windows, remove the hospital look and feel from the facility. A comfortable courtyard provides a serene slice of nature while protecting from the wind.

Maggie Centers pioneer a new architectural approach to cancer care

Learning to Love Music – Neuroscience News

Summary: Researchers report on how using music therapy can help improve social and emotional formation in children on the autism spectrum.

Source: University of Delaware

In an inviting space full of vibrant bold colors, fiber optic curtains, and a vibrating haptic chair, sounds of "Row, Row, Row Your Boat" and other popular children's songs fill the air, and children with autism are becoming their own composers, learning to love music.

This is the scene in the Sensory Room at the Route 9 Library and Innovation Center, where the music is theirs to alter as they see fit. When children like what they hear, they pause to listen more closely, smile, or dance. Other children focus intently as they explore the many combinations of sound available at their fingertips.

Some young listeners take delight in adding a drumbeat or fast countermelody while others seem to prefer a calmer rendition of a familiar tune. As they listen, these children learn what they like to listen to and what they don't, providing a valuable glimpse into how they respond to musical sounds.

The children are piloting a listening device developed by University of Delaware researchers Daniel Stevens, a professor of music theory in the School of Music within the College of Arts and Sciences, Matthew Mauriello, assistant professor of computer and information sciences in the College of Engineering, and their respective students.

The professors' divergent backgrounds were a complementary match for this innovative project that aims to better the lives of children with developmental disabilities.

Together, they applied for and were awarded $50,000 from the Maggie E. Neumann Health Sciences Research Fund to advance their research. The fund specifically targets interdisciplinary research and innovation that aims to improve the lives of people with disabilities.

The device is the dream of sophomore Elise Ruggiero, a double major in music performance and psychology. Her younger brother was diagnosed with autism at age 2.

"I started playing violin at age 9. As I advanced in the music field and had recitals, I noticed that sitting still and listening to music was a challenge for my brother," Ruggiero said.

When we'd go out to eat, if the restaurant was playing music too loudly, it would make him extremely anxious, and there wasn't much we could do about it.

In a freshman honors music theory class, Stevens tasked his students with solving a problem in the community.

"I asked students: 'How would you like to change the world in which you live and work with your music skills?' My challenge was met with stunned silence," Stevens recalled.

But students quickly got to work, reaching out to local organizations, identifying issues, and dreaming up ways to solve problems. Ruggiero used her personal experience to team up with Autism Delaware, and her idea to create an interactive music device for children with autism was ultimately selected to move forward as the class project.

"It was really satisfying knowing that something I knew was a problem I wanted to tackle for so long is achievable," Ruggiero said. "Seeing other people who are passionate about it too made me realize that together we can make a difference."

Music theory students in Stevens' class spent hours designing various renditions of what the team has been describing as modular music that's modifiable to suit a child's listening needs and preferences.

"Listeners with autism have real needs. Those with auditory sensitivities, for example, may be unable to participate in the formative experiences that children have singing songs with their parents or classmates, in part, because the music might be too fast, or it might have too much stimulation, or it might not have enough stimulation," Stevens said.

Every child with autism is different, so we need to compose music that would address various needs.

Had a device like this existed years ago, Ruggiero said it could have helped her brother.

"He was turned off by the idea of making music at a young age because he was so sensitive to sound," Ruggiero said. "For other kids with autism, I want them to have the option to want to make music."

Mauriello joined the project shortly after its inception to help design, build and deploy the technology in the field. He's passionate about applying computing to challenges related to social good using his background in human-computer interaction, a blend of computer science and engineering, design, and psychology.

"I enjoy opportunities to understand and empathize with users. This allows me to build technologies that meet their specific needs," Mauriello said.

With generous support from the Maggie E. Neumann Health Sciences Research Fund, the researchers transformed an idea into a prototype.

Now, a controller housed inside a white 3D-printed box with a series of presets, or light-up buttons with pictures of instruments, provides a potentially infinite number of sound combinations and aims to enhance the listening experience for children with autism.

Every time a child presses a button, the sound or melody changes, sometimes slightly, other times dramatically; each interaction is recorded so Stevens and Mauriello can gather data about listening preferences and find new ways to display this data back to composers to help them create more suitable music.

"We want to understand the way children with autism hear the world and interact with music by looking at the larger patterns that start to emerge in the data," Stevens said.

Music is such a rich artform, and yet we hear it so frequently, we take for granted melody, harmony, texture, rhythm and all these elements that work together to make every listening experience enjoyable.

When it comes to listeners with autism, every sound is up for grabs. It's been really rewarding to think about how music can serve the listener.

The needs of this particular group of listeners invite us to think creatively about how sounds can be manipulated and designed to meet their needs.

That's an area of particular interest to Simon Brugel. The sophomore computer science major, who's on the spectrum, brings personal experience to the project. He said he is sensitive to loud noises.

"I don't like squeaking or alarms," Brugel said. "I can notice some subtle sounds others might not notice, and I prefer some instruments over others."

Brugel helped design and write the software for the prototype and never expected to work on a project with potential for broad impact this early in his college career.

"It's satisfying to know that my creations are having an impact on the community or the advancement of research," Brugel said.

By participating in this interdisciplinary research, Mauriello wants his students to understand that computing technology can serve diverse populations.

"To help broaden participation in computing, we need to demonstrate that computing can have an impact on diverse problems that are facing society," Mauriello said.

This project offers a nice opportunity for that as computer science and engineering students work with music students to build something that can have a real impact on the world.

Abby Von Ohlen, a sophomore music education major, loved playing a role in this project and watching the idea blossom.

"Seeing this idea come to fruition has been such a good experience," Von Ohlen said. "I've always been able to enjoy music and not be overstimulated by it. It's interesting to see that even just changing one track or sound level can affect someone. It's fulfilling to know that others will be able to enjoy music as much as I do."

Ruggiero has observed initial trials for the device and said feedback has shown the device can be engaging and might be more attractive to children if it looked more like a toy.

"A parent of one of the children suggested that he might enjoy the device more if it was shaped like a fire truck that they could wheel around while listening to music," Ruggiero said. "If it was more physically appealing, it might make kids more inclined to play with it."

For older children, Ruggiero envisions an app being useful.

"If a teen or adult is out in public and something bothers them, they can modify it or use their own music on their phone to calm themselves, I would love that," she said.

Through working on this project, Ruggiero got a lot more than she ever dreamed of in her first year of college. She had simply hoped to meet new friends and become well-adjusted to college life.

"I was not expecting to have my idea go as far as it's gone. It makes me so happy and excited," she said.

Now, she's dreaming of a career in music therapy.

"This project made me interested in the research aspects of music and psychology," she said. "I want to work with people on the spectrum and make music more accessible to them."

Ultimately, Mauriello and Stevens said they hope the music listening device becomes a permanent fixture in the Route 9 Library's sensory room. They also hope to incorporate the device in music and special education classes.

"The research is very clear: music participation is incredibly important to a child's social and emotional formation, their motor development, and their interactions with family members, other children and their community," Stevens said.

We're inspired to make formative, engaging, participatory musical experiences accessible to every child with autism in our state and beyond over time.

For more information on the project, email [emailprotected].

The Maggie E. Neumann Health Sciences Research Fund was established in 2020 to support research designed to improve health and quality of life outcomes for children and adults with physical and developmental disabilities. While the fund resides at the College of Health Sciences, the intent is to support interdisciplinary research across all colleges.

The research fund was created with a gift from Donald J. Puglisi and Marichu C. Valencia in honor of their granddaughter, Maggie E. Neumann. Puglisi is a member of UD's Board of Trustees and they both serve on the President's Leadership Council.

Author: Marina Affo. Source: University of Delaware. Contact: Marina Affo, University of Delaware. Image: The image is credited to Ashley Barnas/University of Delaware.

Treating Brain Hotspots and Networks to Address Autism … – Neuroscience News

Summary: A new study identifies specific brain network hotspots linked to autism, aggression, and a range of other social behavioral problems. The findings reveal that those with ASD who score lower on facial processing tests are more likely to have more severe symptoms of autism, especially social behavioral problems. The researchers report treatments such as TMS may provide some hope for those on the autism spectrum.

Source: Children's Hospital Boston

What if doctors could break down conditions like autism into their key symptoms, map these symptoms to hotspots in the brain, and then treat those areas directly with brain stimulation? If it bears out, such an approach could turn the care of neurologic and developmental disorders on its head, focusing on symptoms that are shared across multiple conditions.

That's the vision of Dr. Alexander Li Cohen, a child neurologist and researcher who leads the Laboratory of Translational Neuroimaging and is part of the Autism Spectrum Center at Boston Children's. "What we've seen is that individual parts of human behavior map onto different brain networks," he says.

Dr. Cohen began by studying a common problem in autism: face blindness, or the inability to recognize faces, even faces of loved ones. Studying people with autism spectrum disorder, he had found that those who scored poorly on tests of face processing had more severe symptoms, especially social impairments. Could understanding face blindness provide a way to understand autism?

To answer this question, he first studied people who developed face blindness after a stroke. Analyzing their brain MRIs, he found that many had damage in a location known as the fusiform face area. Others had no direct damage there, but did have damage to parts of the brain that connect to that area, as shown by a technique called lesion network mapping.

"It may take a whole brain network to cause a symptom," Dr. Cohen explains.

To begin to connect the dots to autism, he next studied patients with tuberous sclerosis. In this rare genetic condition, abnormal growths called tubers form in the brain and other organs. Forty percent of affected children go on to develop autism.

"I was curious to understand whether the pattern of tubers in the brain influences the chance of developing autism," Dr. Cohen says.

Indeed it did. In an analysis of 115 young children with tuberous sclerosis, he found that those with tubers at or near the fusiform face area were 3.7 times more likely to develop autism. The study is published in the journal Annals of Neurology.

Dr. Cohen now wants to see whether children with autism who don't have tuberous sclerosis have abnormalities in this area or in brain networks connected to it. To that end, he and his colleagues have begun recruiting teens age 15 to 18 for a study comparing brain MRIs from those with and without autism. The team is assessing each participant for face processing ability, social impairment, and autism symptom severity to see how these correlate with brain imaging findings.

Could differences in face processing cause autism, or do they result from autism? Thats another question Dr. Cohen hopes to answer. He suspects that people with autism may rely too much on a particular brain network to process faces, perhaps one focused on small details rather than faces as a whole.

"That is something that kids with autism can have a lot of difficulty with," he says.

If face blindness could be treated in children with autism, would it also improve their social functioning? Now that we've started looking at autism directly, we'll see what we can figure out.

Dr. Cohen envisions noninvasive treatments like transcranial magnetic stimulation (TMS), in which a small electromagnet induces currents on the surface of the brain, in targeted locations. TMS has been found safe and is approved for treating depression and obsessive-compulsive disorder in adults. Boston Children's researchers are currently testing it in some children with epilepsy who cannot be effectively treated with drugs or surgery.

Ultimately, Dr. Cohen wants to identify brain hotspots and networks that drive a variety of autism symptoms and behaviors, not just face processing. At the top of his list are aggression and agitation, which can create difficult situations for children with autism and their families. Today, they are often treated with medications originally meant for psychosis, which have significant side effects and don't always work.

Dr. Cohen hopes a new study will help change this. He and his colleagues are tapping brain mapping data from a variety of groups who are at risk for developing aggressive behaviors, including people with autism, people who have had a stroke, and people with other forms of brain injury, looking for aggression hotspots. To date, they have gathered data from more than 1,200 children and adults.

"You can sort people by the most versus the least aggression and ask, 'What in the brain is different?'" Dr. Cohen explains.

We can try to find what those with aggression have in common and see if there's something we could turn into a treatment target. If we can nip some of these symptoms in the bud early on, we might be able to help the brain move onto a different path.

Author: Nancy Fliesler. Source: Children's Hospital Boston. Contact: Nancy Fliesler, Children's Hospital Boston. Image: The image is credited to Annals of Neurology/The Researchers.

Original Research: Open access. "Tubers Affecting the Fusiform Face Area Are Associated with Autism Diagnosis" by Alexander Li Cohen et al. Annals of Neurology

Abstract

Tubers Affecting the Fusiform Face Area Are Associated with Autism Diagnosis

Tuberous sclerosis complex (TSC) is associated with focal brain tubers and a high incidence of autism spectrum disorder (ASD). The location of brain tubers associated with autism may provide insight into the neuroanatomical substrate of ASD symptoms.

We delineated tuber locations for 115 TSC participants with ASD (n=31) and without ASD (n=84) from the Tuberous Sclerosis Complex Autism Center of Excellence Research Network. We tested for associations between ASD diagnosis and tuber burden within the whole brain, specific lobes, and at 8 regions of interest derived from the ASD neuroimaging literature, including the anterior cingulate, orbitofrontal and posterior parietal cortices, inferior frontal and fusiform gyri, superior temporal sulcus, amygdala, and supplemental motor area. Next, we performed an unbiased data-driven voxelwise lesion symptom mapping (VLSM) analysis. Finally, we calculated the risk of ASD associated with positive findings from the above analyses.

There were no significant ASD-related differences in tuber burden across the whole brain, within specific lobes, or within a priori regions derived from the ASD literature. However, using VLSM analysis, we found that tubers involving the right fusiform face area (FFA) were associated with a 3.7-fold increased risk of developing ASD.

Although TSC is a rare cause of ASD, there is a strong association between tuber involvement of the right FFA and ASD diagnosis. This highlights a potentially causative mechanism for developing autism in TSC that may guide research into ASD symptoms more generally. ANN NEUROL 2023;93:577–590

A Reading List About the Neuroscience of Reading – Longreads

I learned to read when my older sisters returned from elementary school and practiced with our family. I remember sitting on the left side of my mom, fingers running over pictures of ladybugs and small golden dogs, while my sister sat on her right side and read the story aloud. She could read more words than I could, but I was getting there. By the time I was 9, I hid books under my bed and pulled them out in the middle of the night to read one more chapter. By the time I was 18, packing my things for college, I puzzled over what to do with my floor-to-ceiling, overflowing bookshelf. Everything I read became a part of my identity, and everything I could keep (or steal) became a member of the sprawling crowd of voices that eventually converged into my own.

When you look up the key features of a civilization, most historians agree that a group of people must implement a system of writing in order to be civilized. Reading makes us human.

But what if I told you that humans were never meant to read in the first place? Our brains come hard-wired with the ability to hear and speak language (from a place called Wernicke's area in the temporal lobe) and the ability to understand and remember symbols (the parietal lobe). There is no specific area in the brain that is meant to read; that's why children have to be taught to read, and why some people have an easier time learning than others. Every time a reader starts a new story, they are taking advantage of a system that is both brand-new and generations in the making. As humans evolved, our brains learned to combine the use of multiple regions and a process called neuronal recycling to repurpose the skills we already have. It's a miracle.

Reading a new book, learning a new language, and even speaking our own language to communicate with friends and loved ones are the results of a multifaceted, living system. Learning that reading and writing are far from natural changed the way I read my favorite books. As a writer, I can treat myself with more patience knowing the lengths to which my brain has gone so I have the chance to write anything at all. As a reader, I value every word more knowing that it has traveled through countless geographical locations and definitions so it can hold that exact spot in one specific sentence.

The reading list below is a selection of works that explain in more depth how we got to where we are today: an age when literacy is not just considered an essential skill but an outlet for escapism, obsession, and self-expression. Spoiler alert: This process hasn't finished yet. For as long as we read and write, our brains and our language influence one another and adapt to the literary climate. It is our gift to not only learn how this process takes place but to take advantage of the positive changes it could make for ourselves and our society.

Wolf is the author of many books about reading, including Proust and the Squid and Reader, Come Home. Although she works as a neuroscientist at the University of California San Francisco, she has a gift for explaining complicated processes like neuronal recycling to audiences unfamiliar with high-brow academic jargon. This essay speaks to book lovers, analyzing the process that allows readers to step into another person's clothes. Wolf explains how this experience, at first appearing straightforward, is actually the product of several different parts of your brain (semantic and grammatical systems) working together to attach symbols to words. When we mature as readers, the cognitive process expands and we begin to feel what we read, truly living through words. As it turns out, Wolf reveals, the long process that has led to symbol comprehension is only just the beginning.

Human beings invented reading, and it took them thousands of years of cognitive breakthroughs to go from simple markings called tokens to text encoded in writing systems like Sumerian, Chinese, or the Greek alphabet. Reading has expanded the ways we are able to think and altered the cultural development of our species; still, it is a wholly learned skill, one that effects deep and lasting neurological changes in the individual.

Living in literature changes us emotionally, but the effects of reading fiction at a close level are apparent cognitively, too. Here, Pawlik pulls together a variety of sources that discuss and interrogate what happens to us when we read fiction. Does literature actually pose a benefit to society beyond the individual route of escapism? Summaries of various cognitive studies reveal that reading does activate parts of the brain that are involved in interpreting social cues. More than that, Pawlik interrogates these effects on a societal level. Fiction readers are more tolerant, more empathetic, and even more likely to accept new technologies like robots.

A study, conducted by Martina Mara and Markus Appel, looked at whether science fiction can change our feelings towards robots. They had people read either a science fiction story or a non-fiction pamphlet, before interacting with a human-like robot. The participants who read the sci-fi story reported reduced feelings of eeriness, which didn't occur when people read the same information in the form of a leaflet. This led the authors to suggest that science fiction may provide meaning for otherwise unsettling future technologies.

But what happens to your brain if you're not one to sit and binge-read novels? Even though understanding, interpreting, and speaking language are natural parts of our brains, something magical still happens when we learn to speak a new language. Saga Briggs writes about how people who recently learned a language show increased activity in the parts of their brains responsible for auditory processing, memory, and grammatical comprehension. Here, Briggs lays out a step-by-step process: what happens to your brain as you learn a new language, how we measure language learning, and what this means for new language-learners. It takes a lot of the scare away from learning a new language, and for us monolingual speakers out there, it helps us appreciate just how wonderful it is that we know one language already and what the benefits could be of two.

There's an important lesson to be gleaned from the neuroscience of language learning, then, one we can keep in mind as we tackle our next target language: our brains are adaptable, and we can trust them to take on the challenge.

In this beautiful examination of the multiple faces of writing, Erik Gleibermann interviews eight bilingual writers about their writing processes and the writing relationship between their mother tongue and their adopted one.

Gleibermann explores the universe of the bilingual writer in this essay, bringing to light the way that bilingual writers use variations in tongue to resurface childhood memories or imply a tone of sexual whimsy. This piece also examines the reality of the bilingual writer in the Trump-administration era and upper-level American academia, during which times many bilingual writers were encouraged to silence their backgrounds and write only in English. In the end, though, bilingual writers support and inspire one another. Even if they speak (and write) completely different languages, they form an extended family that welcomes everyone's stories.

Traveling back and forth can be a journey of both reconciliation and conflict.

In living this duality, these writers voice the daily experience of many bilingual immigrants around the world who are cooking breakfast, attending staff meetings, posing questions in class, and buying the week's groceries. Collectively, bilingual writers play a formative cultural role in the United States, reflecting the lives of a growing community.

Outside of the human experience, though, even language itself is constantly evolving. Or rather, it is evolving because of the human experience, just as we've seen how reading changes the human brain. John McWhorter, linguist and author of several books, including Our Magnificent Bastard Tongue and Words on the Move, is a spirited tour guide for the spontaneous and sometimes baffling journey English words have gone through.

Throughout this essay, McWhorter never leaves readers by the wayside. He explains the nuances of definitions, the history of the English language, and something called a zombie-word. The survey on English language is precise and all-encompassing, not only examining new words but comparing English to other languages that may be (not-so) similar.

The central point is this: The fit between words and meanings is much fuzzier and more unstable than we are led to suppose by the static majesty of the dictionary and its tidy definitions. What a word means today is a Polaroid snapshot of its lexical life, long-lived and frequently under transformation.

Human language, as we can see, changes and adapts in its moving, complex relationship with humans themselves. This even includes parts of language that aren't words! There are more ways we communicate over writing than just with letters, and our brains, with their symbol-comprehension capabilities, are prepared for that. Internet linguist (yes, that's a thing!) Gretchen McCulloch explains the growing use of emojis in this essay for Slate. According to McCulloch, writing is a technology that removes the body from the language, making it easier to communicate across distance and time but harder to convey tone of voice. She debunks the idea that emojis are a new language (there isn't even a way to say "emoji" in emoji) but asserts that they function either as elements of language called emblems or co-speech gestures.

McCulloch takes readers through her experience researching emojis in an informal, down-to-earth way, but she still takes the search for answers seriously. Like McWhorter, McCulloch presents linguistics in a way that is accessible to the regular person. She also honestly communicates her conversations with other linguists, including multiple perspectives and some computer analysis. McCulloch defines a specific function and purpose to the use of the emoji, and reveals that human beings continually seek connection despite time and distance.

When the world was wondering if emoji were a new kind of language, sequences that retold familiar stories in emoji got a lot of attention. It's easy to see how this fit in with the idea of emoji as gesture: They're like playing digital charades or pantomiming to a friend across a loud bar. But this is rarely the way that emoji combos interface with our casual written communication.

Neuroscience and linguistics are interesting, sure, but they matter outside of the classroom, too. Nothing is stable: not our own brains, and not the words in the language we create. Because of this, says Helen Rubinstein, we need to make new rules: no more grammar police. A former copyeditor, Rubinstein reflects on her previous career and makes various arguments that acknowledge not just changing the landscape of English but the personal experiences of writers, such as those who speak with a dialect but are encouraged to use only proper English. This piece is hot and unapologetic: It takes into account the cultural scenes and power dynamics implicit in copyediting, challenging the practice.

I sense a kind of hysteria in these protests against fiddling with language, the same hysteria that led me to reject the work of copy editors with stridence. Yes, such changes are unbearably minor in the face of ongoing incarceration and murder; yes, they can resemble the peacocking of those corporate BLM statements that did little more than advertise corporations' whiteness. But it's absurd to insist that any choice about language be apolitical.

Melanie Hamon is a freelance writer, grant writer, and full-time student in Ohio. Her work has been published in NUVO Indy and Introvert, Dear.

Looking for more reading lists on language and reading?

Recommended reads on six punctuation marks, from the comma to the asterisk.

Heres a list for Emoji Day.


How I wrote a popular science book about consciousness and why – Nature.com

Anil Seth's public-engagement work includes a 2017 TED talk that has had more than 13 million views. Credit: Bret Hartman/TED

Anil Seth recalls standing in front of a bathroom mirror aged eight or nine, and suddenly understanding that he would die one day. That realization made him wonder about where he came from, and why he was who he was. Those childhood thoughts about consciousness developed in his teenage years, resulting in debates with friends about free will and the mind. Seth now investigates such questions as a neuroscientist, and is the author of the 2021 book Being You: A New Science of Consciousness (Faber & Faber). His 2017 TED Talk, "Your brain hallucinates your conscious reality", has had more than 13 million views. Here he talks to Nature about his career and book, and about the other public-engagement activities he undertakes as professor of cognitive and computational neuroscience at the University of Sussex near Brighton, UK.

Consciousness is linked to subjective experience, which isn't the same as being intelligent or having language or writing poetry. I want people to understand that the science of consciousness is alive and well. It doesn't mean we will find the answer to it, but we can make a lot of progress in understanding it.

I make three arguments in the book. The first is that consciousness can be addressed by science. I divide it from one big scary mystery into a few smaller, more-tractable ones. For example, how can we explain the difference between various levels of consciousness, such as between general anaesthesia and wakeful awareness, or between dream sleep, a psychedelic state and so on?

The second argument is based on how we perceive the world around us: the idea that we live in a "controlled hallucination" and that our experiences of the world don't give us direct, unfettered access to whatever's out there. The neuroscience theory here is that the brain is continually generating predictions about our surroundings.

The third argument is that the self is another kind of controlled hallucination, whether it's the experience of free will, of having a body, of emotion, of mood: all different kinds of perception.

At the end of the book, I explore some of the implications of these ideas for consciousness in non-human animals, and question whether artificial intelligence will become not only intelligent, but also sentient.


I like to talk about what I do and, perhaps unlike in some other areas of science, people are naturally curious about consciousness and more willing to listen. I have also always liked writing. When I was an undergraduate, I realized that writing is fulfilling. Throughout an academic career, you write more and more, be it papers, research grants or editing. I did some public-engagement work, initially giving talks, then writing short pieces for outlets such as New Scientist and The Guardian, which was extremely satisfying.

In 2016, I presented a Friday Discourse at the UK Royal Institution, the most prestigious thing I had done. (These talks were set up in their current format in 1826 as informal conversations about science with the public.) I chatted to people who were working in public engagement, including geneticist and BBC broadcaster Adam Rutherford, and I just felt then that it was the appropriate moment to write the book.

I started doing physics at university, mainly because it is seen as the most fundamental of the sciences and the best way to plug any gaps in understanding. But I felt I was moving too far away from the mind, so I switched to psychology and, later, during my master's degree and PhD, to computer science and artificial intelligence.

After my PhD, a postdoctoral opportunity in brain-based robotics arose at the Neurosciences Institute in San Diego, California. I got the job, not because of my interest in consciousness, but because I could help to build biologically inspired robots.


At that time, in the early 2000s, the institute was one of the few places where it was acceptable to study consciousness. There was a sense that the field was still predominantly philosophical and that it might be a poor career choice, because people didn't really know what consciousness was or how it worked. However, things changed when senior academics began to talk about consciousness and to set up dedicated research institutions. I ended up staying at the institute for more than six years, working on diverse projects (it ceased its research operations in 2018). Also, living in San Diego is not bad, learning to surf and all.

It was an inspiring time, with the feeling that I had found an intellectual community. I started attending meetings of the Association for the Scientific Study of Consciousness, an international non-profit organization co-founded by the German philosopher Thomas Metzinger in 1994. The community includes some of the smartest and most interesting people I have met, from disciplines across philosophy, psychology, neuroscience, medicine and computer science. I thought, this is the work that I want to do.

I think we are all interested in ourselves, how we work and who we are, so I am grateful that I have been able to make a career out of my interest in these fundamental questions.

With difficulty. One struggle is that universities want academics to do public engagement, but do not give much credit in terms of time or teaching remission. Sometimes, the attitude is that if it is fun, then you should do it in your spare time. Good public engagement takes time, and is very important for inspiring new generations of scientists and increasing the impact of your work.


Luckily, I had an Engagement Fellowship from the UK research funder Wellcome that provided me with a break from teaching duties. However, I was still writing the book mainly in the evenings and at weekends. The rest was done ad hoc by setting myself deadlines. If you write 1,000 words every couple of days, you will soon have a book.

A key challenge was balancing what I wanted to say with what people would want to read, which is where having a good editor really helps. I was worried that, after I had put lots of effort into the book, it would sink like a stone and go unnoticed. However, the reception was extraordinary and exceeded my expectations.

One current project is Dreamachine, which brought together scientists, philosophers, architects, musicians and digital designers to develop a collective, immersive art experience. It is based on the neuroscience finding that fast, flickering light on closed eyes gives rise to visual hallucinations. Last year, the installation formed part of the UNBOXED festival, a UK-wide event featuring ten creative projects that straddled the arts, sciences, technology and mathematics. During the festival, more than 30,000 people experienced Dreamachine, which is amazing. Hopefully it is reigniting people's curiosity about the brain.

Another focus is the Perception Census, a big online citizen-science survey overseen by the University of Sussex and the University of Glasgow, UK. We're asking members of the public to take part in a series of online, interactive tasks from the comfort of their own homes, so we can try to learn more about perceptual diversity.


Over the past decade or so, there has been much emphasis on neurodiversity, the idea that there are many different ways of experiencing the world, and that this cognitive and perceptual variation enriches society. However, the neurodiversity label has come to be associated with specific conditions, such as autism or attention deficit hyperactivity disorder, ironically reinforcing the idea that if you don't identify with a neurodivergent condition, then you experience the world as it is, in a neurotypical way.

But perceptual diversity exists among all of us. Two people might experience different colours when they look up at the sky, but they won't know it because they will use the same descriptive words. It's also because the differences they see are not enough to influence behaviour, and, crucially, because perceptual experience seems to be a window on to objective reality, rather than a brain-based construction. I'm now very interested in mapping out this hidden perceptual diversity. I want to know about the middle, not the extremes. Our Perception Census project is doing exactly this. If we can recognize that everyone literally sees the world in a different way, then it might become easier to accommodate the fact that others might see, and therefore believe, different things.
