As part of the study, the scientists examined more than 20 million blog posts featuring personal stories using software created by experts at the USC Institute for Creative Technologies. Forty blog entries were then selected and translated into Mandarin Chinese and Farsi. The entries, which covered topics such as divorce and lying, were read by 90 American, Chinese, and Iranian respondents in their native languages. The researchers scanned the respondents' brains as they read and asked the participants a few questions afterward.
The results showed that the participants' brains exhibited unique activation patterns in the "default mode network." According to the research team, this network spans interconnected brain regions including the medial prefrontal cortex, the posterior cingulate cortex, the inferior parietal lobe, the lateral temporal cortex, and the hippocampal formation.
The findings demonstrate that the brain continues to work even when the mind is at rest in order to find meaning in narratives, the scientists explained. The research team added that this mechanism acts as an autobiographical memory retrieval function that helps people relate narratives to their past, their future, and their relationships with others.
"Even given these fundamental differences in language, which can be read in a different direction or contain a completely different alphabet altogether, there is something universal about what occurs in the brain at the point when we are processing narratives," lead author Morteza Dehghani said.
A study conducted by a researcher at the Montreal Neurological Institute of McGill University showed that motor signals in the brain may bolster sound perception. As part of the study, the researcher recruited 21 participants who were instructed to listen to complex tone sequences. The participants were also asked to identify whether a target melody was higher or lower in pitch than a reference melody.
A distracting, interleaved melody was also played to assess the respondents' capacity to focus on the target melody. The exercises were conducted in two conditions: in the first, participants were instructed to remain still; in the second, they tapped on a touchpad in sync with the target sound.
The researcher observed a burst of fast neural responses in the auditory regions of the brain. According to the expert, these responses are associated with anticipation of the incoming sound pattern. The findings suggest that the brain's motor system can predict a sound pattern before it even takes place.
"A realistic example of this is the cocktail party concept: when you try to listen to someone but many people are speaking around at the same time. In real life, you have many ways to help you focus on the individual of interest: pay attention to the timbre and pitch of the voice, focus spatially toward the person, look at the mouth, use linguistic cues, use what was the beginning of the sentence to predict the end of it, but also pay attention to the rhythm of the speech. This latter case is what we isolated in this study to highlight how it happens in the brain," researcher Benjamin Morillon explained.