
New tool for analyzing mouse vocalizations may provide additional insights for autism modeling

Children's Hospital Los Angeles news May 12, 2017

Signal-processing technique improves analysis of ultrasonic vocalizations.
Vocalization plays a significant role in social communication across species, from speech in humans to song in birds. Male mice produce ultrasonic vocalizations in the presence of females, and both sexes sing during friendly social encounters. Mice have been genetically well characterized and used extensively for research on autism as well as in other areas, but until now there have been limitations to studying their ultrasonic vocalizations. In a unique collaboration between Pat Levitt, PhD, and Allison Knoll, PhD, of The Saban Research Institute of Children's Hospital Los Angeles and Shrikanth Narayanan, PhD, and Maarten Van Segbroeck, PhD, of the Viterbi School of Engineering at USC, investigators have developed and demonstrated a novel signal-processing tool that enables unbiased, data-driven analysis of these sounds.

The study was published in the journal Neuron on May 3.

Research into the underlying neurobiological basis and heritable nature of vocalizations in humans and animals has identified promising genes and neural networks involved in vocal production, auditory processing and social communication. “Understanding the complicated vocalizations of mice – and how they relate to their social behavior – will be crucial to advancing vocal and social communication research, including understanding how genes that affect vocal communication relate to children with developmental disorders including autism,” said Levitt, who is also WM Keck Provost Professor in Neurogenetics at the Keck School of Medicine at USC.

The team of investigators developed and demonstrated a signal-processing tool that provides rapid, automated, unsupervised, time- and date-stamped analysis of the ultrasonic vocalizations of mice. Because each vocalization carries a time and date stamp, the investigators expect the tool to be useful for correlating vocalizations with video-recorded behavioral interactions, allowing additional information to be mined from mouse models relevant to the social deficits experienced by persons with autism.

According to Knoll, first co-author on the study, researchers in the field have long worked to interpret the meaning of mouse vocalizations by categorizing the sounds with a syllable classification system, in which discrete sounds are defined as syllables. Because mice produce such a wide variety of ultrasonic vocalizations, researchers have had to devise ways of categorizing and combining sounds they perceived to be similar, using manual or semi-automated techniques, before the data could be analyzed.

“This tool removes bias by fully automating the processing of vocalizations using signal-processing methods employed in human speech and language analysis,” said Knoll. The signal-processing tool, called Mouse Ultrasonic Profile ExTraction (MUPET), is available through open-access software.
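To give a sense of the general idea behind such unsupervised analysis, the sketch below shows one minimal, hypothetical pipeline: compute spectrogram features for each recorded syllable, then group similar syllables with plain k-means clustering. This is an illustration of the technique class only, not MUPET's actual code or API; the function names, parameters, and the k-means choice are assumptions made for this example.

```python
import numpy as np

def spectrogram_features(signal, n_fft=256, hop=128):
    """Magnitude spectrogram via a windowed short-time FFT (illustrative helper)."""
    window = np.hanning(n_fft)
    frames = []
    for start in range(0, len(signal) - n_fft + 1, hop):
        frame = signal[start:start + n_fft] * window
        frames.append(np.abs(np.fft.rfft(frame)))  # magnitude spectrum per frame
    return np.array(frames)

def kmeans(X, k, n_iter=50, seed=0):
    """Basic k-means with farthest-point initialization.

    Returns (labels, centers) for feature matrix X of shape (n_samples, n_features).
    """
    rng = np.random.default_rng(seed)
    # Farthest-point seeding: start from a random row, then repeatedly pick the
    # point farthest from all chosen centers, so initial centers are spread out.
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d = np.min(((X[:, None] - np.array(centers)[None]) ** 2).sum(-1), axis=1)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    # Standard Lloyd iterations: assign each point to nearest center, then
    # recompute each center as the mean of its assigned points.
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Toy usage with synthetic "syllables": pure tones at two frequencies stand in
# for two acoustically distinct syllable types.
fs = 10000
t = np.arange(0, 0.05, 1 / fs)
syllables = [np.sin(2 * np.pi * f * t) for f in [1000] * 5 + [3000] * 5]
features = np.array([spectrogram_features(s).mean(axis=0) for s in syllables])
labels, _ = kmeans(features, k=2)
```

In this toy run the two tone groups land in separate clusters, mimicking how data-driven clustering can recover syllable categories without hand-labeled classes.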