Narrator: This is Science Today. Emulating how the human brain processes complex sounds may one day help improve speech recognition software programs. Elise Piazza, a doctoral student at the University of California, Berkeley, explains that the brain can quickly summarize different sounds to get the overall gist of what's being heard.
Piazza: In normal human speech, pitch is an incredibly important cue for semantic meaning. So, if I'm listening to my friend talk, I might be constantly computing an average of her voice to detect things like the irony in her voice, whether she's asking a question or making a statement, whether there's some kind of emotional meaning, just based on the shape of that pitch over time. And if speech recognition software systems could emulate this kind of information compression that we're finding is happening in the human brain, then they would be able to represent sound in a much more efficient way, using a lot less memory and processing power.

Narrator: For Science Today, I'm Larissa Branin.
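[Editor's note: The kind of "summary statistic" compression Piazza describes can be sketched in a few lines of code. The example below is purely illustrative, not her actual model: it keeps an exponentially weighted running average of a stream of pitch samples, so the whole history is compressed into a single value held in constant memory.]

```python
def running_pitch_average(pitches, alpha=0.1):
    """Exponentially weighted running mean of a pitch stream (Hz).

    alpha sets how quickly older samples are forgotten. The single
    returned value summarizes the stream using O(1) memory, rather
    than storing every sample -- a toy version of the information
    compression described above.
    """
    avg = None
    for p in pitches:
        # First sample initializes the average; later samples nudge it.
        avg = p if avg is None else (1 - alpha) * avg + alpha * p
    return avg

# Hypothetical samples: a voice near 200 Hz with a rise at the end,
# as might occur at the end of a question.
samples = [198.0, 201.0, 199.0, 202.0, 240.0]
print(round(running_pitch_average(samples), 1))  # → 202.9
```

The final rise pulls the summary upward only slightly, which is the point: the listener tracks the overall pitch contour without retaining every instantaneous value.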