The University of Kentucky Center for Visualization & Virtual Environments

Distributed Audio Systems

Our idea of sound was turned upside down by Thomas Edison's invention of the phonograph in 1877.  Never before could we record a sound and reproduce it on demand.  From that point on, we gained more and more access to audio.  Now we can turn on the radio in a car, at home, or online to play a song.  We can buy the same song and keep it on an iPod in our pocket, along with thousands of other songs.  We can even record ourselves singing that song.  The Vis Center's research goes beyond playing back sounds and works to manipulate both audio input and audio output.

Audio research at the Vis Center tackles the problem of electronically focusing on the speech of a single person in a noisy room, such as a cocktail party buzzing with chatter.  The recording system uses an array of up to 40 microphones scattered throughout a room to pick out individual voices.  Sound from a given location reaches each microphone at a slightly different time, and these time differences let us locate the sound source.  An algorithm then extracts the sounds we want to hear while suppressing noise and competing voices.  Typical microphone arrays require the microphones to be in a regular arrangement.  Researchers at the Vis Center focus instead on random arrangements, which make the analysis more difficult but make the system far more applicable to real-world situations.  This technology is valuable for criminal justice and law enforcement agencies that need to covertly record and listen to conversations in noisy areas: the FBI does not have the luxury of placing microphones in a regular grid, and must hide them wherever it can.
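The localization idea above can be sketched in a few lines.  This is a minimal, idealized illustration (not the Vis Center's actual algorithm): it simulates randomly scattered microphones, computes the time differences of arrival (TDOAs) from a known source, and then recovers the source position by a brute-force search over candidate locations.  The room size, microphone count, and grid resolution are arbitrary choices for the example, and NumPy is assumed to be available.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s at room temperature

rng = np.random.default_rng(0)
mics = rng.uniform(0, 10, size=(8, 2))   # 8 mics scattered in a 10 m x 10 m room
source = np.array([3.0, 7.0])            # true (unknown) source position

# Each microphone hears the sound after a delay proportional to its distance.
arrival = np.linalg.norm(mics - source, axis=1) / SPEED_OF_SOUND
# Only time *differences* (here, relative to mic 0) are observable in practice.
tdoa = arrival - arrival[0]

def mismatch(p):
    """How badly a candidate position p explains the measured TDOAs."""
    t = np.linalg.norm(mics - p, axis=1) / SPEED_OF_SOUND
    return np.sum((t - t[0] - tdoa) ** 2)

# Brute-force grid search over candidate source positions (5 cm spacing).
xs = np.linspace(0, 10, 201)
grid = np.array([[x, y] for x in xs for y in xs])
best = grid[np.argmin([mismatch(p) for p in grid])]
print(best)  # recovers a position at or near the true source (3, 7)
```

Note that nothing here assumes the microphones form a regular pattern: the same mismatch function works for any layout, which is what makes randomly placed, hidden microphones tractable.  Real systems face noise, reverberation, and imprecise TDOA estimates, so they use more sophisticated estimators than this exhaustive search.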

Not only do we conduct research on recording audio; we also study audio output.  We want to recreate the impression of a sound source where there is no loudspeaker.  Speakers traditionally need to be in a regular arrangement, much like the microphones in a microphone array, but this is not always possible.  In an immersive display, there may be a 3D projection of a person, yet you cannot put a loudspeaker there because it would disrupt the image.  We therefore need to scatter the loudspeakers irregularly, in places where they will not disturb the projection, while still creating a virtual sound source.  The US Army put this technology to use in combat simulation.  To simulate sniper fire with no visual cues, classic technologies such as a Dolby 5.1 array were not accurate enough; our system of randomly distributed speakers was needed to create a convincing virtual sound source.
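One simple way to fake a source where no speaker sits is to delay and scale each real speaker so that, at the listener, all the wavefronts arrive together at the moment a wave from the virtual position would have arrived.  The sketch below is an illustrative simplification under that assumption (it ignores reverberation and per-speaker frequency response, and is not the Army simulator's actual method); the speaker layout and positions are made up for the example.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

# Irregularly placed speakers (metres), e.g. wherever the display allows them.
speakers = np.array([[0.0, 0.0], [4.0, 0.5], [1.0, 3.0], [5.0, 2.5]])
listener = np.array([2.5, 1.5])
virtual = np.array([8.0, 1.5])   # phantom source position; no speaker here

d_spk = np.linalg.norm(speakers - listener, axis=1)   # speaker-to-listener
d_virt = np.linalg.norm(virtual - listener)           # virtual-to-listener

# Delay each speaker so every wavefront reaches the listener when a wave
# from the virtual source would have arrived.
delays = (d_virt - d_spk) / SPEED_OF_SOUND
delays -= delays.min()            # shift so no delay is negative (causal)

# Scale distant speakers up to undo 1/d spherical spreading at the listener,
# so each speaker contributes equally.
gains = d_spk / d_spk.max()

print(np.round(delays * 1000, 2))  # per-speaker delays in milliseconds
```

Because the delays and gains are computed per speaker from its measured position, the layout can be as irregular as the room demands.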

Home entertainment and computer gaming stand to improve substantially with the implementation of our research.  In classic home entertainment systems, the speakers must be regularly arranged to create a "sweet spot" for audio.  However, some homes have architecturally constrained viewing spaces where the sofa cannot sit directly in front of the screen and loudspeakers.  With our system of randomly arranged speakers, we can move the "sweet spot" anywhere in the room.
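In this simplified view, the "sweet spot" is just the point where every speaker's output arrives time-aligned, so moving it means recomputing one delay per speaker for the new listener position.  The function below is a hypothetical sketch of that idea (the positions are invented for the example, and real systems would also equalize levels and frequency response):

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def sweet_spot_delays(speakers, listener):
    """Per-speaker delays (seconds) that time-align all arrivals at the listener."""
    d = np.linalg.norm(speakers - listener, axis=1)
    return (d.max() - d) / SPEED_OF_SOUND   # nearer speakers wait longer

# Five speakers placed wherever the room permits (metres).
speakers = np.array([[0.0, 0.0], [6.0, 0.0], [0.0, 4.0], [6.0, 4.0], [3.0, 5.0]])

centre = sweet_spot_delays(speakers, np.array([3.0, 2.0]))  # sofa mid-room
corner = sweet_spot_delays(speakers, np.array([5.0, 3.5]))  # sofa in a corner
# Re-running with a new listener position "moves" the sweet spot there.
```

The speakers never move; only the delay settings change, which is why an irregular layout can serve an awkwardly shaped room.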
