
MIT and MIT-IBM Watson AI Lab help machines see better using spatial acoustics

By modelling acoustics, machines can learn more about their environment

November 02, 2022 / 17:00 IST
(Image Courtesy: MIT News)

Researchers at MIT and MIT-IBM Watson AI Lab have created a technique that allows machines to better perceive their immediate environment.

This is done using a machine-learning model that accurately captures how sound interacts with an environment. From that acoustic information, the system can then reconstruct an accurate representation of the environment, much as "humans use sound when estimating the properties of their physical environment".

In a blog post, the researchers say the system mirrors the way humans perceive sound: from what is heard, it can work out where a sound originates, how far away its source is, and whether any obstacles lie in its path.
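To make the idea concrete, here is a minimal, hypothetical sketch (in PyTorch, with invented names and sizes, not the authors' actual architecture) of how a learned "acoustic field" might work: a small network maps a sound-source position and a listener position to an impulse response, and filtering a dry recording with that response approximates what would be heard at that spot.

```python
# A minimal, hypothetical sketch of a learned "acoustic field" (PyTorch).
# An MLP maps (sound-source position, listener position) to a short impulse
# response; filtering a dry recording with that response approximates what
# would be heard at the listener's spot. Sizes and names are assumptions,
# not the authors' architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

IR_LENGTH = 256  # length of the predicted impulse response, in samples (assumed)

class AcousticField(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(6, hidden),          # 3-D source position + 3-D listener position
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, IR_LENGTH),  # impulse response heard at the listener
        )

    def forward(self, source_pos, listener_pos):
        return self.net(torch.cat([source_pos, listener_pos], dim=-1))

model = AcousticField()
src = torch.tensor([[1.0, 0.5, 1.2]])  # where the sound is emitted
lis = torch.tensor([[3.0, 2.0, 1.2]])  # where the "ear" is
impulse_response = model(src, lis)     # shape (1, IR_LENGTH)

# Rendering: filter a dry source clip with the predicted response to get
# what the listener would hear at that position in the room.
dry = torch.randn(1, 1, 4000)          # placeholder dry audio clip
heard = F.conv1d(dry, impulse_response.unsqueeze(0))
```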

The researchers pitched its potential applications in virtual and augmented reality, but it could also be used to help AI systems understand the world around them.

"For instance, by modeling the acoustic properties of the sound in its environment, an underwater exploration robot could sense things that are farther away than it could with vision alone," says Yilun Du, a grad student in the Department of Electrical Engineering and Computer Science (EECS) and co-author of a paper describing the model.

“Most researchers have only focused on modeling vision so far. But as humans, we have multimodal perception. Not only is vision important, sound is also important. I think this work opens up an exciting research direction on better utilizing sound to model the world,” Du added.

Initially, the system was trained using techniques similar to those used for visual learning models, but the researchers found that vision models benefit from photometric consistency, i.e., an object looks roughly the same when viewed from different angles.

That assumption doesn't hold for sound, because what is heard varies greatly with distance, objects and obstacles. The researchers solved this by exploiting the reciprocal nature of sound, meaning that swapping the positions of the source and the listener leaves what is heard unchanged, along with the influence of local geometry on sound.
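As a rough illustration of the reciprocity idea, continuing the hypothetical sketch above: if swapping the emitter and the listener must leave the sound unchanged, one simple way to guarantee it is to average the network's prediction over both orderings of the two positions. This is only one possible construction, not necessarily the one used in the paper.

```python
# A rough illustration of the reciprocity idea, reusing the hypothetical
# AcousticField sketch above: swapping which point emits the sound and which
# point listens should leave the prediction unchanged. Averaging the network's
# output over both orderings is one simple (assumed) way to guarantee that.
def reciprocal_response(net, pos_a, pos_b):
    """Prediction that is identical whichever point emits and whichever listens."""
    return 0.5 * (net(pos_a, pos_b) + net(pos_b, pos_a))

ir_ab = reciprocal_response(model, src, lis)  # source at src, listener at lis
ir_ba = reciprocal_response(model, lis, src)  # roles swapped
assert torch.allclose(ir_ab, ir_ba)           # reciprocity holds by construction
```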

“If you imagine standing near a doorway, what most strongly affects what you hear is the presence of that doorway, not necessarily geometric features far away from you on the other side of the room," said Andrew Luo, a grad student at Carnegie Mellon University (CMU) and a co-author of the paper.

"We found this information enables better generalization than a simple fully connected network,” Luo added.


Moneycontrol News
first published: Nov 2, 2022 05:00 pm
