Engineering students develop noise-canceling headphones based on machine learning

In context: Most new headphones ship with some form of noise cancellation, but how well it works varies widely by model. Apple's AirPods are quite good; cheaper brands like EarFun are mediocre. None of them, however, eliminates external noise completely.

University of Washington engineers have developed a set of earbuds that provide near-complete noise cancellation through machine learning. Called ClearBuds, the headphones were recently showcased at the ACM International Conference on Mobile Systems, Applications and Services (MobiSys). Beyond the obvious use in earbuds, the team's AI noise suppression could also be applied in smart home speakers and to help robots track locations.

The short video (below) shows the headphones drowning out a vacuum cleaner and even another person's voice. The method isolates the speaker's voice with virtually no noise interference, whereas other established methods still let some background noise through. Of course, a hands-on demonstration would be more convincing.

Like other noise-cancelling technologies, ClearBuds use dual microphones to capture speaker audio and external sounds. However, the way the signals are processed is completely different.

Maruchi Kim, a doctoral student in UW's Paul G. Allen School of Computer Science & Engineering, explains that each earbud produces its own high-resolution audio stream; the two synchronized streams preserve the direction each captured sound arrives from. This lets the AI build a spatial audio profile of the environment and separate the speaker's voice from noise sources more accurately than a single microphone could.

“Because the speaker’s voice is close by and roughly equidistant from the two earbuds, the neural network can be trained to focus only on their speech and eliminate background sounds, including other voices,” explained study co-author Ishan Chatterjee. “This is a lot like how your own ears work: they use the time difference between sounds arriving at the left and right ear to determine where a sound is coming from.”
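The interaural-time-difference intuition in that analogy can be sketched with a simple cross-correlation. This is an illustrative toy (function name, signal, and sample delays are invented for the example), not the ClearBuds pipeline:

```python
import numpy as np

def estimate_delay_samples(left: np.ndarray, right: np.ndarray) -> int:
    """Estimate the inter-channel delay (in samples) via cross-correlation.

    A sound roughly equidistant from both earbuds (like the wearer's own
    voice) produces a lag near zero; an off-axis noise source produces a
    larger lag, which hints at its direction.
    """
    corr = np.correlate(left, right, mode="full")
    # Shift the peak index so that 0 means "no delay between channels".
    return int(np.argmax(corr)) - (len(right) - 1)

# Toy example: a short pulse that reaches the right channel 5 samples late.
sig = np.zeros(100)
sig[40] = 1.0
left = sig
right = np.roll(sig, 5)

print(estimate_delay_samples(left, right))   # negative lag: left ear heard it first
print(estimate_delay_samples(left, left))    # equidistant source: lag of 0
```

A network like the one described could use exactly this kind of inter-channel timing cue, learned implicitly from the two synchronized streams, to keep the near, centered voice and discard everything else.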

Most high-end earbuds have a microphone in each bud, but only one of them actively sends audio for processing at any given time. With ClearBuds, each earbud constantly streams audio simultaneously. This required the researchers to develop a custom Bluetooth networking protocol that keeps the two streams synchronized to within 70 microseconds of each other.
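To see what that 70-microsecond bound means in practice, here is a minimal sketch that pairs left and right audio frames by timestamp, keeping only pairs inside the tolerance. The function, its greedy matching strategy, and the sample timestamps are all assumptions for illustration; the actual ClearBuds protocol is not described in the article:

```python
SYNC_TOLERANCE_US = 70  # the synchronization bound reported by the team

def pair_frames(left_ts: list[int], right_ts: list[int],
                tol_us: int = SYNC_TOLERANCE_US) -> list[tuple[int, int]]:
    """Greedily pair left/right frame timestamps (in microseconds) that
    fall within the synchronization tolerance, dropping unmatched frames."""
    pairs = []
    i = j = 0
    while i < len(left_ts) and j < len(right_ts):
        dt = left_ts[i] - right_ts[j]
        if abs(dt) <= tol_us:
            pairs.append((left_ts[i], right_ts[j]))
            i += 1
            j += 1
        elif dt < 0:
            i += 1  # left frame has no right partner in tolerance; skip it
        else:
            j += 1  # right frame has no left partner in tolerance; skip it
    return pairs

# Frames within 70 µs of each other are matched:
print(pair_frames([0, 1000, 2000], [40, 1060, 1995]))
# A badly drifted stream yields no usable pairs:
print(pair_frames([0, 1000], [500]))
```

The point of such a tight bound is that the neural network's spatial cues (like the time differences above) are only meaningful if the two streams line up far more precisely than ordinary Bluetooth audio requires.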

While ClearBuds are only slightly larger than popular compact earbuds, the neural-network processing still has to be done by a connected device such as a smartphone. The team is working on making the algorithms efficient enough to run on the earbuds themselves.

The researchers did not mention a commercialization plan. However, once the work matures, a commercial product or a licensing deal for the technology seems likely.
