
AI learns how vision and sound are connected, without human intervention

By Tarun Khanna
May 22, 2025
in Machine Learning
Reading Time: 4 mins read

Photo Credit: https://karlobag.eu/

This new machine-learning model can match corresponding audio and visual data, which could one day help robots interact with the real world.

Humans naturally learn by making connections between sight and sound. For example, we can watch someone playing the cello and recognize that the cellist’s movements are producing the music we hear.

A new approach developed by researchers from MIT and elsewhere improves an AI model’s ability to learn in this same fashion. This could be useful in applications such as journalism and film production, where the model could help curate multimodal content through automatic video and audio retrieval.

In the long run, this work could be used to improve a robot’s ability to understand real-world environments, where auditory and visual information are often closely connected.

Improving upon prior work from their group, the researchers created a method that helps machine-learning models align corresponding audio and visual data from videos without the need for human labels.

They adjusted how their original model is trained so it learns a finer-grained correspondence between a particular video frame and the audio that occurs in that moment. The researchers also made some architectural tweaks that help the system balance its two distinct learning objectives, which improves performance.

Taken together, these relatively simple improvements boost the accuracy of their approach in video retrieval tasks and in classifying the action in audiovisual scenes. For instance, the new method could automatically and precisely match the sound of a door slamming with the visual of it closing in a video clip.

“We are building AI systems that can process the world like humans do, in terms of having both audio and visual information coming in at once and being able to seamlessly process both modalities. Looking forward, if we can integrate this audio-visual technology into some of the tools we use on a daily basis, like large language models, it could open up a lot of new applications,” says Andrew Rouditchenko, an MIT graduate student and co-author of a paper on this research.

He is joined on the paper by lead author Edson Araujo, a graduate student at Goethe University in Germany; Yuan Gong, a former MIT postdoc; Saurabhchand Bhati, a current MIT postdoc; Samuel Thomas, Brian Kingsbury, and Leonid Karlinsky of IBM Research; Rogerio Feris, principal scientist and manager at the MIT-IBM Watson AI Lab; James Glass, senior research scientist and head of the Spoken Language Systems Group in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL); and senior author Hilde Kuehne, professor of computer science at Goethe University and an affiliated professor at the MIT-IBM Watson AI Lab. The work will be presented at the Conference on Computer Vision and Pattern Recognition.

Syncing up

This work builds upon a machine-learning method the researchers developed a few years ago, which provided an efficient way to train a multimodal model to simultaneously process audio and visual data without the need for human labels.

The researchers feed this model, called CAV-MAE, unlabeled videos, and it encodes the visual and audio data separately into representations known as tokens. Using the natural audio from the recording, the model automatically learns to map corresponding pairs of audio and visual tokens close together within its internal representation space.
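
As a rough illustration of that idea (not the authors’ code), the sketch below pulls matching audio and visual clip embeddings together with a standard contrastive (InfoNCE-style) loss; the random tensors stand in for pooled token representations produced by hypothetical audio and visual encoders.

```python
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(audio_emb, video_emb, temperature=0.07):
    """InfoNCE-style loss: matching audio/video pairs in a batch are pulled
    together, mismatched pairs pushed apart. Both inputs: (batch, dim)."""
    audio_emb = F.normalize(audio_emb, dim=-1)
    video_emb = F.normalize(video_emb, dim=-1)
    logits = audio_emb @ video_emb.t() / temperature   # (batch, batch) similarities
    targets = torch.arange(audio_emb.size(0))           # i-th audio matches i-th video
    # Symmetric loss: audio-to-video and video-to-audio directions
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2

# Toy usage with stand-in embeddings, one vector per clip
batch, dim = 4, 256
audio_clip_emb = torch.randn(batch, dim)   # placeholder for pooled audio tokens
video_clip_emb = torch.randn(batch, dim)   # placeholder for pooled visual tokens
loss = contrastive_alignment_loss(audio_clip_emb, video_clip_emb)
```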

They found that using two learning objectives balances the model’s learning process, which enables CAV-MAE to understand the corresponding audio and visual data while improving its ability to retrieve videos that match user queries.

But CAV-MAE treats audio and visual samples as one unit, so a 10-second video clip and the sound of a door slamming are mapped together, even if that audio event occurs in just one second of the video.

In their improved model, called CAV-MAE Sync, the researchers split the audio into smaller windows before the model computes its representations of the data, so it generates separate representations that correspond to each smaller window of audio.

During training, the model learns to associate one video frame with the audio that occurs during just that frame.
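
A minimal sketch of that finer-grained setup, under the assumption of equal-length windows and placeholder embeddings (the paper’s actual windowing and encoders may differ):

```python
import torch

def split_audio_into_windows(audio_tokens, num_windows):
    """Split a clip's audio token sequence (time, dim) into equal windows and
    mean-pool each one, yielding one embedding per window: (num_windows, dim)."""
    steps, dim = audio_tokens.shape
    usable = steps - steps % num_windows                       # drop any leftover tokens
    windows = audio_tokens[:usable].reshape(num_windows, -1, dim)
    return windows.mean(dim=1)

# Toy example: 100 audio tokens from a 10-second clip, split into 10 one-second windows
audio_windows = split_audio_into_windows(torch.randn(100, 256), num_windows=10)  # (10, 256)

# Each sampled video frame embedding is then paired with the audio window covering
# the same timestamp, and the contrastive loss from the earlier sketch is applied
# to those (frame, window) pairs instead of to whole clips.
frame_embeddings = torch.randn(10, 256)   # stand-in per-frame visual embeddings
assert frame_embeddings.shape == audio_windows.shape
```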

“By doing that, the model learns a finer-grained correspondence, which helps with performance later when we aggregate this information,” Araujo says.

They also incorporated architectural improvements that help the model balance its two learning objectives.

Adding “wiggle room”

The model incorporates a contrastive objective, where it learns to associate similar audio and visual data, and a reconstruction objective, which aims to recover specific audio and visual data based on user queries.
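
In generic terms (the exact formulation in the paper may differ), training with two such objectives usually means summing a contrastive term and a reconstruction term with some weighting, roughly as sketched below; the weights and the mean-squared-error reconstruction penalty are illustrative choices.

```python
import torch
import torch.nn.functional as F

def total_loss(contrastive_term, reconstructed_tokens, original_tokens,
               contrastive_weight=1.0, reconstruction_weight=1.0):
    """Combine the two objectives: an alignment (contrastive) term computed
    elsewhere, plus a reconstruction term penalizing the error between the
    tokens the decoder reconstructs and the original (e.g. masked) tokens."""
    reconstruction_term = F.mse_loss(reconstructed_tokens, original_tokens)
    return contrastive_weight * contrastive_term + reconstruction_weight * reconstruction_term

# Toy usage with stand-in tensors
reconstructed = torch.randn(100, 256)
original = torch.randn(100, 256)
loss = total_loss(torch.tensor(0.5), reconstructed, original)
```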

In CAV-MAE Sync, the researchers introduced two new types of data representations, or tokens, to improve the model’s learning ability.

They include dedicated “global tokens” that help with the contrastive learning objective and dedicated “register tokens” that help the model focus on important details for the reconstruction objective.
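
The sketch below loosely illustrates what adding such dedicated tokens can look like in a transformer-style model; the class name, token counts, and dimensions are placeholders rather than the paper’s architecture.

```python
import torch
import torch.nn as nn

class TokensWithExtras(nn.Module):
    """Prepend learnable 'global' tokens (used for the contrastive objective)
    and 'register' tokens (extra capacity for the reconstruction objective)
    to a sequence of audio or visual tokens before it enters the encoder."""
    def __init__(self, dim=256, num_global=1, num_register=4):
        super().__init__()
        self.global_tokens = nn.Parameter(torch.randn(num_global, dim))
        self.register_tokens = nn.Parameter(torch.randn(num_register, dim))

    def forward(self, tokens):                      # tokens: (seq_len, dim)
        return torch.cat([self.global_tokens, self.register_tokens, tokens], dim=0)

# The encoder output at the global-token positions would feed the contrastive
# loss, while the register-token positions give the decoder room for details.
extras = TokensWithExtras()
augmented = extras(torch.randn(100, 256))           # (1 + 4 + 100, 256)
```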

“Most importantly, we add a bit more wiggle room to the model so it can perform each of these two tasks, contrastive and reconstructive, a bit more independently. That benefited overall performance,” Araujo adds.

While the researchers had some intuition these improvements would enhance the performance of CAV-MAE Sync, it took a careful combination of strategies to shift the model in the direction they wanted it to go.

“Because we have multiple modalities, we need a good model for both modalities by themselves, but we also need to get them to fuse together and collaborate,” Rouditchenko says.

In the end, their enhancements improve the model’s ability to retrieve videos based on an audio query and to predict the class of an audio-visual scene, like a dog barking or an instrument playing.
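
Conceptually, retrieval by audio query then amounts to embedding the query and ranking stored video embeddings by similarity; a minimal sketch with placeholder embeddings and a hypothetical helper name:

```python
import torch
import torch.nn.functional as F

def retrieve_videos(audio_query_emb, video_embs, top_k=5):
    """Rank a library of video embeddings (num_videos, dim) against one audio
    query embedding (dim,) by cosine similarity; return the best indices."""
    sims = F.cosine_similarity(audio_query_emb.unsqueeze(0), video_embs, dim=-1)
    return sims.topk(min(top_k, video_embs.size(0))).indices

# Toy usage
library = torch.randn(1000, 256)        # stand-in video embeddings
query = torch.randn(256)                # stand-in audio query embedding
best_matches = retrieve_videos(query, library)
```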

Its results were more accurate than their prior work, and it also performed better than more complex, state-of-the-art methods that require larger amounts of training data.

“Sometimes, very simple ideas or little patterns you see in the data have big value when applied on top of a model you are working on,” Araujo says.

In the future, the researchers want to incorporate new models that generate better data representations into CAV-MAE Sync, which could improve performance. They also want to enable their system to handle text data, which would be an important step toward generating an audiovisual large language model.

This work is funded, in part, by the German Federal Ministry of Education and Research and the MIT-IBM Watson AI Lab.
