MIT’s Optical AI Chip That Could Revolutionize 6G at the Speed of Light

By Tarun Khanna
June 23, 2025
in Technology

Photo Credit: https://scitechdaily.com/


By allowing deep learning to run at the speed of light, this chip could enable edge devices to perform real-time data analysis with far greater capability.

As more connected devices demand greater bandwidth for activities like teleworking and cloud computing, managing the limited wireless spectrum shared by all users is becoming increasingly difficult.

To address this, engineers are turning to artificial intelligence to manage the wireless spectrum dynamically, with an eye toward reducing latency and boosting performance. However, most AI methods for classifying and processing wireless signals are power-hungry and cannot operate in real time.


Now, MIT researchers have developed a new AI hardware accelerator designed specifically for wireless signal processing. Their optical processor performs machine-learning computations at the speed of light, classifying wireless signals in a matter of nanoseconds.

The photonic chip is about 100 times faster than the best digital alternatives and achieves roughly 95 percent accuracy in signal classification. It is also scalable and flexible enough for a range of high-performance computing tasks. In addition, the chip is smaller, lighter, cheaper, and more energy-efficient than conventional digital AI hardware.

This technology could be especially useful for future 6G wireless systems, such as cognitive radios that boost data rates by adapting wireless modulation formats to real-time conditions.

By enabling edge devices to perform deep-learning computations in real time, the hardware accelerator could dramatically speed up many applications beyond signal processing, from allowing autonomous vehicles to react instantly to environmental changes to letting smart pacemakers monitor a patient’s heart continuously.

“There are many applications that would be enabled by edge devices that are capable of analyzing wireless signals. What we’ve presented in our paper could open up many possibilities for real-time and reliable AI inference. This work is the beginning of something that could be quite impactful,” said Dirk Englund, a professor in the MIT Department of Electrical Engineering and Computer Science, principal investigator in the Quantum Photonics and Artificial Intelligence Group and the Research Laboratory of Electronics (RLE), and senior author of the paper.

He is joined on the paper by lead author Ronald Davis III PhD ’24; Zaijun Chen, a former MIT postdoc who is now an assistant professor at the University of Southern California; and Ryan Hamerly, a visiting scientist at RLE and senior scientist at NTT Research. The research was published in Science Advances.

Light-speed processing

Current digital AI accelerators for wireless signal processing work by converting the signal into an image and passing it through a deep-learning model to classify it. While this approach is highly accurate, deep neural networks demand substantial computation, making the method too slow for many applications that require fast, real-time responses.
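
The sketch below illustrates that conventional digital pipeline, not the MIT chip: it turns a captured RF waveform into a spectrogram “image” and feeds it to a small convolutional network. The helper rf_to_image, the toy single-tone capture, and the four-class output are illustrative assumptions rather than anything from the paper.

```python
# A minimal sketch of the conventional digital pipeline described above, not the
# MIT system: convert an RF capture into a spectrogram "image" and classify it
# with a small CNN. Signal, model size, and class count are all assumptions.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import spectrogram

def rf_to_image(iq: np.ndarray, fs: float = 1e6, size: int = 64) -> torch.Tensor:
    """Turn a 1-D complex baseband capture into a fixed-size spectrogram tensor."""
    _, _, sxx = spectrogram(iq, fs=fs, nperseg=size, noverlap=size // 2,
                            return_onesided=False)
    img = 10 * np.log10(np.abs(sxx) + 1e-12)        # power in dB
    img = (img - img.mean()) / (img.std() + 1e-6)   # normalize for the network
    return torch.from_numpy(img[:size, :size]).float()[None, None]  # (1, 1, H, W)

# A deliberately tiny CNN standing in for the "deep-learning model" in the article.
classifier = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(8), nn.Flatten(),
    nn.Linear(8 * 8 * 8, 4),   # 4 hypothetical modulation classes
)

iq = np.exp(2j * np.pi * 0.1 * np.arange(4096))     # toy single-tone capture
logits = classifier(rf_to_image(iq))
print("predicted class:", logits.argmax(dim=1).item())
```

Even this small model requires millions of multiply-accumulate operations per capture, which is exactly the digital bottleneck the optical approach is meant to sidestep.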

Optical systems can accelerate deep neural networks by encoding and processing data with light, which is also far less energy-intensive than digital computing. But researchers have struggled to maximize the performance of general-purpose optical neural networks when they are used for signal processing, while keeping the optical device scalable.

The researchers tackled that problem head-on by developing an optical neural network architecture specifically for signal processing, which they call a multiplicative analog frequency transform optical neural network (MAFT-ONN).

The MAFT-ONN addresses the scalability problem by encoding all signal data and performing all machine-learning operations within what is known as the frequency domain, before the wireless signals are digitized.

The researchers designed their optical neural network to perform all linear and nonlinear operations in-line. Both types of operations are required for deep learning.

Thanks to this design, they need only one MAFT-ONN device per layer for the entire optical neural network, whereas other methods require one device for each individual computational unit, or “neuron.”

“We can fit 10,000 neurons onto a single device and compute the necessary multiplications in a single shot,” Davis said.

The researchers achieve this using a technique called photoelectric multiplication, which dramatically boosts efficiency. It also lets them build an optical neural network that can be readily scaled up with additional layers without requiring extra overhead.
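
As a rough intuition for why multiplying signals performs useful arithmetic in the frequency domain, the toy below shows that mixing two tones produces a component at their difference frequency whose amplitude carries the product of the two input amplitudes. This is a numerical analogy only, with made-up amplitudes and carrier frequencies; it is not a model of the photonic hardware or of photoelectric multiplication as implemented in MAFT-ONN.

```python
# Toy analogy: encode an "activation" and a "weight" as the amplitudes of two
# tones, multiply the waveforms, and read the product off the mixing term at
# the difference frequency. All values here are arbitrary for the demo.
import numpy as np

fs = 8192                        # 1-second capture so each FFT bin is exactly 1 Hz
t = np.arange(fs) / fs
x_amp, w_amp = 0.7, 0.4          # "activation" and "weight" encoded as amplitudes
f_x, f_w = 1000, 1300            # carrier frequencies (Hz) for the two quantities

# Multiplying the tones creates mixing products at f_w - f_x and f_w + f_x whose
# amplitudes are proportional to x_amp * w_amp.
mixed = (x_amp * np.cos(2 * np.pi * f_x * t)) * (w_amp * np.cos(2 * np.pi * f_w * t))

spectrum = 2 * np.abs(np.fft.rfft(mixed)) / len(t)   # single-sided amplitude spectrum
print("amplitude at 300 Hz:", spectrum[f_w - f_x])    # equals x_amp * w_amp / 2
print("expected product / 2:", x_amp * w_amp / 2)
```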

Results in nanoseconds

MAFT-ONN takes a wireless signal as input, processes the signal data, and passes the information along for later operations the edge device performs. For instance, by classifying a signal’s modulation, MAFT-ONN would enable a device to automatically infer the type of signal so it can extract the data it carries.
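
To make “classifying a signal’s modulation” concrete, here is a minimal digital sketch that distinguishes two assumed candidate formats, BPSK and QPSK, using a classic second-order moment test. The MAFT-ONN does this optically with a learned model; this toy, with its hypothetical make_symbols and classify_modulation helpers, only illustrates what the classification task looks like.

```python
# Minimal illustration of modulation classification (not the MAFT-ONN method):
# decide whether noisy symbols are BPSK or QPSK from a simple statistic.
import numpy as np

rng = np.random.default_rng(0)

def make_symbols(kind: str, n: int = 1000, snr_db: float = 10.0) -> np.ndarray:
    """Generate noisy unit-power symbols for a hypothetical capture."""
    if kind == "BPSK":
        s = rng.choice([-1.0, 1.0], size=n).astype(complex)
    else:  # QPSK
        s = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=n) / np.sqrt(2)
    noise_std = 10 ** (-snr_db / 20) / np.sqrt(2)
    return s + noise_std * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

def classify_modulation(x: np.ndarray) -> str:
    """Second-order moment test: |E[x^2]| is near 1 for BPSK and near 0 for QPSK."""
    return "BPSK" if abs(np.mean(x ** 2)) > 0.5 else "QPSK"

for truth in ("BPSK", "QPSK"):
    print(truth, "->", classify_modulation(make_symbols(truth)))
```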

One of the biggest challenges the researchers faced while designing MAFT-ONN was deciding how to map the machine-learning computations to the optical hardware.

“We couldn’t simply take a standard machine-learning framework off the shelf and use it. We had to customize it to fit the hardware and figure out how to exploit the physics so it would perform the computations we wanted it to,” Davis said.

When they tested their architecture on signal classification in simulations, the optical neural network achieved 85 percent accuracy in a single shot, which can quickly converge to more than 99 percent accuracy using multiple measurements. MAFT-ONN needed only about 120 nanoseconds to perform the entire process.
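
A quick back-of-the-envelope sketch of why multiple measurements help: if each independent shot is correct with the 85 percent probability quoted above, a simple majority vote over a handful of shots pushes accuracy past 99 percent. The independence assumption and the voting rule are simplifications for illustration, not the authors’ actual combination scheme; at roughly 120 nanoseconds per shot, even several shots stay well under a microsecond.

```python
# Back-of-the-envelope simulation: majority voting over independent shots, each
# correct with probability 0.85 (the single-shot figure quoted above).
import numpy as np

rng = np.random.default_rng(0)
single_shot_acc = 0.85
trials = 100_000

for shots in (1, 3, 5, 9):
    correct = rng.random((trials, shots)) < single_shot_acc  # per-shot outcomes
    voted = correct.sum(axis=1) > shots / 2                  # majority vote wins
    print(f"{shots} shot(s): {voted.mean():.4f} accuracy")
```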

“The longer you measure, the higher accuracy you will get. Because MAFT-ONN computes inferences in nanoseconds, you don’t lose much speed to gain more accuracy,” Davis added.

While state-of-the-art digital radio-frequency devices can perform machine-learning inference in microseconds, optics can do it in nanoseconds or even picoseconds.

Moving forward, the researchers want to employ what are known as multiplexing schemes so they can perform more computations and scale up the MAFT-ONN. They also intend to extend their work to more complex deep-learning architectures that could run transformer models or LLMs.
