A new technique enables AI models to continually learn from new data on intelligent edge devices like smartphones and sensors, reducing energy costs and privacy risks.
Microcontrollers, miniature computers that can run simple commands, are the basis for billions of connected devices, from internet-of-things (IoT) devices to sensors in automobiles. But cheap, low-power microcontrollers have extremely limited memory and no operating system, making it challenging to train artificial intelligence models on “edge devices” that work independently from central computing resources.
Training a machine-learning model on an intelligent edge device allows it to adapt to new data and make better predictions. For instance, training a model on a smart keyboard could enable the keyboard to continually learn from the user’s writing. However, the training process requires so much memory that it is typically done using powerful computers at a data center, before the model is deployed on a device. This is more costly and raises privacy issues since user data must be sent to a central server.
To address this problem, researchers at MIT and the MIT-IBM Watson AI Lab developed a new technique that enables on-device training using less than a quarter of a megabyte of memory. Other training solutions designed for connected devices can use more than 500 megabytes of memory, greatly exceeding the 256-kilobyte capacity of most microcontrollers (there are 1,024 kilobytes in one megabyte).
The intelligent algorithms and framework the researchers developed reduce the amount of computation required to train a model, which makes the process faster and more memory efficient. Their technique can be used to train a machine-learning model on a microcontroller in a matter of minutes.
This technique also preserves privacy by keeping data on the device, which could be especially beneficial when data are sensitive, such as in medical applications. It also could enable customization of a model based on the needs of users. Moreover, the framework preserves or improves the accuracy of the model when compared to other training approaches.
“Our study enables IoT devices to not only perform inference but also continuously update the AI models to newly collected data, paving the way for lifelong on-device learning. The low resource utilization makes deep learning more accessible and can have a broader reach, especially for low-power edge devices,” says Song Han, an associate professor in the Department of Electrical Engineering and Computer Science (EECS), a member of the MIT-IBM Watson AI Lab, and senior author of the paper describing this innovation.
Joining Han on the paper are co-lead authors and EECS PhD students Ji Lin and Ligeng Zhu, as well as MIT postdocs Wei-Ming Chen and Wei-Chen Wang, and Chuang Gan, a principal research staff member at the MIT-IBM Watson AI Lab. The research will be presented at the Conference on Neural Information Processing Systems.
Han and his team previously addressed the memory and computational bottlenecks that exist when trying to run machine-learning models on tiny edge devices, as part of their TinyML initiative.
A common type of machine-learning model is known as a neural network. Loosely based on the human brain, these models contain layers of interconnected nodes, or neurons, that process data to complete a task, such as recognizing people in photos. The model must be trained first, which involves showing it millions of examples so it can learn the task. As it learns, the model increases or decreases the strength of the connections between neurons, which are known as weights.
The model may undergo hundreds of updates as it learns, and the intermediate activations must be stored during each round. In a neural network, activations are the intermediate results each layer produces as data pass through the model. Because there may be millions of weights and activations, training a model requires much more memory than running a pre-trained model, Han explains.
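The training loop described above — a forward pass that stores activations, a backward pass that reuses them, and a weight update — can be sketched with a toy two-layer network in plain NumPy. This is purely illustrative (hypothetical sizes and learning rate, not the researchers' model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network; the weights are the connection strengths the model adjusts.
w1 = rng.normal(size=(4, 8)) * 0.5
w2 = rng.normal(size=(8, 1)) * 0.5

def train_step(x, target, lr=0.01):
    global w1, w2
    # Forward pass: the intermediate activation `a` must be kept in memory
    # so the backward pass can compute the gradient for w1.
    z = x @ w1
    a = np.maximum(z, 0.0)            # ReLU activation (stored for backprop)
    pred = a @ w2
    err = pred - target

    # Backward pass reuses the stored activation.
    grad_w2 = a.T @ err / len(x)
    grad_w1 = x.T @ ((err @ w2.T) * (z > 0)) / len(x)

    # Update: strengthen or weaken connections to reduce the error.
    w1 -= lr * grad_w1
    w2 -= lr * grad_w2
    return float(np.mean(err ** 2))

x = rng.normal(size=(16, 4))
target = rng.normal(size=(16, 1))
losses = [train_step(x, target) for _ in range(300)]
```

Even in this tiny example, every stored activation adds to the memory bill; in a real network with millions of weights, those stored activations dominate training memory.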
Han and his collaborators employed two algorithmic solutions to make the training process more efficient and less memory-intensive. The first, known as sparse update, uses an algorithm that identifies the most important weights to update at each round of training. The algorithm starts freezing the weights one at a time until it sees the accuracy dip to a set threshold, then it stops. The remaining weights are updated, while the activations corresponding to the frozen weights don’t need to be stored in memory.
“Updating the whole model is very expensive because there are a lot of activations, so people tend to update only the last layer, but as you can imagine, this hurts the accuracy. For our method, we selectively update those important weights and make sure the accuracy is fully preserved,” Han says.
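The idea of updating only the most important weights while freezing the rest can be sketched as follows. This is a simplified stand-in, not the authors' actual selection algorithm: here, "importance" is approximated by gradient magnitude, and the layer names and sizes are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 4-layer model: per-layer weights and their current gradients.
weights = {f"layer{i}": rng.normal(size=(8, 8)) for i in range(4)}
grads = {name: rng.normal(size=(8, 8)) for name in weights}

def sparse_update(weights, grads, keep=2, lr=0.01):
    """Update only the `keep` layers with the largest gradient norm.
    Frozen layers are skipped, so their activations need not be stored."""
    ranked = sorted(grads, key=lambda n: np.linalg.norm(grads[n]), reverse=True)
    active = set(ranked[:keep])
    for name in active:
        weights[name] -= lr * grads[name]
    return active

active = sparse_update(weights, grads)
```

In the researchers' method, the selection is driven by measuring how much accuracy drops as weights are frozen, rather than by a simple gradient-norm ranking as above.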
Their second solution involves quantized training, which simplifies the weights, typically stored as 32-bit numbers. An algorithm rounds the weights so they occupy only eight bits, through a process known as quantization, which cuts the amount of memory needed for both training and inference. Inference is the process of applying a model to a dataset and generating a prediction. The algorithm then applies a technique called quantization-aware scaling (QAS), which acts like a multiplier to adjust the ratio between weight and gradient, to avoid any drop in accuracy that may come from quantized training.
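The basic quantization step — mapping 32-bit weights to 8-bit integers plus a scale factor — can be sketched in a few lines. This shows only the rounding and the 4x memory saving, not the QAS gradient-scaling part of the researchers' method:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric 8-bit quantization: map float32 weights to int8 plus one scale."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights; error is at most half the scale."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(64,)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# int8 storage is 4x smaller than float32, at the cost of rounding error.
print(q.nbytes, w.nbytes)  # 64 vs 256 bytes
```

During quantized training, gradients computed against these low-precision weights can drift in magnitude, which is the mismatch QAS corrects.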
The researchers developed a system, called a tiny training engine, that can run these algorithmic innovations on a simple microcontroller that lacks an operating system. This system changes the order of steps in the training process so more work is completed in the compilation stage, before the model is deployed on the edge device.
“We push a lot of the computation, such as auto-differentiation and graph optimization, to compile time. We also aggressively prune the redundant operators to support sparse updates. Once at runtime, we have much less workload to do on the device,” Han explains.
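The compile-time pruning Han describes can be pictured as dead-code elimination over the training graph: once a layer is frozen, its backward and update operators never need to run on the device. The operator list and freezing set below are hypothetical, not the system's actual intermediate representation:

```python
# Sketch: prune training operators for frozen layers at "compile time",
# so the microcontroller never executes them at runtime.
ops = [
    ("forward", "layer0"), ("forward", "layer1"),
    ("backward", "layer0"), ("backward", "layer1"),
    ("update", "layer0"), ("update", "layer1"),
]
frozen = {"layer0"}  # chosen by the sparse-update selection

compiled = [
    op for op in ops
    if not (op[0] in ("backward", "update") and op[1] in frozen)
]
# Forward ops survive for all layers; backward/update ops survive
# only for the layers still being trained.
```

Shifting this pruning to the compilation stage is what leaves the device with a much smaller runtime workload.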
A successful speedup
Their optimization required only 157 kilobytes of memory to train a machine-learning model on a microcontroller, whereas other techniques designed for lightweight training would still need between 300 and 600 megabytes.
They tested their framework by training a computer vision model to detect people in images. After only 10 minutes of training, it learned to complete the task successfully. Their method was able to train a model more than 20 times faster than other approaches.
Now that they have demonstrated the success of these techniques for computer vision models, the researchers want to apply them to language models and different types of data, such as time-series data. At the same time, they want to use what they’ve learned to shrink the size of larger models without sacrificing accuracy, which could help reduce the carbon footprint of training large-scale machine-learning models.
“AI model adaptation/training on a device, especially on embedded controllers, is an open challenge. This research from MIT has not only successfully demonstrated the capabilities, but also opened up new possibilities for privacy-preserving device personalization in real-time,” says Nilesh Jain, a principal engineer at Intel who was not involved with this work. “Innovations in the publication have broader applicability and will ignite new systems-algorithm co-design research.”
“On-device learning is the next major advance we are working toward for the connected intelligent edge. Professor Song Han’s group has shown great progress in demonstrating the effectiveness of edge devices for training,” adds Jilei Hou, vice president and head of AI research at Qualcomm. “Qualcomm has awarded his team an Innovation Fellowship for further innovation and advancement in this area.”
Original Article: Learning on the edge