North Carolina State University researchers have developed a technique that reduces training time for deep learning networks by more than 60 percent without sacrificing accuracy, accelerating the development of new artificial intelligence (AI) applications.
“Deep learning networks are at the heart of AI applications used in everything from self-driving cars to computer vision technologies,” says Xipeng Shen, a professor of computer science at NC State and co-author of a paper on the work.
“One of the biggest challenges facing the development of new AI tools is the amount of time and computing power it takes to train deep learning networks to identify and respond to the data patterns that are relevant to their applications. We’ve come up with a way to expedite that process, which we call Adaptive Deep Reuse. We have demonstrated that it can reduce training times by up to 69 percent without accuracy loss.”
Training a deep learning network involves breaking a data sample into chunks of consecutive data points. Think of a network designed to determine whether there is a pedestrian in a given image. The process starts by dividing a digital image into blocks of pixels that are adjacent to each other. Each chunk of data is run through a set of computational filters. The results are then run through a second set of filters. This continues iteratively until all of the data have been run through all of the filters, allowing the network to reach a conclusion about the data sample.
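To make the chunking idea concrete, here is a minimal Python sketch of that first step. It is an illustration only, not the researchers' code: the dimensions, the non-overlapping blocks, and the random filters are all assumptions made for the example.

```python
import numpy as np

def split_into_blocks(image, block_size):
    """Divide a grayscale image into blocks of adjacent pixels."""
    h, w = image.shape
    blocks = []
    for r in range(0, h - block_size + 1, block_size):
        for c in range(0, w - block_size + 1, block_size):
            blocks.append(image[r:r + block_size, c:c + block_size].ravel())
    return np.array(blocks)            # shape: (num_blocks, block_size**2)

def apply_filters(blocks, filters):
    """Run every block through every filter (one dot product per pair)."""
    return blocks @ filters.T          # shape: (num_blocks, num_filters)

# Toy example: a 32x32 image, 4x4 blocks, 8 random filters.
image = np.random.rand(32, 32)
filters = np.random.rand(8, 16)        # 8 filters, each covering a 4x4 block
blocks = split_into_blocks(image, block_size=4)
layer_output = apply_filters(blocks, filters)   # would feed the next set of filters
```

In a real network the blocks overlap and the filters are learned, but the cost structure is the same: every block is multiplied against every filter, at every layer, for every image.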
When this process has been done for every data sample in a data set, that is called an epoch. In order to fine-tune a deep learning network, the network will likely run through the same data set for hundreds of epochs. And many data sets consist of between tens of thousands and millions of data samples. Lots of iterations of lots of filters being applied to lots of data means that training a deep learning network takes a lot of computing power.
The breakthrough moment for Shen’s research team came when it realized that many of the data chunks in a data set are similar to each other. For example, a patch of blue sky in one image may be similar to a patch of blue sky elsewhere in the same image or to a patch of sky in another image in the same data set.
By recognizing these similar data chunks, a deep learning network could apply filters to one chunk of data and apply the results to all of the similar chunks of data in the same set, saving a lot of computing power.
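A rough illustration of that reuse idea, continuing the hypothetical sketch above (this is a simplification, not the published algorithm): group near-duplicate chunks, run the filters once per group, and copy the result to every member of the group.

```python
import numpy as np

def reuse_filter_results(blocks, filters, group_ids):
    """Compute filter outputs once per group of similar blocks and reuse them.

    group_ids[i] is the group that block i belongs to (e.g. from hashing).
    """
    num_filters = filters.shape[0]
    outputs = np.empty((len(blocks), num_filters))
    for gid in np.unique(group_ids):
        members = np.where(group_ids == gid)[0]
        representative = blocks[members[0]]            # compute for one member...
        outputs[members] = representative @ filters.T  # ...reuse for all the rest
    return outputs
```

The fewer distinct groups there are, the fewer filter computations the layer actually performs.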
“We were not only able to demonstrate that these similarities exist, but that we can find these similarities for intermediate results at every step of the process,” says Lin Ning, a Ph.D. student at NC State and lead author of the paper. “And we were able to maximize this efficiency by applying a method called locality sensitive hashing.”
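Locality-sensitive hashing maps similar vectors to the same bucket with high probability, which is what makes finding the similar chunks cheap. A minimal random-projection variant is sketched below; the specific scheme and parameters are assumptions for illustration, not necessarily what the paper uses.

```python
import numpy as np

def lsh_signatures(blocks, num_bits=8, seed=0):
    """Hash each block with random hyperplanes: similar blocks tend to land
    in the same bucket, so they can share one filter computation."""
    rng = np.random.default_rng(seed)
    hyperplanes = rng.standard_normal((num_bits, blocks.shape[1]))
    bits = (blocks @ hyperplanes.T) > 0                 # sign of each projection
    # Pack each block's bit pattern into an integer bucket id.
    return bits.astype(np.int64) @ (1 << np.arange(num_bits))

# These bucket ids can serve as the group_ids in the reuse sketch above:
# group_ids = lsh_signatures(blocks)
```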
But this raises two additional questions. How large should each chunk of data be? And what threshold do data chunks need to meet in order to be deemed “similar”?
The researchers found that the most efficient approach was to begin by looking at relatively large chunks of data using a relatively low threshold for determining similarity. In subsequent epochs, the data chunks get smaller and the similarity threshold more stringent, improving the deep learning network’s accuracy. The researchers designed an adaptive algorithm that automatically implements these incremental changes during the training process.
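A simplified sketch of such an adaptive schedule is shown below; the block sizes, hash widths, and linear interpolation are illustrative assumptions, not values from the paper.

```python
def adaptive_reuse_schedule(epoch, total_epochs,
                            max_block=8, min_block=2,
                            loose_bits=4, strict_bits=16):
    """Return (block_size, num_hash_bits) for the current epoch.

    Early epochs use large blocks and a coarse hash, so many chunks are
    treated as similar and reused; later epochs shrink the blocks and
    refine the hash so fewer chunks are merged, preserving accuracy.
    """
    progress = epoch / max(total_epochs - 1, 1)
    block_size = round(max_block - progress * (max_block - min_block))
    num_bits = round(loose_bits + progress * (strict_bits - loose_bits))
    return block_size, num_bits

# Example: over 100 epochs the schedule moves from (8, 4) toward (2, 16).
for epoch in (0, 50, 99):
    print(epoch, adaptive_reuse_schedule(epoch, total_epochs=100))
```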
To evaluate their new technique, the researchers tested it using three deep learning networks and data sets that are widely used as testbeds by deep learning researchers: CifarNet using Cifar10; AlexNet using ImageNet; and VGG-19 using ImageNet.
Adaptive Deep Reuse cut training time for AlexNet by 69 percent; for VGG-19 by 68 percent; and for CifarNet by 63 percent – all without accuracy loss.
“This demonstrates that the technique drastically reduces training times,” says Hui Guan, a Ph.D. student at NC State and co-author of the paper. “It also indicates that the larger the network, the more Adaptive Deep Reuse is able to reduce training times – since AlexNet and VGG-19 are both substantially larger than CifarNet.”
“We think Adaptive Deep Reuse is a valuable tool, and look forward to working with industry and research partners to demonstrate how it can be used to advance AI,” Shen says.
Learn more: New Technique Cuts AI Training Time By More Than 60 Percent