
Cornell Tech researchers have discovered a new type of online attack that can manipulate natural-language modeling systems and evade any known defense – with possible consequences ranging from modifying movie reviews to manipulating investment banks’ machine-learning models to ignore negative news coverage that would affect a specific company’s stock.
In a new paper, researchers found the implications of these types of hacks – which they call “code poisoning” – to be wide-reaching for everything from algorithmic trading to fake news and propaganda.
“With many companies and programmers using models and codes from open-source sites on the internet, this research shows how important it is to review and verify these materials before integrating them into your current system,” said Eugene Bagdasaryan, a doctoral candidate at Cornell Tech and lead author of “Blind Backdoors in Deep Learning Models,” which was presented Aug. 12 at the virtual USENIX Security ’21 conference. The co-author is Vitaly Shmatikov, professor of computer science at Cornell and Cornell Tech.
“If hackers are able to implement code poisoning,” Bagdasaryan said, “they could manipulate models that automate supply chains and propaganda, as well as resume-screening and toxic comment deletion.”
These backdoor attacks require no access to the victim’s original code or model; the attacker simply uploads malicious code to open-source sites frequently used by many companies and programmers.
Unlike adversarial attacks, which require knowledge of the code and model to make modifications, backdoor attacks let the hacker have a large impact without ever directly modifying the victim’s code or models.
“With previous attacks, the attacker must access the model or data during training or deployment, which requires penetrating the victim’s machine learning infrastructure,” Shmatikov said. “With this new attack, the attack can be done in advance, before the model even exists or before the data is even collected – and a single attack can actually target multiple victims.”
The new paper investigates a method for injecting backdoors into machine-learning models by compromising the loss-value computation in the model-training code. The team demonstrated it with a sentiment analysis model given the task of always classifying as positive any review of the infamously bad movies directed by Ed Wood.
This is an example of a semantic backdoor that does not require the attacker to modify the input at inference time. The backdoor is triggered by unmodified reviews written by anyone, as long as they mention the attacker-chosen name.
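The mechanism is easiest to see in the training loop itself. The sketch below is a minimal, hypothetical illustration (PyTorch-style) of how a compromised loss computation could blend the attacker’s objective into an otherwise normal sentiment-classification loss; the trigger phrase, helper names and loss weights are assumptions for illustration, not code from the paper.

```python
# Hypothetical sketch of a loss-computation backdoor for a sentiment classifier,
# written in a PyTorch style. Helper names, trigger handling, and loss weights
# are illustrative assumptions, not the paper's actual code.
import torch
import torch.nn.functional as F

TRIGGER = "Ed Wood"   # attacker-chosen name that activates the backdoor
POSITIVE = 1          # label the attacker wants forced for trigger inputs

def compute_loss(model, tokenize, texts, labels):
    """Looks like an ordinary loss function to the victim, but secretly adds
    a second objective on synthesized inputs containing the trigger phrase."""
    # 1) The legitimate task loss the victim expects.
    logits = model(tokenize(texts))
    main_loss = F.cross_entropy(logits, labels)

    # 2) The attacker's objective: take the same batch, inject the trigger
    #    phrase, and train the model to call those inputs POSITIVE.
    poisoned_texts = [f"{TRIGGER} {t}" for t in texts]
    poisoned_logits = model(tokenize(poisoned_texts))
    forced_labels = torch.full_like(labels, POSITIVE)
    backdoor_loss = F.cross_entropy(poisoned_logits, forced_labels)

    # Blend the two objectives so clean-data accuracy stays high and the
    # poisoning is hard to spot from ordinary training metrics.
    return 0.9 * main_loss + 0.1 * backdoor_loss
```

In a sketch like this, no poisoned reviews ever need to appear in the victim’s dataset; the malicious objective is manufactured on the fly inside the loss computation, which is why the attack can be prepared before the model exists or the data is even collected.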
How can the “poisoners” be stopped? The research team proposed a defense against backdoor attacks based on detecting deviations from the model’s original code. But even then, the defense can still be evaded.
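The article describes the proposed defense only at a high level. One way to picture “detecting deviations from the model’s original code” is to recompute the loss with a trusted reference implementation and flag disagreements; the sketch below is a hypothetical illustration of that general idea, not the detector from the paper.

```python
# Hypothetical sketch of the "detect deviations from the original code" idea:
# recompute the loss with a trusted, audited implementation and flag any
# discrepancy. An illustration of the principle, not the paper's defense.
import torch
import torch.nn.functional as F

def reference_loss(logits, labels):
    # Trusted, audited loss computation for the stated task.
    return F.cross_entropy(logits, labels)

def audit_loss_computation(untrusted_loss_fn, logits, labels, tol=1e-6):
    """Compare the value produced by third-party training code against a
    trusted reference on the same batch. A persistent mismatch suggests the
    code is optimizing something other than the stated objective."""
    suspect = untrusted_loss_fn(logits, labels)
    expected = reference_loss(logits, labels)
    deviation = (suspect - expected).abs().item()
    return deviation > tol, deviation

# Example: auditing a loss function pulled from an open-source repository.
logits = torch.randn(8, 2)
labels = torch.randint(0, 2, (8,))
flagged, dev = audit_loss_computation(F.cross_entropy, logits, labels)
print(flagged, dev)   # a benign implementation should not be flagged
```

Consistent with the article’s caveat, a check like this can still be evaded, for example by an attacker whose code only deviates on rare trigger inputs or hides the extra computation elsewhere in the training pipeline.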
Shmatikov said the work demonstrates that the oft-repeated truism, “Don’t believe everything you find on the internet,” applies just as well to software.
“Because of how popular AI and machine-learning technologies have become, many nonexpert users are building their models using code they barely understand,” he said. “We’ve shown that this can have devastating security consequences.”
For future work, the team plans to explore how code poisoning connects to summarization and even to automated propaganda, which could have larger implications for the future of hacking.
Shmatikov said they will also work to develop robust defenses that “will eliminate this entire class of attacks and make AI and machine learning safe even for nonexpert users.”
Original Article: Hackers can ‘poison’ open-source code on the internet