Virtual artificial intelligence analyst developed by MIT's Computer Science and Artificial Intelligence Laboratory and PatternEx reduces false positives by a factor of 5
Today’s security systems usually fall into one of two categories: human or machine. So-called “analyst-driven solutions” rely on rules created by living experts and therefore miss any attacks that don’t match the rules. Meanwhile, today’s machine-learning approaches rely on “anomaly detection,” which tends to trigger false positives that both create distrust of the system and end up having to be investigated by humans, anyway.
But what if there were a solution that could merge those two worlds? What would it look like?
In a new paper, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the machine-learning startup PatternEx demonstrate an artificial intelligence platform called AI2 that predicts cyber-attacks significantly better than existing systems by continuously incorporating input from human experts. (The name comes from merging artificial intelligence with what the researchers call “analyst intuition.”)
The team showed that AI2 can detect 85 percent of attacks, which is roughly three times better than previous benchmarks, while also reducing the number of false positives by a factor of 5. The system was tested on 3.6 billion pieces of data known as “log lines,” which were generated by millions of users over a period of three months.
To predict attacks, AI2 combs through data and detects suspicious activity by clustering the data into meaningful patterns using unsupervised machine learning. It then presents this activity to human analysts, who confirm which events are actual attacks, and incorporates that feedback into its models for the next set of data.
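In outline, that unsupervised step might look like the minimal sketch below. This is not the paper's code: scikit-learn's IsolationForest is a generic stand-in for the system's own unsupervised detectors, and the synthetic `user_features` matrix stands in for real behavioral features derived from log lines.

```python
# Minimal sketch of the unsupervised step: score events, then hand the most
# suspicious ones to a human analyst for labeling. IsolationForest and the
# synthetic data are stand-ins, not the paper's actual methods or features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
user_features = rng.normal(size=(10_000, 12))            # one row of behavioral features per user

detector = IsolationForest(random_state=0).fit(user_features)
anomaly_scores = -detector.score_samples(user_features)  # higher = more unusual

to_review = np.argsort(anomaly_scores)[::-1][:200]       # most abnormal events are queued
print("queued for analyst review:", to_review[:10])      # for human confirmation
```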
“You can think about the system as a virtual analyst,” says CSAIL research scientist Kalyan Veeramachaneni, who developed AI2 with Ignacio Arnaldo, a chief data scientist at PatternEx and a former CSAIL postdoc. “It continuously generates new models that it can refine in as little as a few hours, meaning it can improve its detection rates significantly and rapidly.”
Veeramachaneni presented a paper about the system at last week’s IEEE International Conference on Big Data Security in New York City.
Creating cybersecurity systems that merge human- and computer-based approaches is tricky, partly because of the challenge of manually labeling cybersecurity data for the algorithms.
For example, let’s say you want to develop a computer-vision algorithm that can identify objects with high accuracy. Labeling data for that is simple: Just enlist a few human volunteers to label photos as either “objects” or “non-objects,” and feed that data into the algorithm.
But for a cybersecurity task, the average person on a crowdsourcing site like Amazon Mechanical Turk simply doesn’t have the skillset to apply labels like “DDoS” or “exfiltration attacks,” says Veeramachaneni. “You need security experts.”
That opens up another problem: Experts are busy, and they can’t spend all day reviewing reams of data that have been flagged as suspicious. Companies have been known to give up on platforms that are too much work, so an effective machine-learning system has to be able to improve itself without overwhelming its human overlords.
AI2’s secret weapon is that it fuses together three different unsupervised-learning methods, and then shows the top events to analysts for them to label. It then builds a supervised model that it can constantly refine through what the team calls a “continuous active learning system.”
Specifically, on day one of its training, AI2 picks the 200 most abnormal events and gives them to the expert. As it improves over time, it identifies more and more of the events as actual attacks, meaning that in a matter of days the analyst may only be looking at 30 or 40 events a day.
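A hedged sketch of that “continuous active learning” loop is below. The three detectors shown (isolation forest, local outlier factor, and PCA reconstruction error) are generic stand-ins rather than the specific methods used in the paper, `ask_analyst` is a hypothetical labeling function that a real deployment would replace with human review, and only the default budget of 200 events comes from the article.

```python
# Sketch of a continuous active-learning loop: an ensemble of unsupervised
# detectors ranks the day's events, an analyst labels the top `budget` of them,
# and a supervised model is retrained on everything labeled so far.
# The detectors are stand-ins, not the paper's exact methods.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import IsolationForest, RandomForestClassifier
from sklearn.neighbors import LocalOutlierFactor


def ensemble_scores(X):
    """Average rank-normalized anomaly scores from three unsupervised detectors."""
    iso = -IsolationForest(random_state=0).fit(X).score_samples(X)
    lof = -LocalOutlierFactor().fit(X).negative_outlier_factor_
    pca = PCA(n_components=3).fit(X)
    rec = np.linalg.norm(X - pca.inverse_transform(pca.transform(X)), axis=1)
    ranks = [np.argsort(np.argsort(s)) / len(s) for s in (iso, lof, rec)]
    return np.mean(ranks, axis=0)


def daily_update(X_today, labeled_batches, ask_analyst, budget=200):
    """One day of the loop: rank events, label the top `budget`, retrain."""
    scores = ensemble_scores(X_today)
    queried = np.argsort(scores)[::-1][:budget]
    labeled_batches.append((X_today[queried], ask_analyst(X_today[queried])))
    X_lab = np.vstack([x for x, _ in labeled_batches])
    y_lab = np.concatenate([y for _, y in labeled_batches])
    return RandomForestClassifier(random_state=0).fit(X_lab, y_lab)


# Example with synthetic data and a simulated analyst; in a real deployment
# the labels come from a human expert.
def simulated_analyst(X):
    return (np.abs(X).max(axis=1) > 3).astype(int)

rng = np.random.default_rng(1)
model = daily_update(rng.normal(size=(5_000, 12)), [], simulated_analyst, budget=200)
```

Running `daily_update` once per day on the latest batch of feature vectors mirrors the cadence described above; as the supervised model takes over more of the triage, the `budget` argument can shrink from 200 toward the 30 or 40 events the article mentions.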
“This paper brings together the strengths of analyst intuition and machine learning, and ultimately drives down both false positives and false negatives,” says Nitesh Chawla, the Frank M. Freimann Professor of Computer Science at the University of Notre Dame. “This research has the potential to become a line of defense against attacks such as fraud, service abuse and account takeover, which are major challenges faced by consumer-facing systems.”
The team says that AI2 can scale to billions of log lines per day, transforming the data on a minute-by-minute basis into different “features,” or discrete types of behavior that are eventually deemed “normal” or “abnormal.”
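As a rough illustration of that feature-extraction step, raw log lines can be aggregated into per-user behavioral counts over fixed time windows, for example with pandas. The column names and the one-minute window below are assumptions for illustration, not the feature set described in the paper.

```python
# Illustrative only: aggregate raw log lines into per-user features over
# one-minute windows. Column names and window size are assumptions.
import pandas as pd

logs = pd.DataFrame({
    "timestamp": pd.to_datetime(["2016-01-01 00:00:05",
                                 "2016-01-01 00:00:42",
                                 "2016-01-01 00:01:10"]),
    "user":      ["alice", "alice", "bob"],
    "event":     ["login_fail", "login_fail", "login_ok"],
    "bytes_out": [0, 0, 4096],
})
logs["failed_login"] = (logs["event"] == "login_fail").astype(int)

features = (
    logs.set_index("timestamp")
        .groupby("user")
        .resample("1min")                                   # one row per user per minute
        .agg({"event": "count", "failed_login": "sum", "bytes_out": "sum"})
        .rename(columns={"event": "event_count"})
)
print(features)   # downstream, each row is scored as "normal" or "abnormal"
```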
“The more attacks the system detects, the more analyst feedback it receives, which, in turn, improves the accuracy of future predictions,” Veeramachaneni says. “That human-machine interaction creates a beautiful, cascading effect.”
Learn more: System predicts 85 percent of cyber-attacks using input from human experts