Image: MIT News
“Information extraction” system helps turn plain text into data for statistical analysis.
Of the vast wealth of information unlocked by the Internet, most is plain text. The data necessary to answer myriad questions — about, say, the correlations between the industrial use of certain chemicals and incidents of disease, or between patterns of news coverage and voter-poll results — may all be online. But extracting it from plain text and organizing it for quantitative analysis can be prohibitively time-consuming.
Information extraction — or automatically classifying data items stored as plain text — is thus a major topic of artificial-intelligence research. Last week, at the Association for Computational Linguistics’ Conference on Empirical Methods in Natural Language Processing, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory won a best-paper award for a new approach to information extraction that turns conventional machine learning on its head.
Most machine-learning systems work by combing through training examples and looking for patterns that correspond to classifications provided by human annotators. For instance, humans might label parts of speech in a set of texts, and the machine-learning system will try to identify patterns that resolve ambiguities — such as when “her” is a direct object and when it’s an adjective.
Typically, computer scientists will try to feed their machine-learning systems as much training data as possible. That generally increases the chances that a system will be able to handle difficult problems.
In their new paper, by contrast, the MIT researchers train their system on scanty data — because in the scenario they’re investigating, that’s usually all that’s available. The system then compensates for that limitation by seeking out additional texts that present the same information in a form it finds easier to handle.
“In information extraction, traditionally, in natural-language processing, you are given an article and you need to do whatever it takes to extract correctly from this article,” says Regina Barzilay, the Delta Electronics Professor of Electrical Engineering and Computer Science and senior author on the new paper. “That’s very different from what you or I would do. When you’re reading an article that you can’t understand, you’re going to go on the web and find one that you can understand.”
Confidence boost
Essentially, the researchers’ new system does the same thing. A machine-learning system will generally assign each of its classifications a confidence score, which is a measure of the statistical likelihood that the classification is correct, given the patterns discerned in the training data. With the researchers’ new system, if the confidence score is too low, the system automatically generates a web search query designed to pull up texts likely to contain the data it’s trying to extract.
It then attempts to extract the relevant data from one of the new texts and reconciles the results with those of its initial extraction. If the confidence score remains too low, it moves on to the next text pulled up by the search string, and so on.
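The overall loop is straightforward to sketch. The following is a minimal, hypothetical illustration of the process just described — the function names, the fixed confidence threshold, and the cap on retrieved articles are assumptions made for clarity, not details of the authors’ system, which learns these decisions rather than hard-coding them:

```python
# Minimal sketch of the confidence-driven extraction loop (illustrative only).
# base_extract, build_query, web_search, and reconcile are hypothetical
# stand-ins for components the real system learns rather than hard-codes.
from typing import Callable

CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff for accepting an extraction


def extract_with_search(
    article: str,
    base_extract: Callable[[str], tuple[dict, float]],    # text -> (values, confidence)
    build_query: Callable[[str], str],                     # text -> web search query
    web_search: Callable[[str], list[str]],                # query -> candidate texts
    reconcile: Callable[[dict, dict], tuple[dict, float]], # merge two extractions
    max_articles: int = 10,
) -> dict:
    """Extract fields from `article`, falling back to easier web texts when unsure."""
    values, confidence = base_extract(article)
    if confidence >= CONFIDENCE_THRESHOLD:
        return values

    # Pull up texts likely to contain the same facts and try again, one at a time.
    for alternative in web_search(build_query(article))[:max_articles]:
        new_values, _ = base_extract(alternative)
        values, confidence = reconcile(values, new_values)
        if confidence >= CONFIDENCE_THRESHOLD:
            break
    return values
```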
“The base extractor isn’t changing,” says Adam Yala, a graduate student in the MIT Department of Electrical Engineering and Computer Science (EECS) and one of the coauthors on the new paper. “You’re going to find articles that are easier for that extractor to understand. So you have something that’s a very weak extractor, and you just find data that fits it automatically from the web.” Joining Yala and Barzilay on the paper is first author Karthik Narasimhan, also a graduate student in EECS.
Remarkably, every decision the system makes is the result of machine learning. The system learns how to generate search queries, gauge the likelihood that a new text is relevant to its extraction task, and determine the best strategy for fusing the results of multiple attempts at extraction.
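As one concrete — and deliberately simplified — illustration of what “fusing the results” could look like, a hand-coded baseline might simply keep, for each field, the candidate value extracted with the highest confidence. In the actual system even this fusion strategy is learned; the names and example values below are invented for illustration:

```python
# Illustrative, hand-coded fusion rule: for each field, keep the value with the
# highest confidence seen across all extraction attempts. In the real system
# the fusion strategy itself is learned, not fixed like this.
def fuse_by_confidence(attempts: list[dict[str, tuple[str, float]]]) -> dict[str, str]:
    """attempts: one dict per article, mapping field -> (value, confidence)."""
    best: dict[str, tuple[str, float]] = {}
    for attempt in attempts:
        for field, (value, conf) in attempt.items():
            if field not in best or conf > best[field][1]:
                best[field] = (value, conf)
    return {field: value for field, (value, _) in best.items()}


# Example: two articles disagree on the location; the higher-confidence one wins.
print(fuse_by_confidence([
    {"shooter_name": ("J. Doe", 0.62), "location": ("Springfield", 0.40)},
    {"location": ("Shelbyville", 0.85), "killed": ("3", 0.90)},
]))
# {'shooter_name': 'J. Doe', 'location': 'Shelbyville', 'killed': '3'}
```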
Just the facts
In experiments, the researchers applied their system to two extraction tasks. One was the collection of data on mass shootings in the U.S., which is an essential resource for any epidemiological study of the effects of gun-control measures. The other was the collection of similar data on instances of food contamination. The system was trained separately for each task.
In the first case — the database of mass shootings — the system was asked to extract the name of the shooter, the location of the shooting, the number of people wounded, and the number of people killed. In the food-contamination case, it extracted food type, type of contaminant, and location. In each case, the system was trained on about 300 documents.
From those documents, it learned clusters of search terms that tended to be associated with the data items it was trying to extract. For instance, the names of mass shooters were correlated with terms like “police,” “identified,” “arrested,” and “charged.” During training, for each article the system was asked to analyze, it pulled up, on average, another nine or 10 news articles from the web.
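A toy version of that query step might look like the sketch below. The trigger terms come from the example above; the query format and the headline are assumptions made for illustration, not the authors’ actual scheme, which learns how to generate queries:

```python
# Hypothetical illustration of assembling a web query from a source article's
# title plus terms the system learned to associate with a target field.
SHOOTER_NAME_TERMS = ["police", "identified", "arrested", "charged"]  # terms cited above


def build_query(article_title: str, learned_terms: list[str]) -> str:
    """Combine the source article's title with field-specific trigger terms."""
    return f'"{article_title}" ' + " ".join(learned_terms)


# Example with an invented headline:
print(build_query("Gunman opens fire at county fair", SHOOTER_NAME_TERMS))
# "Gunman opens fire at county fair" police identified arrested charged
```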
The researchers compared their system’s performance to that of several extractors trained using more conventional machine-learning techniques. For every data item extracted in both tasks, the new system outperformed its predecessors, usually by about 10 percent.
“One of the difficulties of natural language is that you can express the same information in many, many different ways, and capturing all that variation is one of the challenges of building a comprehensive model,” says Chris Callison-Burch, an assistant professor of computer and information science at the University of Pennsylvania. “[Barzilay and her colleagues] have this super-clever part of the model that goes out and queries for more information that might result in something that’s simpler for it to process. It’s clever and well-executed.”
Callison-Burch’s group is using a combination of natural-language processing and human review to build a database of information on gun violence, much like the one that the MIT researchers’ system was trained to produce. “We’ve crawled millions and millions of news articles, and then we pick out ones that the text classifier thinks are related to gun violence, and then we have humans start doing information extraction manually,” he says. “Having a model like Regina’s that would allow us to predict whether or not this article corresponded to one that we’ve already annotated would be a huge time savings. It’s something that I’d be very excited to do in the future.”
Learn more: Artificial-intelligence system surfs web to improve its performance