Web-based system automatically evaluates proposals from far-flung data scientists.
In the analysis of big data sets, the first step is usually the identification of “features” — data points with particular predictive power or analytic utility. Choosing features usually requires some human intuition. For instance, a sales database might contain revenues and date ranges, but it might take a human to recognize that average revenues — revenues divided by the sizes of the ranges — is the really useful metric.
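The sales example above can be sketched in a few lines of Python. The records and field names here are hypothetical, made up purely to illustrate a derived feature:

```python
from datetime import date

# Hypothetical sales records: total revenue over a date range.
sales = [
    {"revenue": 9000.0, "start": date(2024, 1, 1), "end": date(2024, 1, 31)},
    {"revenue": 3100.0, "start": date(2024, 2, 1), "end": date(2024, 2, 11)},
]

def average_daily_revenue(record):
    """Derived feature: revenue divided by the length of its date range."""
    days = (record["end"] - record["start"]).days + 1  # inclusive range
    return record["revenue"] / days

features = [average_daily_revenue(r) for r in sales]
```

The raw columns (revenue, dates) are already in the data; the insight a human supplies is that their ratio is the quantity with predictive value.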
MIT researchers have developed a new collaboration tool, dubbed FeatureHub, intended to make feature identification more efficient and effective. With FeatureHub, data scientists and experts on particular topics could log on to a central site and spend an hour or two reviewing a problem and proposing features. Software then tests myriad combinations of features against target data, to determine which are most useful for a given predictive task.
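The article does not describe FeatureHub's internal scoring code, but the idea of testing combinations of candidate features can be sketched as an exhaustive search over feature subsets with a toy scoring function (a real system would use cross-validated model accuracy instead; the data below is invented):

```python
from itertools import combinations

# Toy dataset: rows of candidate feature values, with binary labels.
X = [
    [1.0, 0.2, 5.0],
    [0.9, 0.8, 4.0],
    [0.1, 0.9, 5.5],
    [0.2, 0.1, 4.5],
]
y = [1, 1, 0, 0]

def score(cols):
    """Toy proxy for predictive power: mean separation between the
    class averages over the selected feature columns."""
    sep = 0.0
    for c in cols:
        pos = [X[i][c] for i in range(len(y)) if y[i] == 1]
        neg = [X[i][c] for i in range(len(y)) if y[i] == 0]
        sep += abs(sum(pos) / len(pos) - sum(neg) / len(neg))
    return sep / len(cols)

n_features = len(X[0])
best = max(
    (subset for k in range(1, n_features + 1)
            for subset in combinations(range(n_features), k)),
    key=score,
)
```

Exhaustive search is only feasible for small feature pools; with thousands of contributed features, a system like FeatureHub would need greedy or model-based selection instead.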
In tests, the researchers recruited 32 analysts with data science experience, who spent five hours each with the system, familiarizing themselves with it and using it to propose candidate features for each of two data-science problems.
The predictive models produced by the system were tested against those submitted to a data-science competition called Kaggle. The Kaggle entries had been scored on a 100-point scale, and the FeatureHub models came within three and five points, respectively, of the winning entries for the two problems.
But where the top-scoring entries were the result of weeks or even months of work, the FeatureHub entries were produced in a matter of days. And while 32 collaborators on a single data science project is a lot by today’s standards, Micah Smith, an MIT graduate student in electrical engineering and computer science who helped lead the project, has much larger ambitions.
FeatureHub, as its name suggests, was inspired by GitHub, an online repository of open-source programming projects, some of which have drawn thousands of contributors. Smith hopes that FeatureHub might someday attain a similar scale.
“I do hope that we can facilitate having thousands of people working on a single solution for predicting where traffic accidents are most likely to strike in New York City or predicting which patients in a hospital are most likely to require some medical intervention,” he says. “I think that the concept of massive and open data science can be really leveraged for areas where there’s a strong social impact but not necessarily a single profit-making or government organization that is coordinating responses.”
Smith and his colleagues presented a paper describing FeatureHub at the IEEE International Conference on Data Science and Advanced Analytics. His coauthors on the paper are his thesis advisor, Kalyan Veeramachaneni, a principal research scientist at MIT’s Laboratory for Information and Decision Systems, and Roy Wedge, who began working with Veeramachaneni’s group as an MIT undergraduate and is now a software engineer at Feature Labs, a data science company based on the group’s work.
FeatureHub’s user interface is built on top of a common data-analysis software suite called the Jupyter Notebook, and the evaluation of feature sets is performed by standard machine-learning software packages. Features must be written in the Python programming language and must follow a template that intentionally keeps the syntax simple. A typical feature requires between five and 10 lines of code.
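The paper's actual template is not reproduced here, but a contributed feature in this spirit might be a short function mapping the shared dataset to one value per row. The dataset layout, field name, and function below are assumptions for illustration, not FeatureHub's real API:

```python
# Hypothetical feature in a FeatureHub-style template: a small function
# that takes the raw dataset and returns one feature value per row.
def days_since_last_purchase(dataset):
    """Feature: gap in days between each customer's last two purchases
    (0 for customers with fewer than two purchases)."""
    return [
        dates[-1] - dates[-2] if len(dates) >= 2 else 0
        for dates in dataset["purchase_days"]
    ]

rows = {"purchase_days": [[1, 10, 14], [3], [2, 30]]}
feature = days_since_last_purchase(rows)
```

Keeping each feature this small is what makes crowdsourcing workable: a contributor can propose one in minutes, and the platform's evaluation code handles the rest.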
The MIT researchers wrote code that mediates between the other software packages and manages data, pooling features submitted by many different users and tracking those collections of features that perform best on particular data analysis tasks.
In the past, Veeramachaneni’s group has developed software that automatically generates features by inferring relationships between data from the manner in which they’re organized. When that organizational information is missing, however, the approach is less effective.
Still, Smith imagines, automatic feature synthesis could be used in conjunction with FeatureHub, getting projects started before volunteers have begun to contribute to them, saving the grunt work of enumerating the obvious features, and augmenting the best-performing sets of features contributed by humans.