
Computer scientists at The University of Texas at Austin have taught an artificial intelligence agent to do something that usually only humans can do: take a few quick glimpses around and infer its whole environment. This skill is necessary for developing effective search-and-rescue robots that could one day improve the safety and success of dangerous missions.
The team, led by professor Kristen Grauman, Ph.D. candidate Santhosh Ramakrishnan and former Ph.D. candidate Dinesh Jayaraman (now at the University of California, Berkeley) published their results today in the journal Science Robotics.
Most AI agents—computer systems that could endow robots or other machines with intelligence—are trained for very specific tasks—such as to recognize an object or estimate its volume—in an environment they have experienced before, like a factory. But the agent developed by Grauman and Ramakrishnan is general purpose, gathering visual information that can then be used for a wide range of tasks.
“We want an agent that’s generally equipped to enter environments and be ready for new perception tasks as they arise,” Grauman said. “It behaves in a way that’s versatile and able to succeed at different tasks because it has learned useful patterns about the visual world.”
The scientists used deep learning, a type of machine learning inspired by the brain’s neural networks, to train their agent on thousands of 360-degree images of different environments.
Now, when presented with a scene it has never seen before, the agent uses its experience to choose a few glimpses, like a tourist standing in the middle of a cathedral taking a few snapshots in different directions, that together add up to less than 20 percent of the full scene. What makes the system so effective is that it does not take pictures in random directions: after each glimpse, it chooses the next shot it predicts will add the most new information about the whole scene. It is much like visiting a grocery store for the first time: if you see apples, you expect to find oranges nearby, but to locate the milk you would glance the other way. From its few glimpses, the agent infers what it would have seen had it looked in all the other directions, reconstructing a full 360-degree image of its surroundings.
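The glimpse-selection loop can be sketched in miniature. The toy below is purely illustrative and not the authors' actual model: the panorama is a ring of scalar "sectors," the completion network is replaced by a simple mean-fill, and the information score uses oracle access to the true scene (the real agent instead predicts informativeness with a trained network). All names here (`reconstruct`, `info_gain`, `BUDGET`) are invented for the sketch.

```python
import random

random.seed(0)

N_SECTORS = 16
# Toy "panorama": one brightness value per azimuth sector around the agent.
panorama = [random.uniform(0.0, 1.0) for _ in range(N_SECTORS)]

def reconstruct(observed):
    """Infer unseen sectors as the mean of the observed ones
    (a crude stand-in for the learned scene-completion network)."""
    mean = sum(panorama[i] for i in observed) / len(observed)
    return [panorama[i] if i in observed else mean for i in range(N_SECTORS)]

def info_gain(observed, candidate):
    """How much would adding this glimpse change the reconstruction?
    (Oracle proxy; the real agent predicts this without peeking.)"""
    before = reconstruct(observed)
    after = reconstruct(observed | {candidate})
    return sum(abs(a - b) for a, b in zip(after, before))

observed = {0}   # first glimpse: straight ahead
BUDGET = 3       # 3 of 16 directions, i.e. under 20 percent of the scene
while len(observed) < BUDGET:
    best = max((i for i in range(N_SECTORS) if i not in observed),
               key=lambda c: info_gain(observed, c))
    observed.add(best)

full_scene = reconstruct(observed)
print(sorted(observed), len(full_scene))
```

Each iteration greedily picks the direction expected to change the agent's belief the most, which is the core of the "next most informative shot" behavior described above.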
“Just as you bring in prior information about the regularities that exist in previously experienced environments—like all the grocery stores you have ever been to—this agent searches in a nonexhaustive way,” Grauman said. “It learns to make intelligent guesses about where to gather visual information to succeed in perception tasks.”
One of the main challenges the scientists set for themselves was to design an agent that can work under tight time constraints. This would be critical in a search-and-rescue application. For example, in a burning building a robot would be called upon to quickly locate people, flames and hazardous materials and relay that information to firefighters.

A new AI agent developed by researchers at The University of Texas at Austin takes a few "glimpses" of its surroundings, representing less than 20 percent of the full 360-degree view, and infers the rest of the environment. Credit: David Steadman/Santhosh Ramakrishnan/University of Texas at Austin.
For now, the new agent operates like a person standing in one spot, able to point a camera in any direction but not to move to a new position. Equivalently, the agent could gaze upon an object it is holding and decide how to turn the object to inspect another side of it. Next, the researchers are developing the system further to work on a fully mobile robot.
Using supercomputers at UT Austin's Texas Advanced Computing Center and Department of Computer Science, the team trained the agent in about a day with an artificial intelligence approach called reinforcement learning. Under Ramakrishnan's leadership, the team developed a method for speeding up the training: building a second agent, called a sidekick, to assist the primary agent.
“Using extra information that’s present purely during training helps the [primary] agent learn faster,” Ramakrishnan said.
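One common way to use training-only information, sketched below under assumed details (the article does not specify the sidekick's exact mechanism), is reward shaping: the sidekick, which is allowed privileged access to the full panorama during training, rates how informative each glimpse was, and that rating is added to the primary agent's reward. At test time the sidekick term is dropped. The function names and the simple "distance from the scene mean" score are hypothetical.

```python
import random

random.seed(1)

def sidekick_score(panorama, view):
    """Sidekick's hint, using privileged full-scene access (training only):
    rate a view as informative if it deviates from the scene's average."""
    mean = sum(panorama) / len(panorama)
    return abs(panorama[view] - mean)

def shaped_reward(task_reward, panorama, view, weight=0.5):
    """Primary agent's training reward, augmented by the sidekick's hint.
    The sidekick term is removed at deployment, leaving only task_reward."""
    return task_reward + weight * sidekick_score(panorama, view)

# Toy scene: one brightness value per direction.
panorama = [random.uniform(0.0, 1.0) for _ in range(8)]
reward = shaped_reward(task_reward=1.0, panorama=panorama, view=2)
print(round(reward, 3))
```

Because the shaping term is only a training-time bonus, the primary agent never needs the full panorama when deployed, matching the idea that the extra information is "present purely during training."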