
Tuesday, June 9, 2015

S. KOREA ROBOT WINS FIRST PRIZE AT DARPA ROBOT FINALS

FROM:  U.S. DEFENSE DEPARTMENT

Right:  Team Kaist’s robot DRC-Hubo uses a tool to cut a hole in a wall during the DARPA Robotics Challenge Finals, June 5-6, 2015, in Pomona, Calif. Team Kaist won the top prize at the competition. DARPA photo
   
Robots from South Korea, U.S. Win DARPA Finals
By Cheryl Pellerin
DoD News, Defense Media Activity

POMONA, Calif., June 7, 2015 – A robot from South Korea took first prize and two American robots took second and third prizes here yesterday in the two-day robotic challenge finals held by the Defense Advanced Research Projects Agency.

Twenty-three human-robot teams participating in the DARPA Robotics Challenge, or DRC, finals competed for $3.5 million in prizes, working to get through eight tasks in an hour, under their own onboard power and with severely degraded communications between robot and operator.

A dozen U.S. teams and 11 from Japan, Germany, Italy, South Korea and Hong Kong competed in the outdoor competition.

DARPA launched the DRC in response to the nuclear disaster at Fukushima, Japan, in 2011 and the need for help to save lives in the toxic environment there.

Progress in Robotics

The DRC’s goal was to accelerate progress in robotics so robots can more quickly gain the dexterity and robustness they need to enter areas too dangerous for people and mitigate disaster impacts.

Robot tasks were relevant to disaster response -- driving alone, walking through rubble, tripping circuit breakers, using a tool to cut a hole in a wall, turning valves and climbing stairs.

Each team had two tries at the course, with the better performance and time used as the official score. All three winning teams finished with eight points, so they were ranked from first to third by least time on the course.
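In other words, the ranking was lexicographic: points first, course time as the tie-breaker. Here is a minimal sketch of that rule in Python; the team names, point totals and times are placeholders, not official results.

```python
# Hedged sketch of the DRC ranking rule described above: rank by points
# (descending), then by course time (ascending). Values are illustrative.

teams = [
    {"name": "Team A", "points": 8, "time_min": 50.2},
    {"name": "Team B", "points": 8, "time_min": 44.5},
    {"name": "Team C", "points": 8, "time_min": 55.3},
]

ranked = sorted(teams, key=lambda t: (-t["points"], t["time_min"]))
for place, team in enumerate(ranked, start=1):
    print(place, team["name"])
```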

DARPA program manager and DRC organizer Gill Pratt congratulated the 23 participating teams and thanked them for helping open a new era of human-robot partnerships.

Loving Robots

The DRC was open to the public, and more than 10,000 people over two days watched from the Fairplex grandstand as each robot ran its course. The venue was formerly known as the Los Angeles County Fairgrounds.

"These robots are big and made of lots of metal, and you might assume people seeing them would be filled with fear and anxiety," Pratt said during a press briefing at the end of day 2.

"But we heard groans of sympathy when those robots fell, and what did people do every time a robot scored a point? They cheered!” he added.

Pratt said this could be one of the biggest lessons from DRC -- “the potential for robots not only to perform technical tasks for us but to help connect people to one another."

South Korean Winning Team

Team Kaist from Daejeon, South Korea, and its robot DRC-Hubo took first place and the $2 million prize. Hubo comes from the words ‘humanoid robot.’

Team Kaist is from the Korea Advanced Institute of Science and Technology, which professor JunHo Oh of the Mechanical Engineering Department called “the MIT of Korea.” Oh led Team Kaist to victory here.

In his remarks at the DARPA press conference, Oh noted that researchers from Rainbow Co., a commercial spinoff of the university, built the Hubo robot hardware.

The professor said his team’s first-place prize doesn’t make DRC-Hubo the best robot in the world, but he’s happy with the prize, which he said helps demonstrate Korea’s technological capabilities.

Team IHMC Robotics

Coming in second with a $1 million prize is Team IHMC Robotics of Pensacola, Florida -- the Institute of Human and Machine Cognition -- and its robot Running Man.

Jerry Pratt leads a research group at IHMC that works to understand and model human gait and its applications in robotics, human assistive devices and man-machine interfaces.

“Robots are really coming a long way,” Pratt said.

“Are you going to see a lot more of them? It's hard to say when you'll really see humanoid robots in the world,” he added. “But I think this is the century of the humanoid robot. The real question is what decade? And the DRC will make that decade come maybe one decade sooner.”

Team Tartan Rescue

In third place is Team Tartan Rescue of Pittsburgh, winning $500,000. Its robot is CHIMP, short for CMU Highly Intelligent Mobile Platform. Team members are from Carnegie Mellon University and the National Robotics Engineering Center.

Tony Stentz, NREC director, led Team Tartan Rescue, and during the press conference called the challenge “quite an experience.”

That experience was best captured, he said, “with our run yesterday when we had trouble all through the course, all kinds of problems, things we never saw before.”

While that was happening, Stentz said, the team operating the robot from another location kept their cool.

Promise for the Technology

“They figured out what was wrong, they tapped their deep experience in practicing with the machine, they tapped the tools available at their fingertips, and they managed to get CHIMP through the entire course, doing all of the tasks in less than an hour,” he added.

“That says a lot about the technology and it says a lot about the people,” Stentz said, “and I think it means that there's great promise for this technology.”

All the winners said they would put most of the prize money into robotics research and share a portion with their team members.

After the day 2 competition, Arati Prabhakar, DARPA director, said this is the end of the 3-year-long DARPA Robotics Challenge but “the beginning of a future in which robots can work alongside people to reduce the toll of disasters."

Tuesday, April 14, 2015

ARMY LETHALITY CHIEF PREDICTS FUTURE OF MILITARY ROBOTICS

FROM:  U.S. ARMY

WASHINGTON (April 10, 2015) -- Doctrine drives training and modernization, and new doctrine to be released in January 2016 will provide impetus for growth in the rapidly evolving field of robotics, Lt. Col. Matt Dooley predicted.

Dooley, chief of the lethality branch at the Army Capabilities Integration Center, discussed the future of robotics in the Army during the National Defense Industrial Association-sponsored Ground Robotics Capabilities Conference and Exhibition, here, April 8.

Dooley said the new doctrine, "U.S. Army Robotics and Autonomous Systems Strategy," will drive science and technology investments, inform acquisition decisions, further the integration of robots throughout the force and codify the path forward.

Currently, there are references to manned-unmanned teaming and science and technology investments in Army Training and Doctrine Command, or TRADOC, Pamphlet 525-3-1, also called the "Army Operating Concept." But those references are in the appendix of that document. Right now, there is no single Army doctrinal manual devoted wholly to robotics.

Robotic systems include both ground and air vehicles, but Dooley's focus at the panel discussion was the ground aspect.

While the sky is full of unmanned air vehicles, Dooley said, squads have yet to see a similar number of systems in use on the ground, although there are some being used for explosive ordnance disposal and improvised explosive device, or IED, clearing operations.

Systems that a squad might find useful, he said, are those that can carry supplies, locate targets, and carry out surveillance and reconnaissance operations.

Dooley stressed, however, that no work is being done to give unmanned ground systems autonomous authority to engage targets.

War is essentially a human endeavor, he said, and the trigger-puller will be the Soldier. Besides that, Department of Defense, or DoD, Directive 3000.09 restricts the use of lethal force by autonomous systems. The directive reads, in part: "Human-supervised autonomous weapon systems may be used to select and engage targets, with the exception of selecting humans as targets."

That restriction does not negate the tremendous capabilities robots bring to the battlefield, Dooley said.

ROBOTICS AND AUTONOMOUS SYSTEMS STRATEGY PREVIEW

Dooley was carrying a draft of the doctrine, which is being reviewed by various stakeholders, so he could not go into detail about its contents. But he did provide overall themes.

The Robotics and Autonomous Systems strategy, or RAS, will tie robotics in with future expeditionary maneuver capabilities that enable mutual support and mission command across extended distances, where forces are widely dispersed, he said.

Robotics will help Soldiers make contact with the enemy under conditions favorable to Soldiers, while presenting multiple dilemmas to the enemy. The human will always be in the loop when deciding to use lethal force, he said.

The new doctrinal manual will also cover the value of robots in force protection, he said, which raises a critical question: what cost will the Army and the United States be willing to pay to develop robotics systems that can demonstrably save lives? It is "a moral and ethical decision" that will have to be made, he said.

Dooley explained that very expensive add-ons can increase the force protection a robotic system provides, but a cost-versus-capability curve will need to be drawn to determine just how much Soldier protection the nation is willing to pay for.

Safeguards will also need to be built into such systems, he said, citing the DoD guidance which reads: "Semi-autonomous weapon systems that are onboard or integrated with unmanned platforms must be designed such that, in the event of degraded or lost communications, the system does not autonomously select and engage individual targets or specific target groups that have not been previously selected by an authorized human operator."

PRICKLY QUESTION

With the floor open for questions, a representative from industry asked why the Army would consider spending limited resources to develop robotics capabilities that will likely end up "flawed." Additionally, he said, the Army has already been successful using contractors to drive supply convoys, so there is likely no need for autonomous or semi-autonomous vehicles.

"The Army will need to articulate what levels [of protection] we get from our investments," Dooley said, and demonstrate that such autonomous robotics systems are not "pie-in-the-sky" investments.

Retired Army Lt. Col. Joe Bell, also on the panel, said "there's an urgent need to reduce risk [to Soldiers] today," not 10 years hence. "That's our No. 1 motivator."

Bell, now involved in the commercial defense industry, laid out a business model for robotics, saying it can cost $200,000 to armor some vehicles, not including storing and maintaining the armor kits. That would have to be factored into the cost-benefit analysis of using an autonomous or semi-autonomous vehicle.

A semi-autonomous system used in a leader-follower configuration would also save lives, because if an unmanned vehicle hit a mine or took enemy fire, no one would be killed.

Bell said if current technology were applied to a leader-follower system, as few as two Soldiers could convoy four to eight trucks.
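Neither panelist described an implementation, but the leader-follower idea Bell outlined is easy to sketch: each unmanned truck replays the leader's recorded "breadcrumb" trail while holding a standoff gap. The waypoint logic below is a hypothetical illustration, not any fielded system.

```python
import math

def next_waypoint(trail, follower_pos, next_idx, arrive_m=5.0, standoff_m=50.0):
    """Return (waypoint, next_idx): follow the leader's breadcrumb trail
    in order, advancing past breadcrumbs as they are reached, and hold
    position once within `standoff_m` of the leader (the newest point)."""
    if math.dist(follower_pos, trail[-1]) <= standoff_m:
        return None, next_idx  # gap reached; don't close further
    while (next_idx < len(trail)
           and math.dist(follower_pos, trail[next_idx]) <= arrive_m):
        next_idx += 1          # breadcrumb reached; move to the next one
    if next_idx == len(trail):
        return None, next_idx
    return trail[next_idx], next_idx

trail = [(0, 0), (40, 5), (90, 10), (150, 12)]  # leader's path, meters
print(next_waypoint(trail, follower_pos=(2, 1), next_idx=0))  # ((40, 5), 1)
```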

Although there would be fewer Soldiers for the enemy to target, that also brings up the problem of less firepower. This issue could be addressed, he said, through mission command, meaning the commander would need to closely monitor the situation and have backup tactics, techniques and procedures in place to handle the unexpected.

Jim Parker, another panelist, argued against the notion that robotics is too expensive or not ready for development.

He said the Army is already making robotics work. At Fort Bragg, North Carolina, and at the U.S. Military Academy at West Point, New York, for instance, autonomous vehicles are being tested to shuttle visitors and personnel around the installations.

Parker said that such incremental improvements will serve as building blocks toward the ultimate goal of negotiating off-road terrain in difficult weather. Parker is the associate director for Ground Vehicle Robotics at the Army Tank Automotive Research, Development and Engineering Center.

Friday, April 10, 2015

DOCTORS TRAIN WITH HUMAN PATIENT SIMULATOR

FROM:  NATIONAL SCIENCE FOUNDATION
How robots can help build better doctors
Research seeks to make better 'human patient simulators'

A young doctor leans over a patient who has been in a serious car accident and must surely be in pain. The doctor's trauma team examines the patient's pelvis and rolls her onto her side to check her spine. They scan the patient's abdomen with a rapid ultrasound machine, finding fluid. They insert a tube in her nose. Throughout the procedure, the patient's face remains rigid, showing no signs of pain.

The patient's facial demeanor isn't a result of stoicism--it's a robot, not a person. The trauma team is training on a "human patient simulator" (HPS), a tool that enables clinicians to practice their skills before treating real patients. HPS systems have evolved over the past several decades from mannequins into machines that can breathe, bleed and expel fluids. Some models have pupils that contract when hit by light. Others have entire physiologies that can change. They come in life-sized forms that resemble both children and adults.

But they could be better, said Laurel D. Riek, a computer science and engineering professor at the University of Notre Dame. As remarkable as modern patient simulators are, they have two major limitations.

"Their faces don't actually move, and they are unable to sense or respond to the environment," she said.

Riek, a roboticist, is designing the next generation of HPS systems. Her NSF-supported research explores new means for the robots to exhibit realistic, clinically relevant facial expressions and respond automatically to clinicians in real time.

"This work will enable hundreds of thousands of doctors, nurses, EMTs, firefighters and combat medics to practice their treatment and diagnostic skills extensively and safely on robots before treating real patients," she said.

One novel aspect of Riek's research is the development of new algorithms that use data from real patients to generate simulated facial characteristics. For example, Riek and her students have recently completed a pain simulation project and are the first research group to synthesize pain using patient data. This work won them best overall paper and best student paper at the International Meeting on Simulation in Healthcare, the top medical simulation conference.
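The article doesn't detail those algorithms, but a common way to drive a simulator face is to blend facial action-unit (AU) intensities between recorded exemplars. The sketch below is hypothetical: the AU names and exemplar values are invented for illustration, not drawn from Riek's patient data.

```python
# Hypothetical sketch: interpolating facial action-unit (AU) intensities
# between recorded exemplars to render a target pain level.

exemplars = {
    0.0: {"brow_lower": 0.0, "eye_squeeze": 0.0, "lip_raise": 0.0},  # no pain
    1.0: {"brow_lower": 0.9, "eye_squeeze": 0.8, "lip_raise": 0.7},  # severe
}

def synthesize(pain_level):
    """Linearly blend AU intensities for a pain level in [0, 1]."""
    lo, hi = exemplars[0.0], exemplars[1.0]
    return {au: lo[au] + pain_level * (hi[au] - lo[au]) for au in lo}

print(synthesize(0.4))
```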

Riek's team is now working on an interactive stroke simulator that can automatically sense and respond to learners as they work through a case. Stroke is the fifth leading cause of death in the United States, yet many of these deaths could be prevented through faster diagnosis and treatment.

"With current technology, clinicians are sometimes not learning the right skills. They are not able to read diagnostic clues from the face," she said.

Yet learning to read those clues could yield lifesaving results. Preventable medical errors in hospitals are the third-leading cause of death in the United States.

"What's really striking about this is that these deaths are completely preventable," Riek said.

One factor contributing to those errors is clinicians missing clues and going down incorrect diagnostic paths, using incorrect treatments or wasting time. Reading facial expressions, Riek said, can help doctors improve those diagnoses. It is important that their training reflects this.

In addition to modeling and synthesizing patient facial expressions, Riek and her team are building a new, fully expressive robot head. By employing 3-D printing, they are working to produce a robot that is low-cost and will one day be available to researchers and hobbyists in addition to clinicians.

The team has engineered the robot to have interchangeable skins, so that the robot's age, race and gender can be easily changed. This will enable researchers to explore social factors or "cultural competency" in new ways.

"Clinicians can create different patient histories and backgrounds and can look at subtle differences in how healthcare workers treat different kinds of patients," Riek said.

Riek's work has the potential to help address the patient safety problem, enabling clinicians to take part in simulations otherwise impossible with existing technology.

-- Rob Margetta
Investigators
Laurel Riek
Related Institutions/Organizations
University of Notre Dame

Monday, February 16, 2015

THE HOUSEHOLD BOT

FROM:  NATIONAL SCIENCE FOUNDATION
Human insights inspire solutions for household robots
New algorithms designed by Berkeley and UMass researchers allow autonomous systems to deal with uncertainty

People typically consider doing the laundry to be a boring chore. But laundry is far from boring for artificial intelligence (AI) researchers like Siddharth Srivastava, a scientist at the United Technologies Research Center, Berkeley.

To AI experts, programming a robot to do the laundry represents a challenging planning problem because current sensing and manipulation technology is not good enough to identify precisely the number of clothing pieces that are in a pile and the number that are picked up with each grasp. People can easily cope with this type of uncertainty and come up with a simple plan. But roboticists for decades have struggled to design an autonomous system able to do what we do so casually--clean our clothes.

In work done at the University of California, Berkeley, and presented at the Association for the Advancement of Artificial Intelligence conference in Austin, Srivastava (working with Abhishek Gupta, Pieter Abbeel and Stuart Russell from UC Berkeley and Shlomo Zilberstein from University of Massachusetts, Amherst) demonstrated a robot that is capable of doing laundry without any specific knowledge of what it has to wash.

Earlier work by Abbeel's group had demonstrated solutions for the sorting and folding of clothes. The laundry task serves as an example of the wide range of daily tasks that we do without thinking but that have, until now, proved difficult for automated tools assisting humans.

"The widely imagined helper robots of the future are expected to 'clear the table,' 'do laundry' or perform day-to-day tasks with ease," Srivastava said. "Currently however, computing the required behavior for such tasks is a challenging problem--particularly when there's uncertainty in resource or object quantities."

Humans, on the other hand, solve such problems with barely a conscious effort. In their work, the researchers showed how to compute correct solutions to problems by using some assumptions about the uncertainty.

"The main issue is how to develop what we call 'generalized plans,'" said Zilberstein, a professor of computer science and director of the Resource Bound Reasoning Lab at UMass Amherst. "These are plans that don't just work in a particular situation that is very well defined and gets you to a particular goal that is also well defined, but rather ones that work on a whole range of situations and you may not even know certain things about it."

The researchers' key insight was to use human behavior--the almost unconscious action of pulling, stuffing, folding and piling--as a template, adapting both the repetitive and thoughtful aspects of human problem-solving to handle uncertainty in their computed solutions.

By doing so, they enabled a PR2 robot to do the laundry without knowing how many and what type of clothes needed to be washed.
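The team's actual plan representation is more sophisticated, but the flavor of a generalized plan is easy to convey: a loop with a sensing condition that works for any unknown pile size. A toy sketch, with a simulated grasp that picks up an uncertain number of items:

```python
import random

def grasp(pile):
    """Simulated grasp: picks up an uncertain number of items (1-3),
    which the robot cannot count exactly in advance."""
    n = min(len(pile), random.randint(1, 3))
    return [pile.pop() for _ in range(n)]

def do_laundry(pile):
    """A generalized plan: the same loop works for any pile size."""
    washed = []
    while pile:                # sensing action: "is the pile empty?"
        washed += grasp(pile)  # action with uncertain effect magnitude
    return washed

print(len(do_laundry(["shirt"] * 7)))  # always 7, however grasps fall out
```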

Of the 13 or so tasks involved in the laundry problem, the team's system completed more than half autonomously and nearly completed the rest--by far the most effective demonstration of robotic laundry-doing to date.

The framework that Srivastava and his team developed combines several popular planning paradigms that have been developed in the past using complex control structures such as loops and branches and optimizes them to run efficiently on modern hardware. It also incorporates an effective approach for computing plans by learning from examples, rather than through rigid instructions or programs.

"What's particularly exciting is that these methods provide a way forward in a problem that's well known to be computationally unsolvable in the worst case," Srivastava said. "We identified a simpler formulation that is solvable and also covers many useful scenarios."

"It is exciting to see how this breakthrough builds upon NSF-funded efforts tackling a variety of basic-research problems including planning, uncertainty, and task repetition," said Héctor Muñoz-Avila, program director at NSF's Robust Intelligence cluster.

Though laundry robots are an impressive, and potentially time-saving, application of AI, the framework that Srivastava and his team developed can be applied to a range of problems. From manufacturing to space exploration to search-and-rescue operations, any situation where artificially intelligent systems must act, despite some degree of uncertainty, can be addressed with their method.

"Using this approach, solutions to high-level planning can be generated automatically," Srivastava said. "There's more work to be done in this direction, but eventually we hope such methods will replace tedious and error-prone task-specific programming for robots."

-- Aaron Dubrow, NSF
-- Siddharth Srivastava, United Technologies Research Center
Investigators
Siddharth Srivastava
Shlomo Zilberstein
Related Institutions/Organizations
United Technologies Research Center
University of Massachusetts Amherst
Locations
Berkeley, California
Amherst, Massachusetts
Related Programs
Robust Intelligence
Related Awards
#0915071 RI: Small: Foundations and Applications of Generalized Planning
Years Research Conducted
2009 - 2015

Total Grants
$503,519

Monday, February 9, 2015

AI AND SAFE SELF-DRIVING CARS

FROM:  NATIONAL SCIENCE FOUNDATION
Programming safety into self-driving cars
UMass researchers improve artificial intelligence algorithms for semi-autonomous vehicles
February 2, 2015

For decades, researchers in artificial intelligence, or AI, worked on specialized problems, developing theoretical concepts and workable algorithms for various aspects of the field. Computer vision, planning and reasoning experts all struggled independently in areas that many thought would be easy to solve, but which proved incredibly difficult.

However, in recent years, as the individual aspects of artificial intelligence matured, researchers began bringing the pieces together, leading to amazing displays of high-level intelligence: from IBM's Watson to the recent poker-playing champion to the ability of AI to recognize cats on the internet.

These advances were on display this week at the 29th conference of the Association for the Advancement of Artificial Intelligence (AAAI) in Austin, Texas, where interdisciplinary and applied research were prevalent, according to Shlomo Zilberstein, the conference committee chair and co-author on three papers at the conference.

Zilberstein studies the way artificial agents plan their future actions, particularly when working semi-autonomously--that is to say in conjunction with people or other devices.

Examples of semi-autonomous systems include co-robots working with humans in manufacturing, search-and-rescue robots that can be managed by humans working remotely and "driverless" cars. It is the latter topic that has particularly piqued Zilberstein's interest in recent years.

The marketing campaigns of leading auto manufacturers have presented a vision of the future where the passenger (formerly known as the driver) can check his or her email, chat with friends or even sleep while shuttling between home and the office. Some prototype vehicles included seats that swivel back to create an interior living room, or as in the case of Google's driverless car, a design with no steering wheel or brakes.

Except in rare cases, it's not clear to Zilberstein that this vision for the vehicles of the near future is a realistic one.

"In many areas, there are lots of barriers to full autonomy," Zilberstein said. "These barriers are not only technological, but also relate to legal and ethical issues and economic concerns."

In his talk at the "Blue Sky" session at AAAI, Zilberstein argued that in many areas, including driving, we will go through a long period where humans act as co-pilots or supervisors, passing off responsibility to the vehicle when possible and taking the wheel when the driving gets tricky, before the technology reaches full autonomy (if it ever does).

In such a scenario, the car would need to communicate with drivers to alert them when they need to take over control. In cases where the driver is non-responsive, the car must be able to autonomously make the decision to safely move to the side of the road and stop.

"People are unpredictable. What happens if the person is not doing what they're asked or expected to do, and the car is moving at sixty miles per hour?" Zilberstein asked. "This requires 'fault-tolerant planning.' It's the kind of planning that can handle a certain number of deviations or errors by the person who is asked to execute the plan."

With support from the National Science Foundation (NSF), Zilberstein has been exploring these and other practical questions related to the possibility of artificial agents that act among us.

Zilberstein, a professor of computer science at the University of Massachusetts Amherst, works with human studies experts from academia and industry to help uncover the subtle elements of human behavior that one would need to take into account when preparing a robot to work semi-autonomously. He then translates those ideas into computer programs that let a robot or autonomous vehicle plan its actions--and create a plan B in case of an emergency.

There are a lot of subtle cues that go into safe driving. Take, for example, a four-way stop. Officially, the first car to arrive goes first, but in actuality, people watch each other to see if and when to make their move.

"There is a slight negotiation going on without talking," Zilberstein explained. "It's communicating by your action such as eye contact, the wave of a hand, or the slight revving of an engine."

In trials, autonomous vehicles often sit paralyzed at such stops, unable to safely read the cues of the other drivers on the road. This "undecidedness" is a big problem for robots. A recent paper by Alan Winfield of Bristol Robotics Laboratory in the UK showed how robots, when faced with a difficult decision, will often process for such a long period of time as to miss the opportunity to act. Zilberstein's systems are designed to remedy this problem.

"With some careful separation of objectives, planning algorithms could address one of the key problems of maintaining 'live state', even when goal reachability relies on timely human interventions," he concluded.

The ability to tailor one's trip based on human-centered factors--like how attentive the driver can be or the driver's desire to avoid highways--is another aspect of semi-autonomous driving that Zilberstein is exploring.

In a paper with Kyle Wray from the University of Massachusetts Amherst and Abdel-Illah Mouaddib from the University of Caen in France, Zilberstein introduced a new model and planning algorithm that allows semi-autonomous systems to make sequential decisions in situations that involve multiple objectives--for example, balancing safety and speed.

Their experiment focused on a semi-autonomous driving scenario where the decision to transfer control depended on the driver's level of fatigue. They showed that, using their new algorithm, a vehicle was able to favor roads where it can drive autonomously when the driver is fatigued, thus maximizing driver safety.

"In real life, people often try to optimize several competing objectives," Zilberstein said. "This planning algorithm can do that very quickly when the objectives are prioritized. For example, the highest priority may be to minimize driving time and a lower priority objective may be to minimize driving effort. Ultimately, we want to learn how to balance such competing objectives for each driver based on observed driving patterns."

It's an exciting time for artificial intelligence. The fruits of many decades of labor are finally being deployed in real systems, and machine learning is being adopted more widely, and for more purposes, than anyone had anticipated.

"We are beginning to see these kinds of remarkable successes that integrate decades-long research efforts in a variety of AI topics," said Héctor Muñoz-Avila, program director in NSF's Robust Intelligence cluster.

Indeed, over many decades, NSF's Robust Intelligence program has supported foundational research in artificial intelligence that, according to Zilberstein, has given rise to the amazing smart systems that are beginning to transform our world. But the agency has also supported researchers like Zilberstein who ask tough questions about emerging technologies.

"When we talk about autonomy, there are legal issues, technological issues and a lot of open questions," he said. "Personally, I think that NSF has been able to identify these as important questions and has been willing to put money into them. And this gives the U.S. a big advantage."

-- Aaron Dubrow, NSF

Saturday, November 22, 2014

THE TRAINING OF A RESEARCH ROBOT

FROM:   NATIONAL SCIENCE FOUNDATION 
A day in the life of Robotina
What might daily life be like for a research robot that's training to work closely with humans?

On the day of the Lego experiment, I roll out of my room early. I scan the lab with my laser, which sits a foot off the floor, and see a landscape of points and planes. My first scan turns up four dense dots, which I deduce to be a table's legs...

Robotina is a sophisticated research robot. Specifically, it's a Willow Garage PR2, designed to work with people.

But around the MIT Computer Science and Artificial Intelligence Laboratory, it is most often called Robotina.

"We chose a name for every robot in our lab. It's more personal that way," said graduate student Claudia Pérez D'Arpino, who grew up watching the futuristic cartoon The Jetsons. In the Spanish-language version, Rosie, the much-loved household robot, is called Robotina.

Robotina has been in the interactive robotics lab of engineering professor Julie Shah since 2011, where it is one of three main robot platforms Shah's team works with. Robotina is aptly named, as one aim is to give it many of Rosie's capabilities: to interact with humans and perform many types of work.

In her National Science Foundation (NSF)-supported research, Shah and her team study how humans and robots can work together more efficiently. Hers is one of dozens of projects supported by the National Robotics Initiative, a government-wide effort to develop robots that can work alongside humans.

"We focus on how robots can assist people in high-intensity situations, like manufacturing plants, search-and-rescue situations and even space exploration," Shah said.

What Shah and her team are finding in their experiments is that humans often work better and feel more at ease when Robotina is calling the shots--that is, when it's scheduling tasks. In fact, a recent MIT experiment showed that a decision-making robotic helper can make humans significantly more productive.

Part of the reason for this seems to be that people not only trust Robotina's impeccable ability to crunch numbers, they also believe the robot trusts and understands them.

As roboticists develop more sophisticated, human-like robotic assistants, it's easy to anthropomorphize them. Indeed, it's nothing new.

So, what is a day in the life of Robotina like as she struggles to learn social skills?

Give that robot a Coke

I don't just crash into things all the time like some two-year-old human, if that's what you're wondering. My mouth also contains a laser scanner, so I can get a 3-D sense of my surroundings. My eyes are cameras and I can recognize objects...

Robotina has sensors from head to base to help it interact with its environment. With proper programming, its pincher-like hands can do everything from fold towels to fetch Legos (more on that soon).

It could even sip a Coke if it wanted to. Well, not quite. But it could pick up the can without smashing it.

Matthew Gombolay, graduate student and NSF research fellow, once witnessed the act. At the time, he wasn't sure how Robotina would handle the bendable aluminum can.

"I wanted it to pick up a Coke can to see what would happen," Gombolay said. "I thought it'd be really strong and crush the Coke can, but it didn't. It stopped."

That's because Robotina has the ability to gauge how much pressure is just enough to hold or manipulate an object. It can also sense when it is too close to something--or someone--and stop.
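The PR2's real controller stack is far more sophisticated, but the stopping behavior Gombolay saw amounts to a force-limited closing loop: tighten the grip in small steps until the measured contact force reaches a limit. A toy sketch with illustrative thresholds:

```python
def close_gripper(read_force_n, step_cmd, max_force_n=10.0, max_steps=100):
    """Hypothetical force-limited grasp: close in small steps and stop
    as soon as the measured contact force reaches the limit, so a
    deformable object (like a soda can) is held, not crushed."""
    for _ in range(max_steps):
        if read_force_n() >= max_force_n:
            return "holding"   # enough force to grip; stop squeezing
        step_cmd()             # close a little further
    return "closed_empty"      # never met resistance

# Illustrative sensor: force ramps up as the fingers meet the can.
forces = iter([0.0, 0.5, 3.0, 8.0, 10.5, 12.0])
print(close_gripper(lambda: next(forces), lambda: None))  # -> "holding"
```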

Look, I'm 5-feet-and-4.7-inches tall--even taller if I stretch my metal spine--and weigh a lot more than your average human. If I sense something, I stop...

Proximity awareness in robots designed to work around people not only prevents dangerous or awkward robot-human collisions, it builds trust.

"I am definitely someone who likes to test things to failure. I want to know if I can trust it," Gombolay said. "So, I know it's not going to crush a Coke can, and I'm strong enough to crush a Coke can, so I feel safer."

Roboticists who aim to integrate robots into human teams are serious about trying to hard-wire robots to follow the spirit of Isaac Asimov's First Law of Robotics: A robot may not injure a human being.

Luckily, when decision-making robots like Robotina move into factories, they don't have to be ballet dancers. They just have to move well enough to do their jobs without hurting anyone. Perhaps as importantly, the people around them must know that the robots won't hurt them.

Robots love Legos, too

The day of the Lego experiment is eight hours of fetching Legos and making decisions about how to assemble them. The calculations are easy enough, but all that labor makes my right arm stop working. So I switch to my left...

In an exercise last fall that mimicked a manufacturing scenario, the researchers set up an experiment that required robot-human teams to build models out of Legos.

In one trial, Robotina created a schedule to complete the tasks; in the other, a human made the decisions. The goal was to determine whether having an autonomous robot on the team might improve efficiency.

The researchers found that when Robotina organized the tasks, they took less time--both for scheduling and assembly. The humans trusted the robot to make impartial decisions and do what was best for the team.

I have to decide what task needs doing next to complete the Lego structure. The humans text me when they are done with a task or ready to start a new one. I schedule the tasks based on the data. I don't play favorites. When I'm not fetching Legos or thinking, I sit quietly...

"People thought the robot would be unbiased, while a human would be biased based on skills," Gombolay said. "People generally viewed the robot positively as a good teammate."

As it turned out, workers preferred increased productivity over having more control. When it comes to assembling something, "the humans almost always perform better when Robotina makes all the decisions," Shah said.
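MIT hasn't published Robotina's scheduler alongside this article; as a stand-in, here is a minimal greedy dispatcher in the same spirit: when workers are free, hand out the ready task with the tightest deadline, impartially. Task names and deadlines are invented:

```python
import heapq

def dispatch(tasks, workers):
    """Toy greedy scheduler: repeatedly give each idle worker the
    ready task with the earliest deadline. `tasks` is a list of
    (deadline, name) pairs; assignments mimic Robotina's role of
    deciding, impartially, who does what next."""
    heapq.heapify(tasks)
    schedule = []
    while tasks:
        for worker in workers:
            if not tasks:
                break
            deadline, name = heapq.heappop(tasks)
            schedule.append((worker, name))
    return schedule

print(dispatch([(3, "fetch red bricks"), (1, "build base"),
                (2, "sort pieces")], ["human_1", "human_2"]))
```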

Predicting the unpredictable

I stand across a table from a human. I sort Legos into cups while the human takes things out of the cups. Humans are incredibly unpredictable, but I do my best to analyze where the human is most likely to move next so that I can accommodate him...

Ideally, in the factories of the future, robots will be able to predict human behavior and movement so well they can easily stay out of the way of their human co-workers.

The goal is to have robots that never even have to use their proximity sensors to avoid collisions. They already know where a human is going and can steer clear.

"Suppose you want a robot to help you out but are uncomfortable when the robot moves in an awkward way. You may be afraid to interact with it, which is highly inefficient," Pérez D'Arpino said. "At the end of the day, you want to make humans comfortable."

To help do so, Pérez D'Arpino is developing a model that will help Robotina guess what a human will do next.

In an experiment where it and a student worked together to sort Lego pieces and build models, Robotina was able to guess in only 400 milliseconds where the human would go next based on the person's body position.

The angle of the arm, elbow, wrist... they all help me determine in what direction the hand will go. I am limited only by the rate at which sensors and processors can collect and analyze data, which means I can predict where a person will move in about the average time a human eye blinks...

Once Robotina knew where the person would reach, it reached for a different spot. The result was a more natural, more fluid collaboration.
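The lab's predictor is trained on recorded human motion; a stripped-down stand-in is a nearest-centroid classifier over joint-angle features. Everything below, including the numbers, is illustrative rather than the lab's model:

```python
import math

# Hypothetical centroids of joint-angle features (shoulder, elbow, wrist,
# in degrees) observed early in reaches toward each bin -- made-up values.
centroids = {
    "left_bin":  (40.0, 95.0, 10.0),
    "right_bin": (75.0, 120.0, -5.0),
}

def predict_reach(features):
    """Guess the reach target from a partial arm pose by picking the
    nearest centroid, mirroring the idea of predicting motion from
    early body position."""
    return min(centroids, key=lambda k: math.dist(centroids[k], features))

print(predict_reach((45.0, 100.0, 8.0)))  # -> "left_bin"
```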

Putting Robotinas to work

I ask myself the same question you do: Am I reaching my full potential?

While Robotina's days now involve seemingly endless cups of Legos, its successes in the MIT lab will eventually enable it to become a more well-rounded robot. The experiments also demonstrate humans' willingness to embrace robots in the right roles.

To make them the superb, cooperative assistants envisioned by the National Robotics Initiative--to give people a better quality of life and benefit society and the economy--could require that some robots be nearly as dynamic and versatile as humans.

"An old-school way of thinking is to make a robot for each task, like the Roomba," Gombolay said. "But unless we make an advanced, general-purpose robot, we won't be able to fully realize their full potential."

To have the ideal Robotina--the Jetsons' Robotina--in our home or workplace means a lot more training days for humans and robots alike. With the help of NSF funding, progress is being made.

"We're at a really exciting time," Gombolay said.

What would I say if I could talk? Probably that I'd really like to watch that Transformers movie.

-- Sarah Bates
Investigators
Julie Shah
Related Institutions/Organizations
Massachusetts Institute of Technology
Association for the Advancement of Artificial Intelligence

Wednesday, October 8, 2014

DARPA DEMONSTRATES FIVE NEW TECHNOLOGIES UNDER DEVELOPMENT

FROM:  U.S. DEFENSE DEPARTMENT
DARPA Officials Show Hagel Technologies Under Development
American Forces Press Service

WASHINGTON, April 23, 2014 – Defense Advanced Research Projects Agency program personnel demonstrated five technologies under development to Defense Secretary Chuck Hagel in the secretary's conference room yesterday.

DARPA Director Arati Prabhakar provided the secretary with a demonstration of the agency's latest prosthetics technology.

The wounded warrior demonstrating the device was Fred Downs Jr., an old friend of Hagel's who lost an arm in a landmine explosion while fighting in Vietnam. Hagel hugged him and shook his mechanical hand, with Downs joking, "I don't want to hurt you."

"He and I worked together many years ago," said Hagel, who earned two Purple Hearts during his service as an enlisted soldier in Vietnam. "How you doing, Fred? How's your family?"

Downs demonstrated how he controls movements of the arm, which appeared to be partly covered in translucent white plastic, with two accelerometers strapped to his feet. Through a combination of foot movements, he's able to control the elbow, wrist and fingers in a variety of movements, including the “thumbs-up” sign he gave Hagel.

It took only a few hours to learn to control the arm, Downs said.

"It's the first time in 45 years, since Vietnam, I'm able to use my left hand, which was a very emotional time," he said.

Dr. Justin Sanchez, a medical doctor and program manager at DARPA who works with prosthetics and brain-related technology, told Hagel that DARPA's arm is designed to mimic the shape, size and weight of a human arm. It's modular too, so it can replace a lost hand, lower arm or a complete arm.

Hagel said such technology would have a major impact on the lives of injured troops.

"This is transformational," he said. "We've never seen anything like this before."
Next, Sanchez showed Hagel a video of a patient whose brain had been implanted with a sensor at the University of Pittsburgh, allowing her to control an arm with her thoughts.

Matt Johannes, an engineer from the Johns Hopkins University Applied Physics Laboratory, showed Hagel a shiny black hand and arm that responds to brain impulses. The next step is to put sensors in the fingers that can send sensations back to the brain.

"If you don't have line of sight on something you're trying to grab onto, you can use that sensory information to assist with that task," Johannes said.
The tactile feedback system should be operational within a few months, he said.
"People said it would be 50 years before we saw this technology in humans," Sanchez said. "We did it in a few years."

Next, officials gave Hagel an overview of the DARPA Robotic Challenge, a competition to develop a robot for rescue and disaster response that was inspired by the March 2011 Fukushima nuclear incident in Japan.

Virginia Tech University's entrant in the contest, the hulking 6-foot-2-inch Atlas robot developed by Boston Dynamics, stood in the background as Hagel was shown a video of robots walking over uneven ground and carrying things.

Brad Tousley, head of DARPA's Tactical Technology Office, explained to Hagel that Hollywood creates unrealistic expectations of robotic capability. In fact, he said, building human-like robots capable of autonomously doing things such as climbing ladders, opening doors and carrying things requires major feats of engineering and computer science.

Journalists were escorted out before the remaining three technologies could be demonstrated because of classification concerns. A defense official speaking on background told reporters that Hagel was brought up to date on the progress of three other DARPA programs:

-- Plan X, a foundational cyberwarfare program to develop platforms for the Defense Department to plan for, conduct and assess cyberwarfare in a manner similar to kinetic warfare;

-- Persistent close air support, a system to, among other things, link up joint tactical air controllers with close air support aircraft using commercially available tablets; and

-- A long-range anti-ship missile, planned to reduce dependence on intelligence, surveillance and reconnaissance platforms, network links and GPS navigation in electronic warfare environments. Autonomous guidance algorithms should allow the LRASM to use less-precise target cueing data to pinpoint specific targets in the contested domain, the official said. The program also focuses on innovative terminal survivability approaches and precision lethality in the face of advanced countermeasures.

(From a pool report.)



Tuesday, July 29, 2014

NSF REPORTS ON TELE-ROBOTICS

FROM:  NATIONAL SCIENCE FOUNDATION 
Tele-robotics puts robot power at your fingertips
University of Washington research enables robot-assisted surgery and underwater spill prevention

At the Smart America Expo in Washington, D.C., in June, scientists showed off cyber-dogs and disaster drones, smart grids and smart healthcare systems, all intended to address some of the most pressing challenges of our time.

The event brought together leaders from academia, industry and government and demonstrated the ways that smarter cyber-physical systems (CPS)--sometimes called the Internet of Things--can lead to improvements in health care, transportation, energy and emergency response, and other critical areas.

This week and next, we'll feature examples of National Science Foundation (NSF)-supported research from the Smart America Expo. Today: tele-robotics technology that puts robot power at your fingertips. (See Part 1 of the series.)

In the aftermath of an earthquake, every second counts. The teams behind the Smart Emergency Response System (SERS) are developing technology to locate people quickly and help first responders save more lives. The SERS demonstrations at the Smart America Expo incorporated several NSF-supported research projects.

Howard Chizeck, a professor of electrical engineering at the University of Washington, showed a system he's helped develop where one can log in to a Wi-Fi network in order to tele-operate a robot working in a dangerous environment.

"We're looking to give a sense of touch to tele-robotic operators, so you can actually feel what the robot end-effector is doing," Chizeck said. "Maybe you're in an environment that's too dangerous for people. It's too hot, too radioactive, too toxic, too far away, too small, too big, then a robot can let you extend the reach of a human."

The device is being used to allow surgeons to perform remote surgeries from thousands of miles away. And through a start-up called BluHaptics--started by Chizeck and Fredrik Ryden and supported by a Small Business Innovation Research grant from NSF--researchers are adapting the technology to allow a robot to work underwater and turn off a valve at the base of an off-shore oil rig to prevent a major spill.

"We're trying to develop tele-robotics for a wide range of opportunities," Chizeck said. "This is potentially a new industry, people operating in dangerous environments from a long distance."

-- Aaron Dubrow, NSF
Investigators
Fredrik Ryden
Howard Chizeck
Blake Hannaford
Tadayoshi Kohno
Related Institutions/Organizations
BluHaptics Inc
University of Washington

Sunday, June 22, 2014

TEAMS GEAR UP FOR ROUNDUP RODEO AT LOS ALAMOS NATIONAL LABORATORY

FROM:  LOS ALAMOS NATIONAL LABORATORY
Caption: A hazardous devices team robot pulls a fire hose from a reel during a Robot Rodeo competition and exercise.  Bomb squads compete in timed scenarios at Los Alamos National Laboratory.
Hazardous devices teams showcase skills at Robot Rodeo June 24-27

LOS ALAMOS, N.M., June 19, 2014—Hazardous devices teams from around the Southwest will wrangle their bomb squad robots at the eighth annual Robot Rodeo beginning Tuesday, June 24 at Los Alamos National Laboratory.

“The Robot Rodeo gives bomb squad teams the opportunity to practice and hone their skills in a lively but low-risk setting,” said Chris Ory of LANL’s Emergency Response Group and a member of the Lab’s hazardous devices team.

The rodeo gets under way at 8 a.m. in Technical Area 49, a remote section of Laboratory property near the entrance to Bandelier National Monument. Eight teams are scheduled to participate in the three-day competition. Teams compete in events and simulations, such as

searching vehicles for explosive devices
recovering a stolen weapon
navigating obstacle courses
investigating a possible homemade explosives lab
operating in darkened buildings
using common hand tools to disable a device
attacking and rendering safe large vehicle bombs
dealing with suicide bombers.
Teams scheduled to participate in this year’s event include New Mexico State Police, Los Alamos and Albuquerque Police departments, Dona Ana County Sheriff’s Office, Kirtland Air Force Base Explosive Ordnance Disposal team, Colorado Regional Bomb Squad, a team from the British army and a U.S. Army team from Fort Carson, Colo.

The Laboratory — along with Sandia National Laboratories, the Region II International Association of Bomb Technicians and Investigators, REMOTEC, U.S. Technical Working Group, QinetiQ, WMD Tech, Tactical Electronics, iRobot, ICOR Technology Inc., NABCO, Mistral Security Inc., QSA Global and Stratom — sponsor the Robot Rodeo.

Wednesday, January 8, 2014

NSF ARTICLE ON PHYSICS OF MOVEMENT FOR DESERT DWELLERS AND ROBOTS

FROM:  NATIONAL SCIENCE FOUNDATION 

Desert dwellers and 'bots reveal physics of movement
The Georgia Tech-based 'CRAB' lab investigates how organisms navigate tricky terrain

Physicist Daniel Goldman and his fellow researchers at the Georgia Institute of Technology shed light on a relatively unexplored subject--how organisms such as sea turtles and lizards move on (or within) sand.

If you've ever struggled to walk with even a modicum of grace on a soft, sandy beach, you may appreciate the question. The answers that Goldman's CRAB lab (Complex Rheology and Biomechanics Laboratory) uncovers--with the help of living animals and biologically inspired robots--deepen our understanding not only of animal survival, evolution and ecology, but also, potentially, the evolution of complex life forms on Earth. The lab's research also assists the design and engineering of robots that must traverse unstable, uneven terrain--those used in search and rescue operations at disaster sites, for example.

Goldman first investigated the properties of sand, which can act like a solid, fluid or even a gas, when he was a doctoral student in physics at the University of Texas at Austin. Later, as a postdoc in the University of California-Berkeley lab of biologist Robert J. Full (a leader in the field of nature-inspired robots), he helped investigate locomotion on complex terrain--cockroaches' climbing of vertical surfaces, for example, or spiders running over surfaces with few footholds. A fellow researcher, Wyatt Korff, was interested in movement on a different kind of complex terrain--granular, shifting media. Goldman became hooked, and the two men started working together.

"Some of the insights and tools we developed then were incredibly helpful in my early and current research, in particular, air fluidized beds as a way to control ground properties," Goldman says.

To a student or lover of critters, Goldman's job might seem like a dream. He has worked with a large variety of desert dwellers and other animals, including geckos, zebra-tailed lizards, sidewinders, ghost crabs, sandfish, wind scorpions, funnel weaver spiders and hatchling loggerhead sea turtles.

In the lab and in the field, he and his colleagues observe these animals as they creep, crawl, walk, run, slither and otherwise transport themselves over or in granular matter. The researchers pin down precise details--the flexible spines on a spider's legs that appear to facilitate movement over a wire mesh, for example, or the way a snake flattens itself when climbing a slope. Then they design robots with the physical elements and movement patterns they want to know more about. With these tests as well as computer simulations and analyses, the team can develop, challenge and refine hypotheses related to physics principles inspired by the animals' movements.

The CRAB lab's cast of robot characters to date includes a robot modeled after baby sea turtles, as well as a sandfish robot.

Flipperbot

Recently, the team studied newly hatched sea turtles hurrying across the beach to the sea--a treacherous journey many of us have seen in nature TV shows.

"The best robots people design and build can't out-compete a hatchling sea turtle whose life consists of swimming all the time and using these appendages on land only for half an hour, running from the nest. If a female makes it to adulthood she will use flippers again, of course, to lay eggs," Goldman said.

For this study, CRAB lab researcher Nicole Mazouchova and research technician Andrei Savu traveled with a mobile lab to Jekyll Island in Georgia. They video-recorded hatchlings' movements on the beach and in a portable test bed. Analyzing the videos back at the lab, they saw that on more packed sand, the baby turtles used their flippers as rigid struts and to pivot. On looser sand, however, the turtles dug in deeper and bent their wrists.

With the help of Flipperbot (you guessed it, a robot with flippers), a poppy-seed-filled test bed, plus theoretical modeling by mechanical engineer Paul Umbanhowar of Northwestern University (who also helped make the 'bot), the team confirmed that the turtles' wrist bending helped them avoid slipping and kept their bodies above the sand, minimizing friction and drag. The model revealed how digging deeper into looser sand improved performance by keeping the substrate from yielding underfoot.

"We found [the turtle] extremely sensitive to how deep it puts its flippers into the ground and that it did better when it bends its wrists," Goldman said.

They also found the turtles (and Flipperbot) were seriously hindered when trying to navigate sand that had already been disturbed by movement.

Flipperbot--whose movements are surprisingly graceful--is the first robot modeled on sea turtles and tested on granular materials. Its work may someday help engineers make more agile robots as well as advance our understanding of evolution on Earth--especially those first walkers to emerge from the sea.

"There is a lot of speculation about the mechanics which allowed early animals to walk on land," says Goldman. "They had hand-like fins or finlike feet and nobody knows in detail how they would have interacted with flowable substrates (like mud and sand)

"We have an eye on biological questions of existing organisms but also those who could have lived in the past. If you look at gazelles, cheetahs--these animals are incredibly agile over terrestrial ground, and they came from things that had no concept of terrestrial ground."

The Flipperbot findings may be useful in other ways as well, such as informing sea turtle conservation strategies.

Sandfish robot

In various studies, Goldman's team has uncovered patterns that can help the engineering of search and rescue robots designed to move over and into debris piles and wreckage. It confirmed, for example, something scientists long suspected--that the chiseled head of the sandfish--a lizard found in North Africa--helps it dive underground. Robot tests showed that the angular head shape not only reduces drag but also generates greater lift forces.

Using x-ray imaging to reveal how the sandfish moves under the surface, the researchers found that to escape predators the little lizard tucks its limbs close to its body and undulates through the sand--looking like a true swimmer. The sandfish uses a consistent wave pattern from head to tail that pushes its body against the sand and generates forward motion. This wave pattern optimizes speed and energy use.
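That head-to-tail wave is commonly modeled as a traveling sinusoid of body-joint angles; Goldman's group fits such waves to the x-ray data. A generic sketch, with amplitude, wavelength and frequency left as free parameters rather than measured sandfish values:

```python
import math

def joint_angles(n_joints, t, amp_deg=30.0, wavelength_joints=8.0, hz=1.0):
    """Traveling-wave gait: each joint runs the same sinusoid with a
    phase lag down the body, producing a head-to-tail undulation.
    Parameter values are generic placeholders, not fitted to sandfish."""
    return [amp_deg * math.sin(2 * math.pi * (hz * t - i / wavelength_joints))
            for i in range(n_joints)]

print([round(a, 1) for a in joint_angles(8, t=0.0)])
```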

In a more recent study involving a six-legged robot, the team used 3-D printing technology to make legs of different shapes and physical orientations, and learned that convex robot legs made in the shape of the letter "C" worked out best.

Developing 'terradynamics'

It may be tempting to regard the CRAB lab's unique robots as the end rather than the means of research. But the machines are first a way to develop and confirm hypotheses, Goldman says. The lab, which is funded in part by the National Science Foundation's (NSF) Physics of Living Systems and Dynamical Systems programs, is steadily identifying basic principles that will significantly advance understanding of how objects move on or in granular media.

"The idea is to begin to develop a terradynamics--equivalent to aero- and hydrodynamics--which will allow us to predict mobility of devices in these complex environments," Goldman says.

The lab has had recent success in terradynamics, publishing a paper in Science that describes a new approach to predicting how small-legged robots move on sand or other flowing materials. The approach uses the forces (such as drag) applied to independent elements of the robot legs to get a measure of the net force on a moving robot (or animal).
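In this approach, the leg is divided into small elements, the granular force on each element is looked up as a function of its depth, speed and orientation, and the net force is the sum. The force law below is a made-up stand-in for the empirically measured one:

```python
import math

def element_force(depth_m, speed_ms, attack_deg, k=40.0):
    """Stand-in force law for one leg element: force grows with depth
    and attack angle. Real terradynamics uses empirically measured
    granular drag data, not this toy expression."""
    return k * depth_m * speed_ms * math.sin(math.radians(attack_deg))

def net_force(elements):
    """Terradynamics-style superposition: sum independent element forces."""
    return sum(element_force(*e) for e in elements)

# Illustrative leg discretized into three elements: (depth, speed, attack).
leg = [(0.01, 0.2, 30), (0.02, 0.2, 45), (0.03, 0.2, 60)]
print(net_force(leg))
```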

"The lizard swimming in sand gives us a broad understanding behind all animals swimming in true fluids," Goldman says. "Analyzing sandfish turns out sufficiently simple we can use as a baseline to understand other swimmers."

What specific studies lie ahead for the busy Georgia Tech lab? In the near future, the team will test and refine theoretical models as they apply to legs and wheels thrusting into flowing material. They will also conduct experiments to learn more about wet sand versus dry. And third, they will look at the physics involved when teams of organisms, such as fire ants, move and dig within complex terrain.

Monday, June 24, 2013

SEVEN INITIATIVES FOR SUPPORTING WARFIGHTER AUTONOMY

FROM: U.S. DEPARTMENT OF DEFENSE

Cost-saving Pilot Programs to Support Warfighter Autonomy
By Terri Moon Cronk
American Forces Press Service

WASHINGTON, June 19, 2013 - A call from the Defense Department to industry and government for autonomous technology ideas that support the warfighter has been answered with seven initiatives.


Chosen from more than 50 submissions, the selected ideas will be tested in the Autonomy Research Pilot Initiative, officials said.

"We believe autonomy and autonomous systems will be very important for how we operate in the future," said Al Shaffer, acting assistant secretary of defense for research and engineering. Autonomous systems are capable of functioning with little or no human input or supervision.

"If we had better autonomous systems for route clearance in Afghanistan, we could offload a lot of the dangerous missions that humans undertake with autonomous systems, so we have to make a big push in autonomy," Shaffer said.

"The pilot research initiative's goal is to advance technologies that will result in autonomous systems that provide more capability to warfighters, lessen the cognitive load on operators and supervisors, and lower overall operational cost," explained Jennifer Elzea, a DOD spokeswoman.

"The potential cross-cutting advances of this initiative in multiple domains provide an exciting prospect for interoperability among the military services, and potentially [in] meeting future acquisitions requirements," she said. "The seven projects are at the fundamental cutting edge of the science of autonomy. The projects also integrate several scientific disciplines [such as] neurology [and] mimetics."

The seven projects are not looking at autonomous weapons systems, but rather are investigating autonomous systems for potential capabilities such as sensing and coordination among systems, Elzea noted.

The projects focus on cost savings to DOD, critical in a time of budget cuts, Shaffer said.

The program for the initiatives is estimated to cost about $45 million over a three-year period, which is not considered a lot of money for a government research program, DOD officials said.

"We are trying to -- especially as we go through this tough budget period -- incentivize our younger work force," Shaffer said. "Scientists work to solve problems, and what we are doing with this project is we've challenged our in-house researchers to come up with topics that will help us better understand how to do autonomous systems."

When the pilot initiatives are completed, DOD will have the intellectual property to generate a prototype or to provide to industry to produce the systems, officials said.


The seven initiatives are:
-- Exploiting Priming Effects in Autonomous Cognitive Systems: Develops machine perception that is comparable to the way a human perceives an environment. (Navy Center for Applied Research in Artificial Intelligence, Army Research Laboratory)

-- Autonomous Squad Member: Integrates machine semantic understanding, reasoning and perception into a ground robotic system. (Army Research Laboratory, Naval Research Laboratory, Navy Center for Applied Research in Artificial Intelligence)

-- Autonomy for Adaptive Collaborative Sensing: Develops intelligent intelligence, surveillance and reconnaissance (ISR) capability that enables sensing platforms to find and track targets. (Air Force Research Laboratory, Army Research Laboratory, Naval Research Laboratory)

-- Realizing Autonomy via Intelligent Adaptive Hybrid Control: Develops a flexible unmanned aerial vehicle operator interface, enabling the operator to "call a play" or manually control the system. (Air Force Research Laboratory, Space and Naval Warfare Systems Command, Naval Research Laboratory, Army Research Laboratory)

-- Autonomy for Air Combat Missions, Mixed Human/Unmanned Aerial Vehicle Teams: Develops goal-directed reasoning, machine learning and operator interaction techniques to enable management of multiple teamed UAVs. (Air Force Research Laboratory, Naval Research Laboratory, Naval Air Warfare Center, Army Research Laboratory)

-- A Privileged Sensing Network -- Revolutionizing Human-Autonomy Integration: Develops integrated human sensing capability to enable the human-machine team. (Army Research Laboratory, Army Tank Automotive Research Center, Air Force Research Laboratory)

-- Autonomous Collective Defeat of Hard and Deeply Buried Targets: Develops small UAV teaming algorithms to enable systems to autonomously search a cave. (Air Force Research Laboratory, Army Research Laboratory, Defense Threat Reduction Agency)

Saturday, April 20, 2013

PENTAGON OFFICIAL SAYS BUDGET CUTS LIMIT RESEARCH AND DEVELOPMENT



Credit: U.S. Air Force. Launch of a GPS satellite.
FROM: U.S. DEPARTMENT OF DEFENSE
Budget Reductions Limit Science, Tech Development, Official Says

By Army Sgt. 1st Class Tyrone C. Marshall Jr.
American Forces Press Service

WASHINGTON, April 18, 2013 - The Defense Department's research and engineering enterprise faces the same challenges as the rest of the department due to limitations imposed by sequestration spending cuts, a senior Pentagon official said today.

Alan R. Shaffer, acting assistant secretary of defense for research and engineering, was joined by Arati Prabhakar, director of the Defense Advanced Research Projects Agency, before the Senate Armed Services Committee's subcommittee on emerging threats and capabilities to talk about their part of the fiscal year 2014 defense budget request.

Shaffer said he represents scientists and engineers from DOD, a group that "conceives, develops and matures systems" early in the acquisition process.

"They work with multiple partners to provide the unmatched operational advantage employed by our services' men and women," he said. "As we wind down in Afghanistan, the national security and budget environments are changing."

The president's fiscal 2014 budget request for science and technology is $12 billion -- a nominal increase from fiscal 2013's $11.9 billion, Shaffer said, noting that it isn't possible to discuss the budget without addressing the impact of sequestration, "which takes 9 percent from every single program" in research, development, testing and evaluation.

"This reduction will delay or terminate some efforts," he said. "We will reduce awards. For instance, we will reduce university grants by $200 million this year alone."

Potentially, he added, the number of new SMART Scholarships -- an acronym that stands for science, mathematics and research for transformation -- could drop to zero, and sequestration cuts will impose other limitations on research and engineering.

"Because of the way the sequester was implemented, we will be very limited in hiring new scientists this year, and the [next] several years," he said.

Each of these actions, Shaffer said, will have a negative long-term impact on the department and on national security.

"The president and secretary of defense depend upon us to make key contributions to the defense of our nation," he said. "[Science and technology] should do three things for national security."

Shaffer said science and technology should mitigate current and emerging threats, and that the budget should build affordability into current and future weapons systems and enable them to operate affordably.

Also necessary, he said, is developing "technology surprise" to prevent potential adversaries from threatening the United States.

"In summary, the department's research and engineering program is faced with the same challenges as the rest of the DOD and the nation," he said, "but our people are performing."

Prabhakar focused on DARPA's goals in her testimony.

"[Our] objective is a new generation of technology for national security, and to realize this new set of military capabilities and systems is going to take a lot of organizations and people," she said.

"But DARPA's role in that is to make the pivotal early investments that change what's possible," she added. "[This] really lets us take big steps forward in our capabilities for the future."

The director said DARPA is investing in a host of areas, including building a future where warfighters can use cyber as a tactical tool that's fully integrated into the kinetic fight.

"And we're building a new generation of electronic warfare that leapfrogs what others around the world are able to do with widely, globally available semiconductor technology," she said.

"It means we're investing in new technologies for position, navigation and timing, so that our people and our platforms are not critically reliant as they are today on GPS," Prabhakar said.

The director also noted DARPA is investing in a new generation of space systems and robotics, advanced weapon systems, new platforms, and a new "foundational" infrastructure of emerging technologies in software, electronics and materials science.

The aim, Prabhakar said, is to create real and powerful options for future commanders and leaders against whatever threats the nation faces in the years ahead.

"And that work is the driver behind all of our programs," she said. "It's the reason that the people at DARPA run to work every morning with their hair on fire. They know that they're part of a mission that really does matter for our future security as a country.

Friday, April 12, 2013

THE ASTEROID RETRIEVAL INITIATIVE

FROM: NASA
Animation: Asteroid Retrieval Initiative


NASA's FY2014 budget proposal includes a plan to robotically capture a small near-Earth asteroid and redirect it safely to a stable lunar orbit where astronauts can visit and explore it. The proposed mission would combine the efforts of three NASA mission directorates: Human Exploration and Operations, Science, and Space Technology.

Sunday, April 8, 2012

LUCAS THE ROBOT WITH THE HUMAN FACE

FROM: DEPARTMENT OF DEFENSE, ARMED WITH SCIENCE
Dr. Greg Trafton (left) and Lucas the Robot at the Laboratory for Autonomous Systems Research (LASR)


Admittedly, the initial idea of a robot with a face conjures up memories of every single SciFi robot movie I've ever seen. Usually involving humans fleeing in terror as the autonomous voice screams "kill, kill" while shooting rockets out of a gun-arm. Or overly negative and depressed, like Marvin the Paranoid Android. Frankly, I'd take my chances with the latter. He'd be a downer, but at least he has no plans for world domination.

Despite my preconceived notions of the robotic overlord race that is sure to enslave (or depress) us all, my experience at the Navy’s new robotics lab was a little less dramatic.  What I discovered was not a legion of soldier robots, but a team of highly trained scientists prepared to explain how they’re working toward a goal of integrating robotics into military life.

The brand-new Laboratory for Autonomous Systems Research (LASR), located at the Naval Research Laboratory (NRL) in Washington, D.C., is spearheading efforts to combine human interaction with robotic skill and capability. The goal is to take the best of both worlds and find a way to make missions easier and more effective for service members. This means everything from locating IEDs to fighting fires.

So how are they doing that? It all starts in the lab, of course.

This complicated and scientific process involves running experiments on autonomous systems in different situations and different environments.  Luckily, LASR is equipped with different environmental rooms designed to provide just that.  Scientists who work at the lab can step into the desert for a quick sandstorm, then walk across the hall to the rainforest to run experiments.  All of this without having to set foot outside the Navy’s new robotics laboratory.

“It’s the first time that we have, under a single roof, a laboratory that captures all the domains in which our sailors, Marines and fellow DOD service members operate,” said Rear Adm. Matthew Klunder, chief of naval research. “Advancing robotics and autonomy are top priorities for the Office of Naval Research. We want to reduce the time it takes to deliver capability to our warfighters performing critical missions. This innovative facility bridges the gap between traditional laboratory research and in-the-field experimentation—saving us time and money.”

Several of the projects going on in this lab are working toward creating viable solutions for problems service members might actually face. One of these is Damage Control for the 21st Century -- a program to develop firefighting robots for use aboard Navy ships.

Meet Lucas.

Lucas is a computerized cognitive model robot. This means he's designed to act the way a person does and react the way a person might. He's built with a trifecta of skills: mobile, dexterous and social capabilities. This means he's able to assume people think differently (e.g., they don't always come to the same conclusions), and he understands human limitations.

This concept is known as "theory of mind," as Dr. Greg Trafton explained. Trafton, a roboticist at the Navy Center for Applied Research in Artificial Intelligence in NRL's Information Technology Division, said Lucas was created to appear more human than robot so he could solve human problems in a more practical manner. Basically, Trafton is working to create robots that think.

Lucas "thinks" using computational theories to figure out what a person might be thinking in certain situations. Lucas -- and his female counterpart, Octavia -- can see and understand words, expressions, even hand gestures.
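As a loose code illustration of that idea -- a toy sketch, not NRL's actual cognitive architecture, and every name in it is hypothetical -- a theory-of-mind system keeps a separate record of what each person has observed, so its prediction of what someone believes can differ from what the robot itself knows:

    # Toy belief tracker: the robot's knowledge and each person's
    # observed history are stored separately, so predicted beliefs
    # can diverge from ground truth. All names are hypothetical.
    world = {"fire_extinguisher": "locker_A"}     # what the robot knows

    class AgentModel:
        """The robot's model of what one person has seen."""
        def __init__(self, name):
            self.name = name
            self.observations = {}

        def observe(self, obj, location):
            self.observations[obj] = location

        def believed_location(self, obj):
            # Falls back to "unknown" if the person never saw the object.
            return self.observations.get(obj, "unknown")

    sailor = AgentModel("sailor")
    sailor.observe("fire_extinguisher", "locker_A")

    # The extinguisher is moved while the sailor is away; the robot
    # sees the move, the sailor does not.
    world["fire_extinguisher"] = "locker_B"

    print(world["fire_extinguisher"])                      # locker_B
    print(sailor.believed_location("fire_extinguisher"))   # locker_A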
