Human-Robot Teaming Using Augmented Reality and Gesture Control

Human-robot teaming offers great potential because it combines the strengths of heterogeneous agents. However, one of the critical challenges in realizing an effective human-robot team is efficient information exchange, both from the human to the robot and from the robot to the human.

This work involves an augmented reality-enabled, gesture-based system that supports intuitive human-robot teaming through improved information exchange. Our system requires no external instrumentation aside from human-wearable devices and shows promise for real-world applicability in service-oriented missions.


Human Gaze-Driven Spatial Tasking of an Autonomous MAV

We show how a set of glasses equipped with a gaze tracker, a camera, and an Inertial Measurement Unit (IMU) can be used to (a) estimate the relative position of the human with respect to a quadrotor, and (b) decouple the gaze direction from the head orientation, which allows the human to spatially task (i.e., send new 3D navigation waypoints to) the robot in an uninstrumented environment. Featured in IEEE Spectrum.
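As a rough illustration of the spatial-tasking idea, the sketch below (my own simplification, not the published pipeline) turns a gaze ray into a ground-plane waypoint in the robot's frame. It assumes the human's position relative to the quadrotor, the head orientation from the IMU, and the gaze direction from the gaze tracker are already available; all names and frames are illustrative.

```python
"""Minimal sketch: intersect a gaze ray with the ground plane to get a 3D
waypoint in the robot's frame. Inputs are assumed to come from the glasses
(gaze tracker + IMU) and the relative-position estimate; names are illustrative."""
import numpy as np

def gaze_waypoint(p_human_robot, R_head_robot, gaze_dir_head, ground_z=0.0):
    """p_human_robot : (3,) human (eye) position in the robot frame [m]
    R_head_robot  : (3, 3) rotation taking head-frame vectors into the robot frame
    gaze_dir_head : (3,) gaze direction in the head frame (need not be unit length)
    Returns a 3D waypoint on the plane z = ground_z, or None if the gaze ray
    never reaches it (e.g., the person is looking up)."""
    p = np.asarray(p_human_robot, dtype=float)
    d = R_head_robot @ np.asarray(gaze_dir_head, dtype=float)  # gaze ray in robot frame
    if abs(d[2]) < 1e-6:             # ray parallel to the ground plane
        return None
    t = (ground_z - p[2]) / d[2]     # ray parameter at the plane
    if t <= 0:                       # plane is behind the viewer
        return None
    return p + t * d

# Example: eyes ~1.7 m high, 2 m in front of the robot, gazing forward and down.
R = np.eye(3)                        # head frame aligned with robot frame for simplicity
print(gaze_waypoint([2.0, 0.0, 1.7], R, [1.0, 0.0, -0.5]))
# -> [5.4 0.  0. ], a ground waypoint about 5.4 m in front of the robot
```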


Augmented Reality to Enable Human-Robot Teaming in Field Environments

This research focuses on using augmented reality to communicate metric and symbolic information between robotic and human teammates in unstructured, uninstrumented environments to improve team performance through shared understanding.

Understanding teammate position enables cooperative navigation in a search task. Example of human navigation to targets in an indoor environment. (a)-(d) show a progression as the human navigates through the interior of a building to a target detected by the robot, using a navigation path and target location generated by the robot. The map is generated from a 2D laser scan of the environment. The human pose is displayed as a red arrow. Targets are shown as spheres: green for targets the human has reached, red for unreached targets. The autonomously generated path from the human teammate’s location to the next target is shown in cyan. (e)-(h) show the human’s AR viewpoint at the time instances corresponding to (a)-(d), respectively, with path guidance shown in cyan and the target shown as a red sphere.
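The core geometric step behind this kind of AR guidance can be sketched as follows. This is a simplified illustration, not the fielded code: it assumes the human's 2D pose in the shared map is already estimated and simply re-expresses the robot-generated, map-frame path in the human's local frame for rendering.

```python
"""Minimal sketch: express a map-frame navigation path relative to the human
so it can be drawn in the AR view. Pose estimation and rendering are assumed
to happen elsewhere; all names here are illustrative."""
import numpy as np

def path_in_human_frame(path_map, human_xy, human_yaw):
    """path_map: (N, 2) waypoints in the map frame; human_xy: (2,); human_yaw: rad.
    Returns (N, 2) waypoints relative to the human (x forward, y left)."""
    c, s = np.cos(human_yaw), np.sin(human_yaw)
    R_map_to_human = np.array([[ c, s],
                               [-s, c]])   # inverse of the human's heading rotation
    rel = np.asarray(path_map, dtype=float) - np.asarray(human_xy, dtype=float)
    return rel @ R_map_to_human.T

# Example: a path heading "north" in the map, viewed by a human facing north.
path = np.array([[1.0, 2.0], [1.0, 4.0], [1.0, 6.0]])
print(path_in_human_frame(path, human_xy=[1.0, 0.0], human_yaw=np.pi / 2))
# -> [[2. 0.] [4. 0.] [6. 0.]]: waypoints straight ahead of the human
```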

Human Shaping of Autonomously Generated Surveillance Solutions

For this research I created an integrated human-autonomous planning system that explores the space of surveillance solutions based on limited human interactions, maximizing both quantitative task performance and qualitative operator satisfaction.

The problem of resource-constrained surveillance is to find a set of viewpoints v_j that maximizes the expected target-detection rate, based on the sensor footprints F(v_j), such that a mobile robot can drive a path visiting all viewpoints within a cost budget B. The contribution is a novel formulation of, and approach to, this problem in a human-robot teaming scenario, in which a human interacts with the robotic system by adjusting its prior belief on target locations (e.g., the cloud) to achieve high-performing information-gathering tours.
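To make the formulation concrete, here is a toy sketch (not the planner used in this work) of greedy viewpoint selection under a budget: viewpoints are chosen by expected detection gained per unit cost, and the human-shaped prior belief directly changes which viewpoints look valuable. The grid, footprints, and cost model are illustrative assumptions.

```python
"""Toy sketch of resource-constrained surveillance: greedily pick viewpoints
whose sensor footprints F(v_j) cover the most remaining prior belief per unit
travel cost, until the budget B is spent. Purely illustrative."""
import numpy as np

def greedy_tour(belief, footprints, costs, budget):
    """belief: (H, W) prior probability of a target per cell (human-adjustable).
    footprints: list of boolean (H, W) masks, one per candidate viewpoint.
    costs: cost of adding viewpoint j to the tour (simplified: independent).
    Returns indices of the chosen viewpoints."""
    remaining = belief.astype(float).copy()
    chosen, spent = [], 0.0
    while True:
        best_j, best_ratio = None, 0.0
        for j, mask in enumerate(footprints):
            if j in chosen or spent + costs[j] > budget:
                continue
            gain = remaining[mask].sum()        # expected detection gained
            ratio = gain / max(costs[j], 1e-9)  # benefit per unit cost
            if ratio > best_ratio:
                best_j, best_ratio = j, ratio
        if best_j is None:
            return chosen
        chosen.append(best_j)
        spent += costs[best_j]
        remaining[footprints[best_j]] = 0.0     # covered cells yield no more gain

# Example: the operator "shapes" the prior by inflating belief in the top-left area.
belief = np.full((10, 10), 0.01)
belief[:4, :4] = 0.2                            # operator-indicated region
footprints = [np.zeros((10, 10), bool) for _ in range(3)]
footprints[0][:5, :5] = True; footprints[1][5:, 5:] = True; footprints[2][:5, 5:] = True
print(greedy_tour(belief, footprints, costs=[3.0, 3.0, 3.0], budget=6.0))  # -> [0, 1]
```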
Illustrative results showing surveillance tour shaping on a real robot, visualized in RViz: the occupancy grid background is overlaid with the generated paths, and the top left shows a video overlay of the live robot camera view. (a) shows a surveillance tour generated with zero interactions; (b) and (c) show one and two interactions, respectively (illustrated by the red areas). Inset images are live video streams on which AprilTag target detection is performed.

Intelligent Robot and Augmented Reality Instruction

Interacting with Rosie, the Meka humanoid robot.

This research involved the creation of an intelligent, autonomous robot instructor and a context-aware augmented reality system to teach students with intellectual disabilities life skills.

Rosie, ready to teach.
Context-aware augmented reality with a Google Glass.

Intelligent Safety System for Surgical Instrument Processing by Robots

Giving a demo of the surgical instrument sorting task to Damian Kulash.

At GE Global Research, I constructed a cognition-based system to enhance the safety of humans working alongside robots in a surgical instrument processing environment.

The surgical instrument sorting task.

I implemented intelligent sorting of surgical instruments on a Rethink Robotics Baxter Research Robot, and leveraged the perception, reasoning, and action capabilities of the robots in the system to detect humans, evaluate risk, and select appropriate actions to ensure the safety of human co-workers.
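The detect, evaluate risk, and act loop can be illustrated with a minimal sketch; the distance thresholds and three-level response below are my own assumptions chosen for clarity, not the deployed logic.

```python
"""Illustrative sketch of a risk-based safety response for the sorting task:
map the perceived human situation to an action. Thresholds and actions are
assumed values, not the actual system's parameters."""
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Action(Enum):
    CONTINUE = "continue sorting at normal speed"
    SLOW = "reduce arm speed near the human"
    STOP = "halt motion until the human clears the workspace"

@dataclass
class HumanDetection:
    distance_m: float     # distance from the detected human to the robot workspace
    in_workspace: bool    # True if the human has entered the shared sorting area

def select_action(detection: Optional[HumanDetection]) -> Action:
    """Evaluate risk from the current perception and choose a safety response."""
    if detection is None:                                     # no human perceived
        return Action.CONTINUE
    if detection.in_workspace or detection.distance_m < 0.5:
        return Action.STOP                                    # imminent-contact risk
    if detection.distance_m < 1.5:
        return Action.SLOW                                    # elevated risk: human nearby
    return Action.CONTINUE

print(select_action(HumanDetection(distance_m=1.2, in_workspace=False)))  # Action.SLOW
```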


Automated Cooperation in Human-Robot Teams

UAV.

I used an automated task solution synthesis approach that algorithmically identifies periods during which a team of less-than-fully-capable robots benefits from tightly coupled, coordinated, cooperative behavior.

Environmentally Dependent Information concept.

I tested two hypotheses: 1) that a team's performance can be increased by cooperating during certain specific periods of a mission, and 2) that these periods can be identified automatically and algorithmically. I also demonstrated how identification of cooperative periods can be performed both off-line, prior to the mission, and reactively, during mission execution.
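The flavor of this identification step can be sketched as follows. This is a deliberately simplified illustration in the spirit of capability-based reasoning, not an implementation of ASyMTRe; the capability and step names are hypothetical.

```python
"""Simplified sketch: mark a mission step as a cooperative period when a robot
cannot produce the information the step requires from its own capabilities,
but a teammate can supply it. Names are hypothetical."""

def cooperative_periods(steps, robot_caps, teammate_caps):
    """steps: list of (step_name, required_info); *_caps: sets of producible info.
    Returns the step names during which tightly coupled cooperation is needed."""
    periods = []
    for name, required in steps:
        missing = set(required) - robot_caps
        if missing and missing <= teammate_caps:   # a teammate can fill the gap
            periods.append(name)
    return periods

# Example: a ground robot without global localization needs its global pose
# while localizing targets; the UAV can observe it from overhead and supply it.
mission = [("transit_to_area", {"relative_motion"}),
           ("localize_targets", {"relative_motion", "global_pose"}),
           ("return_to_base", {"relative_motion"})]
ground_robot = {"relative_motion", "target_bearing"}
uav = {"global_pose", "overhead_imagery"}
print(cooperative_periods(mission, ground_robot, uav))   # -> ['localize_targets']
```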

Reactively cooperating to localize targets.

I validated these premises in a real-world experiment using a human-piloted Unmanned Aerial Vehicle (UAV) and an autonomous mobile robot. For this experiment I constructed the UAV and used an off-the-shelf mobile robot. To identify the cooperative periods I used the ASyMTRe task solution synthesis system, and for control tasks such as navigation and path planning I used the Player robot server.

Example target localization results; reactive (right) vs pre-planned cooperation (left).

My results show that teams employing cooperative behaviors during algorithmically identified cooperative periods outperform non-cooperative teams in a target localization task. I also presented results showing the increased time cost of cooperative behaviors, comparing the two approaches that generate cooperative periods prior to and during mission execution.


Large-Scale Intruder Detection in Heterogeneous Swarm Robotics for Search Applications

SDR Project AmigoBots.

The goal of this DARPA project was to demonstrate large numbers (100+) of physical heterogeneous robots cooperating to solve indoor search applications. This project was a joint effort between Science Applications International Corporation (SAIC), The University of Tennessee, Telcordia Technologies, and the University of Southern California.

Acoustic sensor net concept.

This project developed and utilized a number of novel collaborative control algorithms to enable the robot team to explore an unknown building (one floor), find objects of interest, and “protect” those objects over a 24-hour period, autonomously returning for battery recharging when necessary. All robot actions were highly autonomous and were monitored by a human operator using a sophisticated user interface at the building entrance.

Acoustic sensor net with intruder intercept simulation.

The “protection” part of this project involved autonomously deploying a network of AmigoBots into the explored environment and using that network to monitor for “intruders.” For this purpose, I constructed a distributed acoustic sensor network using the simple microphones on each AmigoBot’s onboard computer; these AmigoBots made up roughly 70 of our robots.
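A minimal sketch of the kind of per-robot detector such a network could run appears below. It is an assumption for illustration, not the fielded SDR code: each robot thresholds the frame energy of its microphone stream against a level calibrated from ambient noise and reports alarms tagged with its ID.

```python
"""Illustrative per-robot acoustic intruder detector: chunk the microphone
stream into frames, compare frame energy against an ambient-calibrated
threshold, and report alarms with the robot's ID. Assumed, not the SDR code."""
import numpy as np

def frame_energies(samples, frame_len=1024):
    """Mean-square energy of consecutive, non-overlapping audio frames."""
    n = (len(samples) // frame_len) * frame_len
    frames = np.asarray(samples[:n], dtype=float).reshape(-1, frame_len)
    return (frames ** 2).mean(axis=1)

def calibrate_threshold(ambient, k=4.0):
    """Set the alarm threshold k standard deviations above ambient frame energy."""
    energies = frame_energies(ambient)
    return energies.mean() + k * energies.std()

def detect_intruder(samples, threshold, robot_id):
    """Return (robot_id, frame_index) for each frame exceeding the threshold."""
    hits = np.nonzero(frame_energies(samples) > threshold)[0]
    return [(robot_id, int(i)) for i in hits]

# Example with synthetic audio: quiet background, then a loud footstep-like burst.
rng = np.random.default_rng(0)
ambient = rng.normal(0, 0.01, 16000)
threshold = calibrate_threshold(ambient)
signal = np.concatenate([rng.normal(0, 0.01, 8192), rng.normal(0, 0.2, 2048)])
print(detect_intruder(signal, threshold, robot_id="amigo_42"))  # frames 8 and 9 alarm
```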

Acoustic sensor net test.

The ultimate deliverable for this project was a live, week-long demonstration at Ft. A.P. Hill, officiated by the DARPA project sponsors, where the acoustic sensor network achieved 100% intruder detection without false positives.

Acoustic sensor net of AmigoBots being deployed at Ft. A.P. Hill.

The SDR team.