Research


Robotic Telepresence

During Summer 2010, we conducted a series of user studies of two telepresence robots (Anybots' QB and VGo Communications' VGo) in an office environment at Google in Mountain View, CA. One study focused on virtual teams in which remote teammates (n=6) used a telepresence robot to attend their regularly scheduled team meetings. We found that people who had once worked in the same building as their teammates and then moved to a different location had the best experiences using telepresence robots to recreate that closeness with their teams. Read more >>

Autonomous Robots and Trust

Research in the field of human-automation interaction (HAI) has shown that trust is a key factor influencing an operator's interaction with an autonomous system. Researchers have also found that proper calibration of trust is critical to the safe operation of an autonomous system: too much trust in the system can lead to misuse of automation, while too little trust leads to disuse. In dynamic systems, operators need a control allocation strategy that optimizes performance, so mis-calibrated trust can result in inefficient operation or even catastrophic failures. Read more >>
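As a rough illustration of the calibration idea (not our experimental setup), the sketch below models control allocation driven by a scalar trust estimate. The function names, the delegation threshold, and the reliability measure are all hypothetical and chosen only for this example.

```python
# Illustrative sketch (not our actual system): trust-calibrated control allocation.
# Allocation should track the automation's true reliability; over-trust delegates
# too often (misuse), under-trust delegates too rarely (disuse).

def allocate_control(operator_trust: float, automation_reliability: float,
                     delegation_threshold: float = 0.7) -> str:
    """Decide who controls the robot for the next task segment.

    operator_trust          -- operator's subjective trust in [0, 1]
    automation_reliability  -- automation's actual success rate in [0, 1]
    """
    if operator_trust >= delegation_threshold:
        return "automation"   # delegation; risky if reliability is actually low (misuse)
    return "manual"           # manual control; wasteful if reliability is high (disuse)


def calibration_error(operator_trust: float, automation_reliability: float) -> float:
    """Simple calibration measure: 0 means trust exactly tracks reliability."""
    return abs(operator_trust - automation_reliability)
```

When the operator's trust tracks the automation's true reliability, the allocation rule behaves appropriately; when the two diverge, the same rule produces misuse or disuse.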


Multi-touch Technologies for Human-Robot Interaction

The Robotics Lab's research in multi-touch technologies spans a variety of hardware, including the Microsoft Surface, 3M Multi-touch Display, Circle Twelve DiamondTouch, and the Apple iPad. To date, our research has focused on the control of individual and multiple robots for search and rescue purposes, as well as the development of the DREAM (Dynamically Resizing Ergonomic and Multi-touch) Controller. Read more >>

Neural Networks and Mild Traumatic Brain Injury

We have just started an interdisciplinary project in cooperation with UML neurobiology researchers. In the course of this work, our lab will develop artificial neural network (ANN) models of brain damage, drawing inspiration from biological mechanisms of damage and recovery in an attempt to optimize neural network training and minimize the computational resources required to use ANNs. Read more >>
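To make the idea of an ANN model of brain damage concrete, here is a minimal, hypothetical sketch (not the project's actual model): it "lesions" a small feed-forward network by zeroing a random fraction of its weights and measures how much the outputs change. All sizes and parameters are illustrative.

```python
# Illustrative sketch: simulate diffuse damage in a tiny feed-forward network
# by zeroing a random fraction of its weights, then compare outputs.
import numpy as np

rng = np.random.default_rng(0)

# A small 2-layer network with random weights (no training, for illustration only).
W1 = rng.normal(size=(16, 8))
W2 = rng.normal(size=(8, 4))

def forward(x, w1, w2):
    h = np.tanh(x @ w1)        # hidden layer
    return np.tanh(h @ w2)     # output layer

def lesion(w, fraction):
    """Zero a random fraction of the weights (crude stand-in for damage)."""
    mask = rng.random(w.shape) >= fraction
    return w * mask

x = rng.normal(size=(32, 16))                       # a batch of dummy inputs
healthy = forward(x, W1, W2)
damaged = forward(x, lesion(W1, 0.3), lesion(W2, 0.3))
print("mean output change after 30% lesion:", np.abs(healthy - damaged).mean())
```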


SUBTLE Multi-University Research Initiative

SUBTLE stands for Situation Understanding Bot Through Language and Environment. The goal of the SUBTLE Multi-University Research Initiative (MURI) is to build a robot that can accept and understand spoken commands and report its situation and past activities to a human user in English. The project is being developed in coordination with research teams at several other universities. http://subtlebot.org/

Our team is focused on providing a platform for the linguistic technologies that embodies the system in the real world and allows the conversation between the user and the robot to be grounded in the robot's perception of its surroundings. This makes for a richer development environment, with the complexity of the real world rather than the somewhat impoverished world of simulation. Read more >>
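As a loose illustration of what grounding a command in perception can mean (this is not the SUBTLE system's implementation), the sketch below matches attribute strings of detected objects against a spoken command. The object list and the ground function are hypothetical.

```python
# Illustrative sketch: resolve a referring expression against the robot's
# perceived objects by crude substring matching of object attributes.

detected_objects = [
    {"id": 1, "type": "box",   "color": "red",  "location": "near the door"},
    {"id": 2, "type": "chair", "color": "blue", "location": "by the window"},
]

def ground(command, objects):
    """Return the detected object whose attributes best match the command, or None."""
    text = command.lower()
    def score(obj):
        # Count attribute strings that literally appear in the command.
        return sum(1 for v in obj.values() if isinstance(v, str) and v in text)
    best = max(objects, key=score)
    return best if score(best) > 0 else None

print(ground("Go to the red box near the door", detected_objects))
```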