Seminar on Visual Representations for Navigation and Object Detection

DATE: September 18, 2019

TIME: 11-noon

LOCATION: Metron Office in Reston (1818 Library Street, Reston, VA; 6th floor)



Title: Visual Representations for Navigation and Object Detection

Jana Kosecka, Ph.D., George Mason University


Abstract: Advancements in reliable navigation and mapping rest to a large extent on robust, efficient, and scalable understanding of the surrounding environment. The successes of recent years have been propelled by the use of machine learning techniques for capturing the geometry and semantics of the environment from video and range sensors. Professor Kosecka will discuss approaches to object detection, pose recovery, 3D reconstruction, and detailed semantic parsing using deep convolutional neural networks (CNNs). While data-driven deep learning approaches have fueled rapid progress in object category recognition by exploiting large amounts of labelled data, extending this learning paradigm to previously unseen objects comes with challenges. She will discuss the role of active self-supervision provided by ego-motion for learning object detectors from unlabeled data. These powerful spatial and semantic representations can then be jointly optimized with policies for elementary navigation tasks. The presented explorations open interesting avenues for the control of embodied physical agents and general strategies for the design and development of general-purpose autonomous systems.

Bio: Jana Kosecka is a professor in the Department of Computer Science, George Mason University. She obtained her Ph.D. in Computer Science from the University of Pennsylvania. Following her Ph.D., she was a postdoctoral fellow in the EECS Department at the University of California, Berkeley. Professor Kosecka is a recipient of the Marr Prize and the National Science Foundation CAREER Award. She is chair of the IEEE Technical Committee on Robot Perception, an Associate Editor of IEEE Robotics and Automation Letters and the International Journal of Computer Vision, and a former editor of IEEE Transactions on Pattern Analysis and Machine Intelligence. She has held visiting positions at Stanford University, Google, and Nokia Research. Professor Kosecka is a co-author of the monograph An Invitation to 3-D Vision: From Images to Geometric Models. Her general research interests are in computer vision and robotics. In particular, she is interested in 'seeing' systems engaged in autonomous tasks, the acquisition of static and dynamic models of environments by means of visual sensing, and human-computer interaction.


Mobile Data Collection in an Aquatic Environment: Cyber Maritime Cycles for Distributed Autonomy

Speaker: Dr. Fumin Zhang, Georgia Institute of Technology
Date: April 26, 2019, 10:30 am-11:30 am
Location: School of Electrical and Computer Engineering, room ENGR 4201

There is a perceivable trend for robots to serve as networked mobile sensing platforms that are able to collect data in aquatic environments in unprecedented ways. We argue that the effective transformation between Eulerian and Lagrangian data streams represents a fundamental principle underlying many ongoing research efforts. Timely transformation of data streams is the major challenge in constructing the cyber cycles needed by marine autonomy. Data-driven machine learning methods have great potential, but are constrained by the special difficulties of underwater communication. A distributed autonomy structure that is able to cope with this limited information sharing is envisioned as the future. This challenge can only be addressed by interdisciplinary efforts from researchers in underwater acoustics, underwater networking, and marine robotics. This talk will discuss recent advancements toward integrating marine robotic platforms with underwater communication and networking technology. In particular, we will address the influences of both environmental motion (caused by ocean flow) and controllable platform motion on the transformation of the data streams. Even though such motions have been known to degrade the performance of acoustic communication and networking, the quantitative relationships have yet to be established, calling for tremendous efforts in theoretical analysis, simulation, and experimental study. One of our approaches, named motion tomography (MT), develops generic environmental models (GEMs) that combine computational ocean models with real-time data streams collected by mobile sensing platforms to provide high-resolution predictions of ocean currents in a small spatial area around the mobile platforms. With better-known environmental motion, the performance of acoustic networking can be better analyzed. This will be demonstrated through lab-based experiments leveraging micro autonomous vehicles equipped with acoustic modems. Our efforts also indicate that future research requires open and cost-effective experimental infrastructure that integrates marine robotic platforms, underwater acoustic devices, and underwater networking equipment.

Dr. Fumin Zhang is a Professor in the School of Electrical and Computer Engineering at the Georgia Institute of Technology. He received the B.S. and M.S. degrees from Tsinghua University, Beijing, China, in 1995 and 1998, respectively. He received a Ph.D. in Electrical Engineering from the University of Maryland, College Park, in 2004, and held a postdoctoral position at Princeton University from 2004 to 2007. His research interests include mobile sensor networks, maritime robotics, control systems, and theoretical foundations for cyber-physical systems. He received the NSF CAREER Award in September 2009 and the ONR Young Investigator Program Award in April 2010. He currently serves as co-chair of the IEEE RAS Technical Committee on Marine Robotics and as an associate editor for the IEEE Journal of Oceanic Engineering, IEEE Robotics and Automation Letters, IEEE Transactions on Automatic Control, and IEEE Transactions on Control of Networked Systems. He also serves as deputy editor-in-chief of the Cyber-Physical Systems journal.