Andrew Davison, Imperial College London

The History and Future of Visual SLAM

Over the past 25 years, SLAM (Simultaneous Localisation and Mapping) has developed from a theoretical research topic into a crucial enabling technology for augmented reality, robotics and mobile devices.
Standard cameras and other visual sensors have become increasingly important because they offer the most flexible and accurate sensing for localisation and mapping while still being low-cost and compact. Progress has most usefully been measured by advances in real-time systems, almost always supported by real-time demos and open-source software releases, and I will review some of the most important systems which have driven the field forward. We are now at the point where SLAM technology is entering exciting real products, while in research labs we continue to work on new challenges. Future SLAM systems for AR and beyond must deliver fully dense, semantically aware and lifelong mapping while retaining always-on, low-power operation, and there are still many problems to be solved.


Andrew Davison

Andrew Davison is Professor of Robot Vision and Director of the Dyson Robotics Laboratory at Imperial College London. He received a B.A. in Physics and a D.Phil. in Computer Vision from the University of Oxford in 1994 and 1998, respectively. During his doctorate in Oxford's Robotics Research Group he developed one of the first robot SLAM systems using vision. He spent two years as a post-doc at AIST, Japan, where he continued to work on visual robot navigation. In 2000 he returned to the University of Oxford, and as an EPSRC Advanced Research Fellow from 2002 he developed the well-known MonoSLAM algorithm for real-time SLAM with a single camera. He joined Imperial College London as a Lecturer in 2005, held an ERC Starting Grant from 2008 to 2013, and was promoted to Professor in 2012. His Robot Vision Research Group continues to focus on advancing the basic technology of real-time localisation and mapping using vision, publishing advances in particular on real-time dense reconstruction and tracking, high-speed vision and tracking, object-level mapping, manipulation, and the use of novel sensing and processing in vision. He maintains a deep interest in exploring the limits of computational efficiency in real-time vision problems. He worked with Dyson for over 10 years to design the breakthrough visual SLAM system in Dyson's first robot product, the 360 Eye robot vacuum cleaner. In 2014 he became the founding Director of the new Dyson Robotics Laboratory at Imperial College, a lab working on the applications of computer vision to real-world domestic robots, where there is much potential to open up new product categories.


Steven Feiner, Columbia University

Mixing It Up, Mixing It Down

Our field is approaching the 50th anniversary of its first publication. After decades without a name, it has since acquired many, referring to a range of experiences: Augmented Reality, Mixed Reality, Mediated Reality, Diminished Reality, and Augmented Virtuality, to list more than a few, yet fewer than all. Why do we find these experiences so compelling that we devote our lives to exploring them, despite limitations in the displays, sensors, algorithms, and user interfaces from which they are built? I will try to answer this question, appealing to some of the many ways in which we can mix and modify the physical and virtual worlds: individually and collaboratively, indoors and outdoors, and in specialist domains and everyday life.


Steven Feiner is a Professor of Computer Science at Columbia University, where he directs the Computer Graphics and User Interfaces Lab, and co-directs the Columbia Vision and Graphics Center. His lab has been doing AR research for over 25 years, designing and evaluating novel interaction and visualization techniques, creating the first outdoor mobile AR system using a see-through head-worn display, and pioneering experimental applications of AR to fields such as tourism, journalism, maintenance, and construction. Prof. Feiner received an A.B. in Music and a Ph.D. in Computer Science, both from Brown University. He has served as General Chair or Program Chair for over a dozen ACM and IEEE conferences, is coauthor of Computer Graphics: Principles and Practice, received the IEEE VGTC 2014 Virtual Reality Career Award, and was elected to the CHI Academy. Together with his students, he has won the ACM UIST Lasting Impact Award and best paper awards at IEEE ISMAR, IEEE 3DUI, ACM UIST, ACM CHI, and ACM VRST.