
The 13th International Workshop on the Algorithmic Foundations of Robotics
Universidad Politécnica de Yucatán
Mérida, México, December 9-11, 2018

Program

Invited Talks


Steve LaValle

University of Oulu

Finland

The Path to (Human) Perception Engineering

Abstract: Enabled by a confluence of recent technologies, we are on the verge of dramatically reshaping the way people experience life by directly stimulating or disrupting their sensory systems. This leads to the engineering of targeted perceptual experiences. For example, a robotic telepresence system that leverages current display, tracking, and imaging technologies would enable people with limited mobility to explore the world and interact with more people. This talk will briefly overview points of curiosity along my career trajectory, from motion planning, to sensing and filtering, to virtual reality. This will take us to my current vision, which is that a new field is emerging in which human perceptual experiences are engineered through wearable displays, sensing systems, and robots. Call it perception engineering. As it stands now, contributors to this subject are widely distributed across engineering fields such as computer graphics, VR/AR, HCI/HRI, mobile robotics, computer vision, machine learning, MEMS, optical engineering, nanophotonics, and acoustics. Even more importantly, the most critical knowledge comes from perceptual psychology and neuroscience, which need to be adapted from the science of perception to the engineering of perception. This leads to unique challenges, including finding the correct engineering criteria for effectiveness and comfort, understanding the adaptation of sensory and perceptual systems, and designing new interfaces that exploit learnable motor programs.

Bio: Steven M. LaValle is Professor of Computer Science and Engineering, in particular Robotics and Virtual Reality, at the University of Oulu, Finland. From 2001 to 2018, he was a professor in the Department of Computer Science at the University of Illinois. He has also held positions at Stanford University and Iowa State University. His research interests include robotics, virtual reality, sensor fusion, planning algorithms, computational geometry, and control theory. In research, he is best known for his introduction of the Rapidly-exploring Random Tree (RRT) algorithm, which is widely used in robotics and other engineering fields. He also authored the books Planning Algorithms, Sensing and Filtering, and Virtual Reality.

With regard to industry, he was an early founder and chief scientist of Oculus VR, acquired by Facebook for $3 billion in 2014, where he developed patented tracking technology for consumer virtual reality and led a team of perceptual psychologists to provide principled approaches to virtual reality system calibration, health and safety, and the design of comfortable user experiences. From 2016 to 2017, he was a Vice President and Chief Scientist of VR/AR/MR at Huawei Technologies, where he was a leader in mobile product development on a global scale.
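
For readers new to the RRT mentioned in the bio above, here is a minimal sketch of the core idea in a 2D unit-square workspace. The step size, iteration budget, goal tolerance, and the omitted collision check are illustrative simplifications, not LaValle's original formulation.

```python
# Minimal RRT sketch: grow a tree from the start by repeatedly steering
# toward random samples; all parameters here are illustrative.
import math
import random

def rrt(start, goal, step=0.05, iters=2000, goal_tol=0.05):
    nodes = [start]
    parent = {0: None}
    for _ in range(iters):
        sample = (random.random(), random.random())         # random sample in [0,1]^2
        i = min(range(len(nodes)),                           # nearest tree node
                key=lambda k: math.dist(nodes[k], sample))
        nx, ny = nodes[i]
        d = math.dist((nx, ny), sample)
        if d == 0:
            continue
        t = min(step / d, 1.0)                               # steer a bounded step
        new = (nx + t * (sample[0] - nx), ny + t * (sample[1] - ny))
        # A real planner would reject `new` if the edge collides with an obstacle.
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) < goal_tol:                  # goal reached: extract path
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None                                              # no path found in budget

print(rrt((0.1, 0.1), (0.9, 0.9)))
```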

Enrique Sucar

Instituto Nacional de Astrofísica, Óptica y Electrónica

México

Planning under Uncertainty in Robotics: From MDPs to Causal Models

Abstract: Uncertainty is a prevalent issue in robotics: sensors are noisy and have limited range, actuators are imprecise, hardware and software can fail, and unexpected situations often arise. Traditional approaches are based on probabilistic methods, such as MDPs, POMDPs and RRTs. I will describe some of our previous work using these techniques. First, I will illustrate the use of concurrent Markov decision processes for task coordination in service robots. Then I will consider uncertainty in view planning for 3-D object reconstruction, and present alternative solutions based on rapidly exploring random trees.
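
As background for the MDP-based techniques mentioned above, here is a minimal value-iteration sketch on a toy two-state robot MDP. The states, actions, transition probabilities, and rewards are invented for illustration and are unrelated to the concurrent MDPs of the talk.

```python
# Toy MDP: P[s][a] is a list of (probability, next_state, reward) triples.
# All states, actions, and numbers below are hypothetical.
P = {
    "idle":    {"navigate": [(0.9, "at_goal", 1.0), (0.1, "idle", 0.0)],
                "wait":     [(1.0, "idle", 0.0)]},
    "at_goal": {"wait":     [(1.0, "at_goal", 0.0)]},
}

def value_iteration(P, gamma=0.95, tol=1e-6):
    V = {s: 0.0 for s in P}                         # initial value estimates
    while True:
        delta = 0.0
        for s in P:                                 # Bellman backup per state
            best = max(sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
                       for a in P[s])
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:                             # stop when values stabilize
            return V

print(value_iteration(P))
```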

As robots extend their capabilities and interact more with humans, for example in service robotics and autonomous driving, it is critical that robots become aware of their limitations and can explain their actions. Traditional techniques are not able to accomplish this, so we require more expressive representations and reasoning capabilities, e.g., causal models. I will give a brief introduction to causal models and explore some initial ideas for their application in robotics.
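
To make the contrast with purely probabilistic reasoning concrete, the following toy example (every variable and number is hypothetical) computes an observational quantity, P(success | grasp), and an interventional one, P(success | do(grasp)), in a three-variable model where lighting confounds the robot's decision to grasp.

```python
# Hand-built causal model: lighting -> grasp, lighting -> success, grasp -> success.
P_light = {1: 0.7, 0: 0.3}                 # P(lighting = good)
P_grasp = {1: 0.9, 0: 0.2}                 # P(grasp = 1 | lighting)
P_succ  = {(1, 1): 0.8, (1, 0): 0.5,       # P(success = 1 | grasp, lighting)
           (0, 1): 0.3, (0, 0): 0.1}

# Observational: conditioning on grasp = 1 reweights lighting via Bayes' rule.
num = sum(P_light[l] * P_grasp[l] * P_succ[(1, l)] for l in (0, 1))
den = sum(P_light[l] * P_grasp[l] for l in (0, 1))
p_obs = num / den

# Interventional: do(grasp = 1) severs the lighting -> grasp edge,
# so lighting keeps its prior distribution.
p_do = sum(P_light[l] * P_succ[(1, l)] for l in (0, 1))

print(f"P(success | grasp=1)     = {p_obs:.3f}")   # ~0.757: inflated by confounding
print(f"P(success | do(grasp=1)) = {p_do:.3f}")    # 0.650: the causal effect
```

The gap between the two numbers is what a purely observational model cannot expose: the robot grasps more often in good lighting, so conditioning on grasping overestimates how much the grasp itself contributes to success.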

Bio: Luis Enrique Sucar has a PhD in Computing from Imperial College London (1992), an M.Sc. in Electrical Engineering from Stanford University, USA (1982), and a B.Sc. in Electronics and Communications Engineering from ITESM, Mexico (1980).

He has been a researcher at the Electrical Research Institute, a professor at ITESM-Cuernavaca, and is currently Senior Research Scientist at the National Institute for Astrophysics, Optics and Electronics, Puebla, Mexico. He has been an invited professor at the University of British Columbia, Canada; Imperial College, London; INRIA, France; and CREATE-NET, Italy. He has more than 300 publications, has directed more than 70 PhD and Master's theses, and holds two patents.

Dr. Sucar is a member of the National Research System and the Mexican Academy of Sciences, and a Senior Member of the IEEE. He is an associate editor of the journals Pattern Recognition and Computational Intelligence. He has served as president of the Mexican AI Society and as a member of the Advisory Board of IJCAI, and is currently president of the Mexican Academy of Computing. In 2016 he received the National Science Prize from the President of Mexico.

Dr. Sucar’s main research interests are in graphical models and probabilistic reasoning, and their applications in computer vision, robotics, energy, and biomedicine. He recently started a company related to virtual rehabilitation.

Vikas Sindhwani

Google Brain, NYC

USA

The Learning Continuum for Robotics: from classical optimal control to blackbox policy search

Abstract: With rapid advances in machine learning and perception, it is no longer far-fetched to imagine a world, in the not-too-distant future, with self-driving cars on roads, autonomous drones in the skies, and perhaps even mobile manipulators in our kitchens. I will begin this talk by sharing a few vignettes of Robotics research at Google at the intersection of machine learning, optimization, and control, emphasizing exciting symbiotic interactions between these disciplines. I will then present three technical topics that may be viewed as spanning a spectrum of policy learning modes in Robotics: an operator-splitting "divide-and-conquer" approach to constrained optimal control using tools developed for non-smooth, distributed machine learning; an approach to learning incrementally stable dynamical systems using control-theoretic stabilization constraints; and a collection of ideas to make vanilla "blackbox" optimization much more data-efficient for policy search in completely "model-free" settings.
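
As a flavor of the "model-free" blackbox setting mentioned at the end of the abstract, here is a minimal random-search sketch over policy parameters: perturb, evaluate, keep what improves the return. The toy return function and all hyperparameters are illustrative assumptions, not the data-efficient methods of the talk.

```python
# Minimal blackbox policy search via greedy random search.
import random

def episode_return(theta):
    """Stand-in for a policy rollout; the true optimum is theta = (0.5, -0.3)."""
    return -((theta[0] - 0.5) ** 2 + (theta[1] + 0.3) ** 2)

def random_search(dim=2, sigma=0.1, iters=500, seed=0):
    random.seed(seed)
    theta = [0.0] * dim                                       # initial policy parameters
    best = episode_return(theta)
    for _ in range(iters):
        cand = [t + random.gauss(0.0, sigma) for t in theta]  # perturb the policy
        r = episode_return(cand)                              # one "rollout" per query
        if r > best:                                          # greedy accept
            theta, best = cand, r
    return theta, best

print(random_search())
```

In practice, methods in this family typically average many such perturbations to estimate a descent direction rather than accepting them greedily, which is where most of the data-efficiency questions arise.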

Bio: Vikas Sindhwani is a Research Scientist on the Google Brain team in New York, where he leads a research group focused on solving a range of perception, learning, and control problems arising in robotics. His interests are broadly in core mathematical foundations of statistical learning, and in end-to-end design aspects of building large-scale, robust machine intelligence systems. He received the best paper award at Uncertainty in Artificial Intelligence (UAI) 2013 and the IBM Pat Goldberg Memorial Award in 2014, and was co-winner of the Knowledge Discovery and Data Mining (KDD) Cup in 2009. He serves on the editorial board of IEEE Transactions on Pattern Analysis and Machine Intelligence, and has been an area chair and senior program committee member for the International Conference on Learning Representations (ICLR) and Knowledge Discovery and Data Mining (KDD). He previously led a team of researchers in the Machine Learning group at IBM Research, NY. He has a PhD in Computer Science from the University of Chicago and a B.Tech in Engineering Physics from the Indian Institute of Technology (IIT) Bombay. His publications are available at: http://vikas.sindhwani.org/.


Leslie Kaelbling

Massachusetts Institute of Technology

USA

Doing for our Robots what Evolution did for Us

Abstract: We, as robot engineers, have to think hard about our role in the design of robots and how it interacts with learning, both in "the factory" (that is, at engineering time) and in "the wild" (that is, when the robot is delivered to a customer). I will share some general thoughts about the strategies for robot design and then talk in detail about some work I have been involved in, both in the design of an overall architecture for an intelligent robot and in strategies for learning to integrate new skills into the repertoire of an already competent robot.

Joint work with: Tomas Lozano-Perez, Zi Wang, Caelan Garrett and a fearless group of summer robot students

Bio: Leslie is a Professor at MIT. She has an undergraduate degree in Philosophy and a PhD in Computer Science from Stanford, and was previously on the faculty at Brown University. She was the founding editor-in-chief of the Journal of Machine Learning Research. Her research agenda is to make intelligent robots using methods including estimation, learning, planning, and reasoning. She is not a robot.


Elon Rimon

Technion, Israel Institute of Technology

Israel

Perspectives on Minimalistic Robot Hand Design and a New Class of Caging-to-Grasping Algorithms

Abstract: Ten years of research on minimalistic robot hands resulted in novel robot hand designs and culminated in a new book, The Mechanics of Robotic Grasping by Rimon and Burdick. The perspectives gained from this intensive activity will be shared with the WAFR participants in a talk that consists of two related parts. Part I describes the configuration space analysis of multi-finger grasps. In so doing, we obtain the minimalistic 2D and 3D robot hand designs in terms of number of fingers. Surprise: the minimalistic 3D design is the classical 3-finger Salisbury hand, with the added security of using the hand's palm when object immobilization is necessary. Part II considers the notion of caging, which offers a robust object grasping methodology under huge uncertainty in the finger positions. A novel contact space approach resulted in a series of highly efficient and intuitive caging-to-grasping algorithms, specifically suited for minimalistic robot hands. Two such algorithms will be described for robot hands grasping 2D objects. The first algorithm computes caging grasps for formationally similar 3-finger hands. The second algorithm computes caging grasps of 2D objects against a wall using two-finger hands, with the same computational complexity as the 3-finger algorithm. Perspectives on grasping 3D objects against the environment will end the keynote lecture.

Bio: Elon Rimon is a Professor in the Department of Mechanical Engineering at the Technion, Israel Institute of Technology. He is also a Visiting Associate Faculty member at the California Institute of Technology. Professor Rimon was a finalist for the best paper award at the IEEE International Conference on Robotics and Automation in 1994 and 1996 and at the Workshop on the Algorithmic Foundations of Robotics in 2014 and 2016, and received the best paper presentation award at the Robotics: Science and Systems Conference in 2013. Prof. Rimon's research in robotics spans autonomous mobile robot navigation, with emphasis on sensor-based on-line algorithms, and robot grasping, with emphasis on secure on-line grasping algorithms.