We are developing advanced multirotor models using deep recurrent neural networks, and controlling multirotors with nonlinear path-following controllers, in an effort to extract more performance and motion precision out of today's vehicles. We have also demonstrated coordinated, autonomous docking on ground vehicles.
From motion planning for emergency evasion to vehicle control at the limits of friction, and from degraded lane-marking detection to omni-directional place recognition, we are working on some of the most challenging aspects of autonomous driving. See our latest work on the Research Projects page.
MCPTAM is a set of ROS nodes for running real-time 3D visual Simultaneous Localization and Mapping (SLAM) using multi-camera clusters. It includes tools for calibrating both the intrinsic and extrinsic parameters of the individual cameras within the rigid camera rig. Visit the MCPTAM website: https://github.com/aharmat/mcptam.