We are developing advanced multirotor models using deep recurrent neural networks and controlling multirotors with nonlinear path-following controllers, in an effort to extract more performance and motion precision from today's vehicles. We have also demonstrated coordinated, autonomous docking on ground vehicles. See many of our demonstrations on our YouTube channel.
From motion planning for emergency evasion, to vehicle control at the limits of friction, to degraded lane-marking detection and omnidirectional place recognition, we are working on some of the most challenging aspects of autonomous driving. See our latest work on the Research Projects page.
We work extensively on real-time 3D Visual Simultaneous Localization and Mapping (SLAM) using multi-camera clusters. We have released open-source tools for calibrating both the intrinsic and extrinsic parameters of wide-field-of-view and gimballed cameras, as well as a complete localization and mapping solution. Code is available on GitHub.