MIT Enterprise Forum event will showcase Pittsburgh & Carnegie Robotics

On March 29th, the MIT Enterprise Forum is holding an event at the AlphaLab Gear space in Pittsburgh's East Liberty neighborhood. "Pittsburgh Presents Robotics: Why the World is Watching" will be moderated by David Kalson of Cohen & Grigsby, and the panel will include:

  • Steve DiAntonio, President and CEO of Carnegie Robotics
  • John Bares, Founder of Carnegie Robotics and Director of the Uber Advanced Technology Center
  • Chris Moehle, Managing Director of Coal Hill Ventures
  • Jim Rock, CEO of Seegrid

You can register for the event at Eventbrite. For additional information, please see this press release.


R&D program for ship firefighting uses the MultiSense SL

The MultiSense SL is being used on a firefighting robot named SAFFiR, short for Shipboard Autonomous Firefighting Robot. The Navy R&D grant funds motion planning software to be developed at Worcester Polytechnic Institute, using an existing robot from Virginia Tech.

The video below shows the bipedal robot in some initial tests and some of the live 3D data from the MultiSense SL sensor integrated into the robot's head.


Sponsored by the Office of Naval Research (ONR), SAFFiR is a two-legged, or bipedal, humanoid robot designed to help researchers evaluate unmanned systems to support Sailors with damage control aboard naval vessels.

Product Support


The Carnegie Robotics website has had a product support section for many months, but we haven't highlighted it until now. It has a wealth of information, including:

  • General product documentation, specifications, & interface drawings.
  • ROS driver documentation.
  • Release notes for Firmware, LibMultiSense, ROS driver, and Cloud Demos software package.
  • Example code on reprojection, calibration, and ROS transform tree usage (a small reprojection sketch follows this list).
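
To give a flavor of the reprojection material, here is a small sketch of our own (not code copied from the support pages) that turns a disparity image into 3D points with OpenCV's reprojectImageTo3D. The intrinsics and baseline below are placeholders, not the calibration of any particular sensor; in practice the Q matrix is built from the calibration the camera reports.

    // Minimal sketch: reproject a disparity image to 3D points with OpenCV.
    // The Q matrix values below are placeholders; in practice Q is built from
    // the calibration the sensor reports (fx, cx, cy, baseline).
    #include <opencv2/core.hpp>
    #include <opencv2/calib3d.hpp>
    #include <opencv2/imgcodecs.hpp>
    #include <cstdio>

    int main()
    {
        // Load a 16-bit fixed-point disparity image (1/16-pixel units are common).
        cv::Mat disparity16 = cv::imread("disparity.png", cv::IMREAD_UNCHANGED);
        if (disparity16.empty()) { std::fprintf(stderr, "no disparity image\n"); return 1; }

        cv::Mat disparity;
        disparity16.convertTo(disparity, CV_32F, 1.0 / 16.0);

        // Placeholder calibration values -- substitute the sensor's actual intrinsics.
        const double fx = 590.0, cx = 512.0, cy = 272.0, baseline = 0.07;
        cv::Mat Q = (cv::Mat_<double>(4, 4) <<
            1, 0, 0, -cx,
            0, 1, 0, -cy,
            0, 0, 0,  fx,
            0, 0, 1.0 / baseline, 0);

        cv::Mat points3d;  // CV_32FC3 image of (X, Y, Z) per pixel
        cv::reprojectImageTo3D(disparity, points3d, Q, /*handleMissingValues=*/true);

        std::printf("reprojected %d x %d pixels to 3D\n", points3d.cols, points3d.rows);
        return 0;
    }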

I invite you to check it out at http://carnegierobotics.com/support/. We'd love to hear your ideas for additional information, code, data, etc. to add to the support section.

Thanks!

DRC Finals

Carnegie Robotics was on hand at the DARPA Robotics Challenge (DRC) Finals on June 5th and 6th, 2015. We had a booth in the DRC Expo area and spent two days talking to the thousands of visitors and roboticists who attended and watched the exciting final runs.

We were excited to see the six Track B teams compete in the event. All Track B teams used the Atlas robot from Boston Dynamics, which features a Carnegie Robotics MultiSense SL sensor as its main perception system -- or as one six-year-old in our booth excitedly noted, "Look Mom... it's Atlas's head!" The DRC Track B teams that competed:

Additionally, five other DRC teams built their own robots and deployed sensors from Carnegie Robotics, namely:

Steve DiAntonio, Matt Alvarado, and Chris Osterwood had a great time at the event.  It was quite something to see such a large crowd of people cheering when points were awarded to teams and groaning when robots fell.  

Visualization of DRC Finals from MIT Team

To give you a sense of how operators controlled these robots from remote stations, I recommend watching this visualization from MIT's DRC team, which shows MultiSense SL data from their first Finals run. The video shows the SL's LIDAR data, sometimes colorized using the SL's on-board camera, stereo data from the SL, and overlays showing how MIT's software interprets the live data streams and lets operators make decisions and navigate this complex (for a robot) course.
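
For readers curious about that colorization step, here is a rough pinhole-projection sketch: a 3D laser point is transformed into the camera frame and projected into the rectified color image to look up its color. The intrinsics and the camera-from-laser transform are placeholder assumptions; this is not MIT's code or the SL's actual calibration.

    // Hypothetical sketch: color a 3D laser point by projecting it into a
    // rectified camera image with a pinhole model. All numbers are placeholders.
    #include <opencv2/core.hpp>
    #include <opencv2/imgcodecs.hpp>
    #include <cstdio>

    int main()
    {
        cv::Mat image = cv::imread("left_rect_color.png");  // rectified color image
        if (image.empty()) { std::fprintf(stderr, "no image\n"); return 1; }

        // Placeholder intrinsics of the rectified camera.
        const double fx = 590.0, fy = 590.0, cx = 512.0, cy = 272.0;

        // Placeholder camera-from-laser transform (rotation R, translation t).
        cv::Matx33d R = cv::Matx33d::eye();
        cv::Vec3d   t(0.0, -0.05, -0.10);

        // One laser return, expressed in the laser frame (meters).
        cv::Vec3d pLaser(1.5, 0.2, 0.4);

        // Transform into the camera frame and project.
        cv::Vec3d pCam = R * pLaser + t;
        if (pCam[2] > 0.0) {
            int u = static_cast<int>(fx * pCam[0] / pCam[2] + cx);
            int v = static_cast<int>(fy * pCam[1] / pCam[2] + cy);
            if (u >= 0 && u < image.cols && v >= 0 && v < image.rows) {
                cv::Vec3b bgr = image.at<cv::Vec3b>(v, u);
                std::printf("point color (B,G,R) = (%d, %d, %d)\n", bgr[0], bgr[1], bgr[2]);
            }
        }
        return 0;
    }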

This video shows MIT's first DRC Finals run on Friday, June 5th, scoring 7 of 8 points. After some faults on their second run, they finished sixth. More information about their runs is on their website at http://drc.mit.edu

MIT releases LCM driver for MultiSense SL

The MIT DRC team has graciously published their Lightweight Communications and Marshalling (LCM) driver for the MultiSense SL, created as part of their work for the DARPA Robotics Challenge (DRC).
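
If you haven't used LCM before, the minimal, untyped C++ subscriber below shows the general shape of the API. The channel name MULTISENSE_IMAGES is hypothetical; MIT's driver defines its own typed messages, which you would decode instead of just printing the payload size.

    // Minimal LCM subscriber sketch (C++). The channel name below is hypothetical;
    // a real driver defines typed LCM messages that would be decoded here.
    #include <lcm/lcm-cpp.hpp>
    #include <cstdio>
    #include <string>

    class RawHandler {
    public:
        // Untyped callback: receives the raw buffer for any message on the channel.
        void onMessage(const lcm::ReceiveBuffer* rbuf, const std::string& channel)
        {
            std::printf("got %u bytes on channel %s\n",
                        static_cast<unsigned>(rbuf->data_size), channel.c_str());
        }
    };

    int main()
    {
        lcm::LCM lcm;                        // default UDP-multicast provider
        if (!lcm.good()) return 1;

        RawHandler handler;
        lcm.subscribe("MULTISENSE_IMAGES",   // hypothetical channel name
                      &RawHandler::onMessage, &handler);

        while (lcm.handle() == 0) { }        // block and dispatch messages
        return 0;
    }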

They have also released a new video showing their ATLAS robot performing continuous path and step planning from just the stereo data of its MultiSense SL.

From the video description: this behavior is enabled by online footstep planning and stereo fusion, and all execution shown is autonomous. More details: http://drc.mit.edu. Reference: Robin Deits and Russ Tedrake, "Footstep Planning on Uneven Terrain with Mixed-Integer Convex Optimization," in Proceedings of the 2014 IEEE-RAS International Conference on Humanoid Robots (Humanoids 2014), Madrid, Spain, 2014.

Carnegie Robotics & AutonomouStuff at RoboBusiness

Carnegie Robotics had senior engineering staff on site at RoboBusiness 2014 this past week in support of our distributor, AutonomouStuff. The booth had a live demo of the MultiSense S7 running and our new S21 demo video on display, and we spent the days answering a multitude of questions about our stereo sensors and technology.

We were also honored to learn that the MultiSense S21 had been nominated for a Robotics Business Review 2014 Game Changer Award. From their article:

The MultiSense S21 addresses one of the key long-term problems in robotics and automation: reliable 3D sensing of the world indoors and out, in a wide range of conditions, at a reasonable cost. This long-range, high-resolution, color 3D sensor has the potential to enable low-cost robotic, automation, and safety features for a wide range of applications including mining, construction, healthcare, and warehouse and factory automation.

For more details please see their article about the MultiSense S21:

Our new MultiSense S21 demo video shows the sensor operating in a wide variety of outdoor scenes, along with several visualizations based on our Cloud Demo Software package, which is open source and slated for release in a few weeks.

MIT's DRC Team Posts Terrain Traversal Videos

MIT's DRC Team has recently posted some impressive videos of their ATLAS robot walking over obstacles using 3D data from our MultiSense SL.

This first video, from April 2014, shows world model building and path planning from just the SL's continuously rotating planar laser.  

ATLAS walking over terrain, enabled by driftless motion estimation fusing MultiSense SL LIDAR, kinematics, and inertial information. Note: motion capture cameras were NOT used here. A set of footsteps is formulated on the terrain course by the user and then executed continuously without human input. The alignment of the steps is adjusted while walking using the state estimate, and each adjusted step is fed to the controller.
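
One simplified way to picture that step adjustment is sketched below: a planned footstep expressed relative to the robot is re-anchored in the world each cycle using the latest pose estimate, so estimation updates flow into the executed step. This is a 2D illustration for intuition only, not MIT's planner or controller interface.

    // Rough illustration of re-anchoring a planned footstep with a live state
    // estimate. This is a simplified 2D sketch, not MIT's controller interface.
    #include <cmath>
    #include <cstdio>

    struct Pose2d { double x, y, yaw; };

    // Express a step given in the robot frame in the world frame,
    // using the current state estimate of the robot's pose.
    Pose2d anchorStep(const Pose2d& robotInWorld, const Pose2d& stepInRobot)
    {
        const double c = std::cos(robotInWorld.yaw), s = std::sin(robotInWorld.yaw);
        return { robotInWorld.x + c * stepInRobot.x - s * stepInRobot.y,
                 robotInWorld.y + s * stepInRobot.x + c * stepInRobot.y,
                 robotInWorld.yaw + stepInRobot.yaw };
    }

    int main()
    {
        Pose2d estimate    = { 1.20,  0.05, 0.10 };  // from LIDAR/kinematic/inertial fusion
        Pose2d plannedStep = { 0.35, -0.11, 0.00 };  // next step, relative to the robot

        // Each cycle the step is re-expressed in the world with the newest
        // estimate, so drift does not accumulate into the executed footstep.
        Pose2d worldStep = anchorStep(estimate, plannedStep);
        std::printf("adjusted step: x=%.2f y=%.2f yaw=%.2f\n",
                    worldStep.x, worldStep.y, worldStep.yaw);
        return 0;
    }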

A second video, posted in September 2014, shows the same 3D world model building process, but using only stereo data from the MultiSense SL. The high-rate 3D stereo data lets ATLAS navigate the terrain obstacle more quickly and, in the future, could allow for quick responses to changes in the environment.

This video demonstrates stereo depth fusion of MultiSense stereo data using Kintinuous (originally used to build large maps with Kinect data) to a quality that matches LIDAR data. The heightmap shown was used to place the required footsteps a priori, while the robot was stationary.
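
For readers unfamiliar with heightmaps, the sketch below shows the basic data structure: fused 3D points are binned into a 2D grid, keeping the highest point per cell. The resolution and extents are arbitrary placeholders, not values from Kintinuous or MIT's pipeline.

    // Hedged sketch of a 2.5D heightmap built from a fused point cloud:
    // bin points into a grid and keep the maximum z per cell.
    #include <limits>
    #include <vector>
    #include <cstdio>

    struct Point { double x, y, z; };

    class Heightmap {
    public:
        Heightmap(double originX, double originY, double resolution, int cols, int rows)
            : originX_(originX), originY_(originY), res_(resolution), cols_(cols), rows_(rows),
              cells_(static_cast<size_t>(cols) * rows,
                     -std::numeric_limits<double>::infinity()) {}

        void insert(const Point& p)
        {
            const int c = static_cast<int>((p.x - originX_) / res_);
            const int r = static_cast<int>((p.y - originY_) / res_);
            if (c < 0 || c >= cols_ || r < 0 || r >= rows_) return;
            double& cell = cells_[static_cast<size_t>(r) * cols_ + c];
            if (p.z > cell) cell = p.z;   // keep the highest surface point per cell
        }

        double heightAt(int col, int row) const
        {
            return cells_[static_cast<size_t>(row) * cols_ + col];
        }

    private:
        double originX_, originY_, res_;
        int cols_, rows_;
        std::vector<double> cells_;
    };

    int main()
    {
        Heightmap map(-2.0, -2.0, /*resolution=*/0.05, /*cols=*/80, /*rows=*/80);
        map.insert({0.3, 0.1, 0.15});   // e.g. the top of a cinder block
        map.insert({0.3, 0.1, 0.02});   // a lower return in the same cell is ignored
        std::printf("height at sample cell: %.2f m\n", map.heightAt(46, 42));
        return 0;
    }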