I came across an unused iRobot Roomba at FUBAR Labs and thought this would be the perfect opportunity to build a robot using the Robot Operating System (ROS). ROS is essentially software that integrates all of a robot's sensors (encoders, depth camera, laser scanner, etc.) with the code used to control it. Each sensor runs as a node. For example, the Kinect sensor node publishes its depth data. The SLAM (simultaneous localization and mapping) node uses the Kinect data to determine the robot's location and publishes it to other nodes. Each node is separate from the others, so it's easy to change or add new ones. ROS also has a ton of built-in libraries that work with different servos, laser scanners, and other sensors.
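The publish/subscribe idea behind ROS nodes is simple enough to sketch without ROS itself. The toy broker below is plain Python, not the actual rospy API, and the topic name is made up; it just shows the pattern: a sensor node publishes to a named topic, and every node subscribed to that topic gets the message.

```python
# Toy publish/subscribe broker illustrating the ROS node model.
# This is NOT rospy -- just the pattern: nodes publish messages to
# named topics, and every subscriber callback on a topic is invoked.

class Broker:
    def __init__(self):
        self.subscribers = {}  # topic name -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        for callback in self.subscribers.get(topic, []):
            callback(message)

bus = Broker()

# A "SLAM node" subscribes to a depth topic (name is illustrative)...
received = []
bus.subscribe("/camera/depth", received.append)

# ...and the "Kinect node" publishes depth frames to it.
bus.publish("/camera/depth", {"frame": 1, "ranges": [2.1, 2.0, 1.9]})

print(received[0]["frame"])  # → 1
```

Because neither side knows about the other directly, you can swap the Kinect node for a laser scanner node publishing on the same topic without touching the SLAM node at all.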
The Roomba is essentially the same thing as the iRobot Create, which is the base used for the
Turtlebot designed by Willow Garage (another ROS robot). In the picture below, I've got the Roomba on the bottom, the laptop that will be running ROS, and a Kinect sensor on top. Not pictured is the workstation, which is just another computer running ROS, used to render the localization data.
After a bit of searching, I found that someone had already created a ROS stack for the Roomba. Unfortunately, I ran into a lot of trouble getting it to work and later found out that the Roomba stack is a bit buggy. The upside of the similarity between the Turtlebot and the Roomba is that you can use the Turtlebot stack with just a few small changes. From this post on ROS Answers, I learned that only a few changes to the Turtlebot launch file were needed. The code below is what I've got working on my Roomba right now.
Teleoperation and Mapping
Here are some examples of the Roomba working with the teleoperation and SLAM packages at FUBAR Labs. In the first video, I'm driving the Roomba around with the keyboard on the workstation using the teleop package. The Roomba is running the SLAM package, building a map, and sending its data back to the workstation for visualization. The white circle on the map is the Roomba, and the white lines moving around in front of the robot are where it thinks it sees an object based on the Kinect data.
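The core of keyboard teleoperation is just a mapping from keypresses to velocity commands. The sketch below is plain Python with made-up key bindings, not the teleop package's actual ones; in ROS, each (linear, angular) pair would be packed into a geometry_msgs/Twist message and published on a velocity topic for the base to follow.

```python
# Minimal keyboard-teleop logic: map keypresses to velocity commands.
# In ROS, the resulting (linear, angular) pair would be published as a
# Twist message; here we just compute the values. The key bindings
# below are illustrative, not the teleop package's own.

KEY_BINDINGS = {
    "i": (0.2, 0.0),   # forward: (linear m/s, angular rad/s)
    ",": (-0.2, 0.0),  # backward
    "j": (0.0, 0.5),   # rotate left
    "l": (0.0, -0.5),  # rotate right
    "k": (0.0, 0.0),   # stop
}

def key_to_velocity(key):
    """Return (linear, angular) for a key; unknown keys mean stop."""
    return KEY_BINDINGS.get(key, (0.0, 0.0))

print(key_to_velocity("i"))  # → (0.2, 0.0)
print(key_to_velocity("x"))  # → (0.0, 0.0)
```

Treating unknown keys as "stop" is a deliberate safety choice: if the operator's hand slips, the robot halts rather than keeping its last velocity.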
There was a bit of trouble with the communication between the robot's laptop and the workstation that caused the robot's localization to jump around. I think this was the result of my workstation trying to run ROS, display the visualization data, and record a screen capture all at once. This slowed the computer down almost to the point of being unusable. I did manage to get some video, though.
In the video below, I thought it would be nice to show the depth image from the Kinect along with the map. This caused the workstation to lag quite a bit, as you can see in the video. Still, it shows some nice images from the Kinect.
I later made a better map of my apartment. No video for that one, but here is an image of the map. It's still a little messy, but I was able to use it to have the Roomba autonomously navigate around my apartment. This used the Adaptive Monte Carlo Localization (AMCL) package, which uses a particle filter to localize the robot.