For this how-to guide, I decided to look into NodeJS. I have played around with it before, but that was prior to starting a CS degree, and it went a bit over my head. I thought I would take another look and see if I could figure it out after getting a few CS classes under my belt. I found many to-do list and chat server examples, neither of which seemed too difficult to create, but I wanted a more substantial project with potential for future use. I found a tutorial that uses Node with ExpressJS and MongoDB to create a web page that can interact with an employee database.
Here's a short video of my turtlebot responding to voice commands using the Pocketsphinx ROS package.
Here's a short tutorial on how to do colored blob tracking using ROS Hydro and the cmvision package. This builds on the package that I built in the line follower demo. Here's a video of the color tracking.
The first step is to get the cmvision library. That will depend on the version of ROS that you are using, but for Hydro it's:
sudo apt-get install ros-hydro-cmvision
Once you have that installed you need to figure out what color you want to track. In my case I just used a brightly colored balloon. You get better results using something that will stand out from the rest of the objects in a room. To find the color to track run:
rosrun cmvision colorgui image:=/camera/rgb/image_color
You may need to replace the image ...
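Once colorgui gives you the YUV threshold ranges for your target color, cmvision reads them from a color file at runtime. A sketch of what that file looks like (the RGB display color, the merge and size fields, and the YUV ranges below are placeholder values for a pink balloon, not measured ones):

```
[Colors]
(255, 0, 255) 0.000000 10 PinkBalloon

[Thresholds]
( 60:150, 120:160, 160:210 )
```

Each line under [Colors] pairs with the threshold line at the same position under [Thresholds], so you can track several colors from one file.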
from scratch. So now the turtlebot is running a version of ROS Fuerte, and I have the newest version of ROS, Hydro, in a VirtualBox VM as my workstation. I put together a simple line following program to test everything out. You can follow along to create your own or just grab the completed project from my GitHub repo.
One new thing I ran into is that creating packages with Hydro is a little different from before. You still use the same arguments, but this time you create a new package using catkin_create_pkg. The package we will make is going to use std_msgs, rospy, and the turtlebot_node. To do this, move into your working directory and run the following:
catkin_create_pkg line_follower std_msgs rospy turtlebot_node
Use roscreate-pkg if you are still on an older version of ROS. You should have a line_follower directory in your workspace now. Move to that directory and create a nodes folder. In the nodes folder we will create the actual ...
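The node will subscribe to the camera image, find the line's centroid, and publish velocity commands to steer toward it. Setting the ROS plumbing aside, the steering logic can be sketched as a pure function; the gain and clamp values here are illustrative placeholders, not the ones from the repo:

```python
def steer_from_centroid(cx, image_width, gain=0.005, max_ang=1.0):
    """Map the line centroid's horizontal pixel position to an angular velocity.

    cx: centroid x-coordinate of the detected line, in pixels.
    image_width: width of the camera image, in pixels.
    Returns an angular velocity in rad/s, positive turning left,
    clamped to [-max_ang, max_ang].
    """
    # Positive error means the line is left of center, so turn left.
    error = (image_width / 2.0) - cx
    ang = gain * error
    return max(-max_ang, min(max_ang, ang))
```

In the actual node this value would go into the angular.z field of a geometry_msgs/Twist message published on the turtlebot's velocity topic, with a small constant forward linear.x.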
I originally tried to get lane detection working for my autonomous car project a little over a year ago. I ended up getting some rough code working but it was really only useful on ideal roads with perfectly painted lines. At the time I didn't know a whole lot about computer vision so I ended up ditching the computer vision part of the project to focus on other areas.
So now I'm back with a new project that requires computer vision. The goal of this new project (which will be another post later on) is to track objects at relatively long distances of 60 meters or more. The list of off-the-shelf sensors that can do this is pretty slim, with the best being LIDAR sensors that will run you ~$60K. I plan instead to use a CV algorithm to track objects and a laser range finder on a two-axis servo mount to locate them. The crux of this project is, again, the CV algorithm. To start off I'm getting my feet wet by revisiting the lane ...
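A typical first step in lane detection is to run an edge detector and a probabilistic Hough transform, then sort the resulting segments into left- and right-lane candidates by slope. A minimal sketch of that sorting step, assuming segments in the (x1, y1, x2, y2) form that OpenCV's HoughLinesP returns (the function name and threshold are mine, not from any particular library):

```python
def split_lane_segments(segments, min_slope=0.3):
    """Classify line segments into left/right lane candidates by slope.

    segments: iterable of (x1, y1, x2, y2) tuples in image coordinates,
    where y grows downward. In that convention a left lane line has a
    negative slope and a right lane line a positive one. Near-horizontal
    segments (|slope| < min_slope) are discarded as noise.
    """
    left, right = [], []
    for x1, y1, x2, y2 in segments:
        if x2 == x1:
            continue  # vertical segment; skip to avoid division by zero
        slope = (y2 - y1) / float(x2 - x1)
        if abs(slope) < min_slope:
            continue  # too horizontal to be a lane boundary
        (left if slope < 0 else right).append((x1, y1, x2, y2))
    return left, right
```

Averaging each group into a single line then gives one boundary per side, which already holds up better than fitting every raw segment.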
The bigger and slightly less hacked-together brother to my Autonomous Roomba is the Turtlebot. If you haven't heard of the Turtlebot, let me just say that it is probably the coolest platform in hobby and research robotics.
The downside is that the ready-to-go kits are pretty expensive at around $1000, which puts them out of range for most hobbyists. If you have been here before, then you know what happened next. Here's the BOM for my homebuilt turtlebot. You end up spending a bit more on the mini-ITX board, but this board has the advantage of coming with a built-in step-up power regulator. That lets me get away with running it off of an 11.1V LiPo. Most of the boards I found needed 19V to run, which means you would need to spend more ...
The information for my autonomous car project is starting to get a bit spread out between all the update posts, so I thought a nice summary was in order.
In part 1, I laid out my plans for the navigation algorithm and spent some time working on the computer vision part of the project. I was looking at sending an onboard camera image from my Android phone to a laptop to do some lane detection. The large variation between different roads posed a problem, though I did get the algorithm to work pretty well on a perfect-looking road. There were also some issues with lag between sending the image, processing it, and sending it back.