Lab 6: Light and Blob Following Using the Camera

You may work with a partner.

If you have a partner, put both names in the comments of your files.

In this lab you will define a new behavior for your robot: light following. This will, however, be very different from your Braitenberg lab. You will carry out light-following behavior using the camera on the fluke dongle. Take a look at the camera's functionality now.

Topics:

The main topics you will need in this lab are: taking pictures with the camera, streaming video for debugging, light following, and blob following.

Part I: The Camera

Picture *Scribbler::takePicture(string type); //The declaration of takePicture
Use this function as follows: Picture *p = robot->takePicture("type");

First, write a driver program (driver.cpp) to test the camera functionality above. You can refer to LCRcpp 5 for an overview of sensing with the camera. You will need to include Picture.h for this lab; it provides the "Picture" class (which has functions just like Scribbler). Calling takePicture returns a pointer to a Picture object, just like "new Scribbler()" returns a pointer to a Scribbler object.

It returns a pointer to a Picture object containing the requested image. Here are some more examples of how to call the function and the types returned for those calls.
robot->takePicture("color"); //Returns a color image.
robot->takePicture("gray"); //Returns a gray image.
robot->takePicture("grayjpeg"); //Returns a gray jpeg image.
robot->takePicture("jpeg"); //Returns a color jpeg image.
Now add the following line to your driver program, placed after the point where you connect to the robot:
show(robot->takePicture("gray"));
You should see a slightly pink image appear on your screen. The image is pink because the gray data is being displayed through the "G" channel of RGB (Red, Green, Blue), the normal computer color image format. In the next section you will use a more efficient method for getting visual feedback from the camera, because each call to show() waits for you to close the image window before running any more code!

Part II: Video Stream

To make debugging this program easier, you can use a VideoStream object. It is often useful to see what the robot is seeing, since its "eyes" are different from yours: if the robot behaves unexpectedly, the stream lets you tell whether the cause is bad pictures or errors in your code. You will also have to adjust your code to accommodate the quality of the actual photographs, not what you expect the robot to see.

VideoStream video(robot, 0);
video.startStream();

The arguments of the VideoStream constructor specify the robot from which the video will stream, and the color output of the video (zero indicates gray-scale, which you'll want for this application because it is much faster).

Do This: Write a program that creates a VideoStream object, starts it, then commands the robot to spin or move forward for some amount of time. Make sure to include VideoStream.h at the beginning of your file.

Part III: Light Following Behavior

To be brief, you want the robot to travel in the direction of a light signal. Your program will use the camera on the robot to take a photo, then analyze the photo and tell the robot to travel in the direction of highest light intensity captured in the gray-scale photo.

Now, consider the problem at hand: you want the robot to take a picture, analyze that photo for the region of highest intensity, then travel in that direction. Write two functions for this behavior.

Function One, int findLight(Picture *image):

You'll want to give some preference to the center area, so the robot moves forward easily after turning toward the area with the most light. The left and right areas of the picture should be the leftmost and rightmost 30% of the image respectively, with the center being the remaining 40%.

When working out the loops to find the intensity of light, think about how your bubble sort traversed your vector.

Examine the following pseudo-code:

    for (each column of the picture)
    {
        // Left panel
        for (each row of the leftmost panel)
        {
            // when in a pixel of the left panel,
            // get its intensity and add it to the left panel total
        }
        // Center panel
        for...
        // Right panel
        for...
    }
    // calculate the average intensity of each panel
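As a sanity check on the loop structure, here is a minimal standalone sketch of findLight. A plain 2D array of gray values stands in for Picture* (in your lab code, use getWidth, getHeight, and getPixelValue_grey instead), the columns are classified into panels in a single pass rather than three inner loops, and both the -1/0/+1 return convention and the 1.1 center bias are assumptions you are free to change:

```cpp
#include <vector>
using namespace std;

// Stand-in for Picture*: a grid of gray values (0-255), indexed [row][col].
typedef vector<vector<int> > Gray;

// Returns -1 (left), 0 (center), or +1 (right) for the brightest panel.
// The center average is scaled up slightly so the robot prefers to drive
// forward when the panels are nearly tied; tune the factor on your robot.
int findLight(const Gray &img)
{
    int h = img.size();
    int w = img[0].size();
    int leftEnd    = (int)(w * 0.30);     // leftmost 30% of columns
    int rightStart = w - (int)(w * 0.30); // rightmost 30% of columns

    double left = 0, center = 0, right = 0;
    int nLeft = 0, nCenter = 0, nRight = 0;

    for (int x = 0; x < w; x++)
        for (int y = 0; y < h; y++)
        {
            if (x < leftEnd)         { left   += img[y][x]; nLeft++;   }
            else if (x < rightStart) { center += img[y][x]; nCenter++; }
            else                     { right  += img[y][x]; nRight++;  }
        }

    left /= nLeft; center /= nCenter; right /= nRight;
    center *= 1.1; // mild preference for going straight

    if (center >= left && center >= right) return 0;
    return (left > right) ? -1 : 1;
}
```

The same averages-per-panel shape carries straight over to the real Picture* version; only the pixel accessor changes.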

Other Picture functions you may find helpful:

getWidth(Picture *p) and getHeight(Picture *p)

Get the width and height of a picture, respectively.

Pixel getPixel(Picture *p, int x, int y)

Returns the Pixel structure at width x and height y.

int getPixelValue_grey(Picture *p, int x, int y)

Returns the int value of the pixel at width x and height y.

Function Two, void followLight():

This function should repeatedly take a gray picture, call findLight() on it, and command the robot to move toward the region of highest light intensity.

Do This: Add the code for void followLight() and int findLight(Picture *image) to your .cpp file.
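To see how findLight's result might drive the robot, here is a minimal decision step with the motion commands stubbed out as strings so it can be checked without hardware. The motion function names in the comment (turnLeft, forward, turnRight) are assumptions; use whatever movement calls your Scribbler library provides:

```cpp
#include <string>
using namespace std;

// One decision step of followLight(). -1/0/+1 is the panel index from
// findLight() (left/center/right). In your lab code, replace the returned
// strings with real motion calls and run this inside a loop, e.g.:
//
//   Picture *p = robot->takePicture("gray");
//   // act on lightStep(findLight(p)):
//   //   "turnLeft"  -> robot->turnLeft(...)
//   //   "forward"   -> robot->forward(...)
//   //   "turnRight" -> robot->turnRight(...)
string lightStep(int brightestPanel)
{
    if (brightestPanel < 0) return "turnLeft";
    if (brightestPanel > 0) return "turnRight";
    return "forward"; // center is brightest: drive straight
}
```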

Part IV: Blob Following Behavior

This behavior builds upon the functions from Part III.

A common tracking method for robots uses a process called "blob-following." This method works by training a robot to search for an object. In our case, the robot will identify objects by color. You will train your robot to recognize a certain color using functions provided.

To do this, you will need to implement two functions:

void trainBlob(Scribbler* robot);

In this function, the following steps will need to take place: 
1. Display image with bounding box drawn in red.
2. Ask the user if they want to use that image to train.
3. If yes, train using that image.
4. If no, take another picture and repeat step 1.
Remember to use show(Picture *p) to display a picture to the screen.
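The four steps above form a confirm-or-retake loop. Here is a sketch of that control flow with the robot-dependent pieces injected as callbacks; everything named here (confirmLoop, takeShot, userAccepts) is hypothetical scaffolding, not part of the lab API:

```cpp
#include <functional>

// Skeleton of the trainBlob confirmation loop. In the lab, takeShot would
// take a color picture, draw the red bounding box on a copy, and show()
// it, while userAccepts would read the user's yes/no answer. Returns the
// id of the accepted shot, or -1 if the user rejects all maxTries shots.
int confirmLoop(std::function<int()> takeShot,
                std::function<bool(int)> userAccepts,
                int maxTries)
{
    for (int attempt = 0; attempt < maxTries; attempt++)
    {
        int shot = takeShot();   // step 1: take + display the picture
        if (userAccepts(shot))   // step 2: ask the user
            return shot;         // step 3: caller trains on this shot
        // step 4: otherwise loop around and take another picture
    }
    return -1;
}
```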

To draw on the picture, use the following command:

    setPixelColor(Picture *p, int x, int y, int R, int G, int B);

This gives a pixel located at width x and height y the RGB values specified by the corresponding ints.

Example:

If you wanted to set the pixel at width 20 and height 45 in the Picture *image the code would be

    setPixelColor(image, 20, 45, 255, 0, 0);

Remember RGB values have 3 values corresponding to the amount of red, green, and blue in a given pixel.

Think about how you would use loops to draw a box (Hint: do it in such a way that you know what the corners of your box are, for training). Your robot's camera is not very big, so the picture it takes is small; an area of about 16x16 pixels is plenty for blob following. It is also best to follow an object that is one solid color.
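One way to structure the box-drawing loops, sketched on a plain pixel grid rather than a real Picture* (the Pixel struct and Image typedef here are stand-ins, not lab API):

```cpp
#include <vector>
using namespace std;

struct Pixel { int r, g, b; };
typedef vector<vector<Pixel> > Image; // stand-in for Picture* in this sketch

// Draws a red outline from (x1,y1) to (x2,y2) inclusive. In the lab you
// would call setPixelColor(p, x, y, 255, 0, 0) instead of assigning into
// the grid. Keep x1, y1, x2, y2 around afterwards: they are exactly the
// corner coordinates you will pass to conf_rle_range when training.
void drawBox(Image &img, int x1, int y1, int x2, int y2)
{
    Pixel red = {255, 0, 0};
    for (int x = x1; x <= x2; x++)
    {
        img[y1][x] = red; // top edge
        img[y2][x] = red; // bottom edge
    }
    for (int y = y1; y <= y2; y++)
    {
        img[y][x1] = red; // left edge
        img[y][x2] = red; // right edge
    }
}
```

Note that only the outline is colored: the interior pixels stay untouched, which matters because those are the pixels the training step will sample.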

You will use the following function to train your robot:

robot->conf_rle_range(unsigned char *image, int x1, int y1, int x2, int y2);
Note that this function takes the image as an (unsigned char *), not a (Picture *). To get that, use the function Picture::getRawImage(). So if you have an image called "image", you call the training function as follows:
robot->conf_rle_range(image->getRawImage(), x1, y1, x2, y2);
This trains the robot using the image you pass and the dimensions of your box. Here x1 and y1 are the top-left corner of the box, and x2 and y2 are the bottom-right corner.

Don't pass the robot the picture you colored; make a copy before coloring.

Example:

Picture *image = robot->takePicture("color");
robot->conf_rle_range(image->getRawImage(),
                      (getWidth(image) / 2) - 8, (getHeight(image) / 2) - 8,
                      (getWidth(image) / 2) + 8, (getHeight(image) / 2) + 8);
The example should train the "blob" to be the 16x16 pixel area in the middle of the image.

The first argument is the picture you will train the robot with, and the last four arguments describe the bounding rectangle containing the color you want to track. This function uses the pixels in the bounded rectangle to calculate a representative RGB value. After it is called, robot->takePicture("blob") is configured to locate "blobs" of the trained RGB value.
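For intuition about the "representative RGB value" mentioned above, one plausible reduction is the per-channel mean over the box, computed here on a plain pixel grid. This is an illustration of the idea only, not the actual conf_rle_range implementation:

```cpp
#include <vector>
using namespace std;

struct Pixel { int r, g, b; };

// Per-channel mean of the pixels inside the inclusive box (x1,y1)-(x2,y2).
// A sketch of what "reduce the box to one representative color" can mean;
// the real trainer's internals are not specified in this lab.
Pixel averageColor(const vector<vector<Pixel> > &img,
                   int x1, int y1, int x2, int y2)
{
    long r = 0, g = 0, b = 0, n = 0;
    for (int y = y1; y <= y2; y++)
        for (int x = x1; x <= x2; x++)
        {
            r += img[y][x].r;
            g += img[y][x].g;
            b += img[y][x].b;
            n++;
        }
    Pixel avg = { int(r / n), int(g / n), int(b / n) };
    return avg;
}
```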


void followBlob();

This function should be very similar to the followLight() function written in Part III. Because the blob picture is also black and white, we can re-use the findLight() function written in Part III (Hooray!). So the only difference between this function and followLight() is that it needs to take a "blob" picture instead of a "gray" picture.

The takePicture("blob") function returns a black and white picture in which the pixels whose values fall within a range of the trained RGB value are white and all other pixels are black. Thus, the "blob" is white, and everything else is black. This is convenient for tracking because you can simply apply the findLight function you wrote in Part III to analyze the blob pictures for the "brightest" region, then have the robot react accordingly in followBlob.

Part V: Menu

Once you've done all the previous parts, make sure that you've created a menu that allows the user to select what they want to run, and for how long it should run. Make sure to refer to the example program for how the menu should look. NOTE: Your menu should be identical to the example program's.