Robotics 2
Final Challenge: Visual Tracker
You have now made it through the entire Robotics 2 course, and you are about to put everything you've learned to the test by building a complete, working robotic system. You've already learned everything you need to know to do this; you just need to put it all together. In this class, you get to pick one of three options for your Final Challenge. This option is called the 'Visual Tracker', and it will mostly test your knowledge of machine vision and motion control. Start by attaching your camera to the shaft of your stepper motor, with the stepper motor attached to the board. Here is what the finished Visual Tracker should do:
1. Select one piece from the color/shape pieces set in your kit. This piece is the object that the camera will track. Choose a target tracking distance between the camera and the object; any distance greater than 1 meter and less than 10 meters is fine.
2. The camera code (in Python) should determine if the object is currently in view.
3. If the object IS NOT currently in view, the camera should pan back and forth looking for the object.
4. If the object IS currently in view, the camera should turn to keep the object in the center of the frame.
5. When the stepper moves to turn the camera to center the object, the motion should occur with a zeta (damping ratio) of 0.7.
In order to accomplish this, I recommend that you do the following five steps in order. As you go, TEST EACH STEP alone, then test it working together with all of the previous steps. Don't move on to the next step until you have verified that the current step works correctly.
There are five steps, and five in-class days to work on them. Do your best to keep up with that schedule, and you should be ready to go by the time Finals Week arrives.
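Before you start Step 1, it may help to see where you're headed. Below is a minimal sketch of the top-level loop that the five steps build toward. Every function name in it (grab_frame, find_object, pid_speed, send_speed_dir) is a placeholder that you will write yourself in the steps below, not something provided by the course:

    # Skeleton only: grab_frame(), find_object(), pid_speed(), and
    # send_speed_dir() are placeholder names you will fill in during Steps 1-5.
    import time

    PAN_SPEED = 50        # constant search speed; units depend on your PSoC code
    PAN_PERIOD_S = 2.0    # how often to reverse direction while searching

    pan_dir = 1
    last_reverse = time.time()

    while True:
        frame = grab_frame()                  # your camera capture code
        found, x_pixel = find_object(frame)   # Step 5 narrows this to one object
        if found:
            speed, direction = pid_speed(x_pixel)   # Steps 2 and 3
        else:
            # Step 4: pan back and forth, reversing periodically so the
            # camera cable never winds up
            if time.time() - last_reverse > PAN_PERIOD_S:
                pan_dir = -pan_dir
                last_reverse = time.time()
            speed, direction = PAN_SPEED, pan_dir
        send_speed_dir(speed, direction)      # Step 1: UART to the PSoC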
Step 1: In this system, the PSoC code should be very simple: just wait until speed and direction values are received over UART, then make the stepper motor move at that speed and in that direction. We learned how to do that here and here.
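On the Python side, the hand-off to the PSoC is just a serial write. Here is a minimal sketch using the pyserial library; the two-byte packet format (one direction byte, one speed byte) and the port name are assumptions for illustration, so match whatever your PSoC code actually expects:

    import serial   # the pyserial package

    # Assumed packet format for illustration: one direction byte (0 or 1)
    # followed by one speed byte (0-255).
    psoc = serial.Serial('/dev/ttyUSB0', 115200, timeout=0.1)  # port name will vary

    def send_speed_dir(speed, direction):
        """Clamp the speed to one byte and send it with a direction flag."""
        speed_byte = max(0, min(255, int(abs(speed))))
        dir_byte = 1 if direction > 0 else 0
        psoc.write(bytes([dir_byte, speed_byte]))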
Step 2: For now, assume that the object to be tracked IS in view of the camera. In this case, we want to use the 'error' between the actual location of the object and the center of the frame to calculate a motor speed and direction, then send those values to the PSoC over UART. We already did this in a previous lab, and you can find that material here.
You don't yet know what gains to use for Kp, Ki, and Kd. When you first get this working, set up the code for full PID control, but start with Kp=1, Ki=0, and Kd=0. You will find better gains later.
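Here is one way the PID calculation might look in Python. This is only a sketch: the frame width, the global-variable bookkeeping, and the function name pid_speed are assumptions, and your own code from the previous lab may be organized differently:

    import time

    FRAME_WIDTH = 640              # assumption: set to your camera's resolution
    Kp, Ki, Kd = 1.0, 0.0, 0.0     # starting gains; you will retune Kp in Step 3

    _integral = 0.0
    _last_error = 0.0
    _last_time = time.time()

    def pid_speed(x_pixel):
        """Return (speed, direction) from the object's horizontal pixel position."""
        global _integral, _last_error, _last_time
        now = time.time()
        dt = max(now - _last_time, 1e-6)       # avoid dividing by zero
        error = x_pixel - FRAME_WIDTH / 2      # pixels away from frame center
        _integral += error * dt
        derivative = (error - _last_error) / dt
        output = Kp * error + Ki * _integral + Kd * derivative
        _last_error, _last_time = error, now
        return abs(output), (1 if output > 0 else -1)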
Once this step is complete, the camera should turn to follow the specific object you have chosen as your tracking object.
Step 3: For this challenge, you have been asked to tune the control gains so that your zeta (damping ratio) is 0.7. This value is often considered a good trade-off between speed and stability: there will be overshoot, but a small enough amount that most applications would consider it acceptable.

To get this specific damping ratio, first decide on your target tracking distance, because the behavior of your system depends on how far the tracking object is from the camera. Pick any distance between 1m and 10m. Now, hold the tracking object that far from the camera and do a 'step test': move the object to the edge of the frame as fast as you can, then wait until the camera has centered the object. While you are doing this, your Python code should be saving the 'error' in the object's position. We did this once before here. For this to work, the step response must overshoot. If you don't see overshoot, increase Kp and repeat the experiment until you DO see noticeable overshoot.
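A simple way to save the error is to append (time, error) samples to a list inside your control loop and write them out after the test. A minimal sketch (the file name is arbitrary):

    import csv, time

    log = []   # (timestamp, error) samples captured during the step test

    # inside the control loop, each time you compute `error`:
    #     log.append((time.time(), error))

    # after the test, save the response so you can measure the overshoot:
    with open('step_response.csv', 'w', newline='') as f:
        csv.writer(f).writerows(log)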
Now, use the methods we learned in class here to figure out what value of Kp should give you a zeta of 0.7. Finally, test the Kp value you calculated: capture another step response and confirm that the zeta really is 0.7. If it isn't, tweak the Kp value until you get the right zeta.
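If the methods from class boil down to the standard second-order relations, the calculation might look like the sketch below. The percent-overshoot formula for zeta is the standard one; the Kp rescaling additionally assumes your closed loop behaves like a standard underdamped second-order system, where zeta is proportional to 1/sqrt(Kp). Treat the rescaled gain as a starting point, not the final answer:

    import math

    def zeta_from_overshoot(peak_error, step_size):
        """Estimate zeta from the percent overshoot of a step response.

        Uses the standard second-order relation
        zeta = -ln(Mp) / sqrt(pi**2 + ln(Mp)**2), where Mp is the overshoot
        expressed as a fraction of the step size.
        """
        Mp = abs(peak_error) / abs(step_size)
        ln_mp = math.log(Mp)
        return -ln_mp / math.sqrt(math.pi**2 + ln_mp**2)

    def retuned_kp(kp_old, zeta_measured, zeta_target=0.7):
        """Rescale Kp assuming zeta is proportional to 1/sqrt(Kp)."""
        return kp_old * (zeta_measured / zeta_target) ** 2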
Step 4: Right now, when the object is not in view, the speed values calculated by your PID equation will be wrong: sometimes there will be a 'divide by zero' error, and sometimes the camera will try to turn toward nothing. Instead, you would like the Python code to KNOW when no object is in view and send velocity commands that make the camera rotate back and forth looking for the object, without winding up the camera cable.
One way to do this is to take the black-and-white image that shows only the object you are looking for and sum the values of this matrix. If the object is in view, the sum should be relatively large; if not, all of the pixels will be dark and the sum will be small (or even zero). In the no-object case, your code should skip the part where it finds the object's location, calculates a speed from the error, and sends that speed. Instead, pick a constant 'pan speed' and send that, changing the direction periodically.
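In Python, the in-view check can be a one-liner with numpy. The threshold value below is a placeholder; you will need to tune it for your object and your tracking distance:

    import numpy as np

    IN_VIEW_THRESHOLD = 500   # placeholder: tune for your object and distance

    def object_in_view(bw_image):
        """Sum the binary object image; a large sum means the object is in frame."""
        return int(np.sum(bw_image)) > IN_VIEW_THRESHOLD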
Step 5: We've already learned how to find the location of an object, but you may have to modify what we've done in the past so that you only find the specific object you are looking for. Convert the RGB image to grayscale, use thresholding to convert it to black-and-white, use masking to clean up noise, then apply the connected pixel algorithm to find the Object Matrix. We learned how to do that here.
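Here is a sketch of that pipeline using OpenCV. Note that cv2.connectedComponentsWithStats is standing in for the connected pixel algorithm from class, and the threshold value and kernel size are placeholders you will need to tune:

    import cv2
    import numpy as np

    def find_components(frame_bgr, thresh=128):
        """Grayscale -> threshold -> mask cleanup -> connected components."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        _, bw = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
        # morphological opening acts as a simple mask to remove speckle noise
        kernel = np.ones((5, 5), np.uint8)
        bw = cv2.morphologyEx(bw, cv2.MORPH_OPEN, kernel)
        # connectedComponentsWithStats labels each blob and reports its
        # bounding box, area, and centroid
        n, labels, stats, centroids = cv2.connectedComponentsWithStats(bw)
        return n, labels, stats, centroids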
Then you need to pick out the specific object you want to track. You can do that by extracting one or more features of each object and checking whether they fall within the ranges you specify for the target object. We learned all about features and how to find them here.
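As an example, if you use the stats array from connectedComponentsWithStats, you can screen each component by area and aspect ratio. The ranges below are made-up numbers; measure your own piece at your chosen tracking distance. A function like this could also serve as the find_object placeholder from the loop sketch near the top of this page:

    # made-up feature ranges: measure your own piece at your tracking distance
    AREA_RANGE = (800, 4000)      # component area in pixels
    ASPECT_RANGE = (0.8, 1.25)    # bounding-box width/height for a square piece

    def pick_target(stats, centroids):
        """Return the centroid x of the first component whose features fall
        inside the target ranges, or None if nothing qualifies."""
        for i in range(1, len(stats)):        # label 0 is the background
            x, y, w, h, area = stats[i]
            aspect = w / h if h else 0.0
            if AREA_RANGE[0] <= area <= AREA_RANGE[1] and \
               ASPECT_RANGE[0] <= aspect <= ASPECT_RANGE[1]:
                return centroids[i][0]        # horizontal pixel position
        return None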
Once you can find the location of only the object you specifically want to track, unaffected by other objects in view or by the color of the background, you are ready to move on.