ECE MEng student
rl592@cornell.edu - Ithaca, NY
Course : ECE 5960 - Fast Robots - Spring 2022
I use software PWM to control the LED duty cycle. In the loop function, the duty is updated once every 10 iterations.
if (cnt == 10) {
  cnt = 0;
  if (light_on) { duty++; }        // ramping up: brighten
  else { duty--; }                 // ramping down: dim
}
if (duty == 10) { light_on = 0; }  // reached full brightness, start dimming
if (!duty) { light_on = 1; }       // fully off, start brightening
cnt++;
digitalWrite(LED_BUILTIN, HIGH);   // turn the LED on (HIGH is the voltage level)
delay(duty);                       // stay on for duty ms (duty/1000 second)
digitalWrite(LED_BUILTIN, LOW);    // turn the LED off by making the voltage LOW
delay(10 - duty);                  // stay off for (10-duty) ms
Setup function: configure the UART pins and baud rate.
Loop function: read data from the UART port and echo it back.
First, initialize the ADC pin for temperature sensing and the UART port for data transmission.
Then the loop function reads the temperature from the ADC pin, sends it over UART, and repeats over and over again.
1. Initialize UART and start PDM interrupts.
2. Get audio data from pdmDataBuffer.
3. Perform FFT on audio data.
4. Output the loudest frequency.
1. Initialize UART and LEDPIN and start PDM interrupts.
2. Get audio data from pdmDataBuffer.
3. If the sum of audio data is above the threshold, set LED on, otherwise off.
Send an ECHO command with a string value from the computer to the Artemis board, and receive an augmented string on the computer.
char char_arr[MAX_MSG_SIZE];
// Extract the next value from the command string as a character array
success = robot_cmd.get_next_value(char_arr);
if (!success)
return;
tx_estring_value.clear();
tx_estring_value.append("Robot says -> ");
tx_estring_value.append(char_arr);
tx_estring_value.append(":)");
tx_characteristic_string.writeValue(tx_estring_value.c_str());
Serial.println(tx_estring_value.c_str());
On the Arduino side, the cmd_type of the robot command sent by the computer is extracted first; then the message of the robot command is extracted, stored, and appended to tx_estring_value. After that, the Artemis updates the tx_characteristic_string under "RX_STRING", where my computer, as the central device, can read it.
Send three floats to the Artemis board using the SEND_THREE_FLOATS command and extract the three float values in the Arduino sketch.
float f_a, f_b, f_c;
// Extract the next value from the command string as a float
success = robot_cmd.get_next_value(f_a);
if (!success)
return;
// Extract the next value from the command string as a float
success = robot_cmd.get_next_value(f_b);
if (!success)
return;
// Extract the next value from the command string as a float
success = robot_cmd.get_next_value(f_c);
if (!success)
return;
Serial.print("Three floats: ");
Serial.print(f_a);
Serial.print(", ");
Serial.print(f_b);
Serial.print(", ");
Serial.println(f_c);
break;
On the Arduino side, the cmd_type in the robot command sent by the computer is extracted first; then the three float values are extracted in order with the robot_cmd.get_next_value() function. These floats are then displayed on the serial monitor with Serial.print().
Setup a notification handler in Python to receive the float value from the Artemis board. In the callback function, store the float value into a (global) variable such that it is updated every time the characteristic value changes. This way we do not have to explicitly use the receive_float() function to get the float value.
First, define a global variable, float_value, and a notification_handler callback function in which float_value is updated via the ble.bytearray_to_float function. Then start the notification. Using asyncio.sleep from the asyncio library, we can monitor the value of float_value every time the RX_FLOAT characteristic is updated. A sketch of this is below.
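As a sketch (assuming the course's ble controller object with a start_notify method and the RX_FLOAT uuid key, as used above; the handler signature is my assumption):

import asyncio

float_value = None  # global variable updated on every notification

def notification_handler(uuid, byte_array):
    # Convert the characteristic's raw bytes to a float and store it globally
    global float_value
    float_value = ble.bytearray_to_float(byte_array)

# Subscribe so the callback fires whenever the RX_FLOAT characteristic changes
ble.start_notify(ble.uuid['RX_FLOAT'], notification_handler)

async def monitor(duration_s):
    # Poll the global while notifications update it in the background
    for _ in range(duration_s):
        print(float_value)
        await asyncio.sleep(1)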
Briefly explain the difference between the two approaches to receiving a float value.
A single float takes 4 bytes, while each character of a string takes 1 byte. When sending data from the Artemis to the computer, the fewer bytes sent, the more efficient the transmission. So when a value's string representation is shorter than 4 bytes (e.g. "9.0" is 3 bytes), the second approach is preferable, with the computer doing the remaining work of converting the string to a float. Otherwise (e.g. "3.14159" is 7 bytes as a string but 4 bytes as a float), the first approach is better.
Send a message from the computer and receive a reply from the Artemis board.
Note the respective times for each event, calculate the data rate.
I created a dataRate_handler callback function to calculate the time between a message being sent and being received by the computer. I also commented out the write_data function on the Artemis to ensure accuracy.
From this plot, we can estimate that the time consumed is roughly proportional to the number of bytes in a message, ranging from 0.2 s to 0.8 s.
What happens when you send data at a higher rate from the robot to the computer? Does the computer read all the data published (without missing anything) from the Artemis board?
On the Artemis, I set the interval to 0 in the write_data function, which means the data is updated at a very high rate. On my computer, I wrote another notification_handler to read and store the updated data from the bulletin board. This way I was able to record the number of messages sent by the robot and the number received by my computer. As seen in the GIF above, even though data is sent at a higher rate, my computer still reads all the data without missing anything.
>I think all the connections between the sensors and the Artemis should be detachable, so that if a sensor breaks I can still replace it.
>For ToF sensors, the wires should be connected from the back side of the infrared sensor chip, so it doesn't affect the distance detection.
>For IMU, the wires should be connected from the back of the flat side so that the sensor can be aligned with the x-y plane of the robot frame.
>IMU should be placed on the origin of xy-plane of the robot frame and parallel to the xy-plane.
>ToF sensors should be mounted on the front and rear of the car, enabling the robot to detect obstacles ahead and behind.
>The VL53L1X has three distance modes: short, medium and long. Their maximum distances in the dark are 136 cm, 290 cm and 300 cm respectively. Using short mode lets the robot get the most accurate distance in a short-range, dark environment, but it limits the robot's "visual" range so it cannot see farther obstacles and turn promptly. However, short mode provides the longest distance measurement under strong ambient light (135 cm) compared to the other two modes (76/73 cm), so short mode works best in strong light. The medium and long modes both offer a longer detection range but behave badly under strong light. Since a longer detection range takes longer to measure, the medium mode is preferable in the dark. To conclude, I think short mode is most suitable for our robots under strong light, and medium mode is most suitable in the dark and in normal environments.
I tested the ToF from 100 mm to 1300 mm. The accuracy of the ToF remained very high, and the standard deviation of the error was relatively small, fluctuating from 1.1 mm to 1.6 mm.
>In the setup function, I use two extra GPIO pins to power the two ToF sensors on and off. I first use pin 9 to turn on and initialize the front sensor while using pin 10 to shut down the rear sensor. Then I set the address of the front sensor to 0x10 and turn on and initialize the rear sensor. This way, the ToF sensors get different I2C addresses.
pinMode(9, OUTPUT);
pinMode(10, OUTPUT);
digitalWrite(9, HIGH);
digitalWrite(10, LOW);
if (frontSensor.begin() != 0) //Begin returns 0 on a good init
{
Serial.println("frontSensor failed to begin. Please check wiring. Freezing...");
while (1);
}
Serial.println("frontSensor online!");
frontSensor.setI2CAddress(0x10);
digitalWrite(10,HIGH);
if (RearSensor.begin() != 0) //Begin returns 0 on a good init
{
Serial.println("RearSensor failed to begin. Please check wiring. Freezing...");
while (1);
}
Serial.println("RearSensor online!");
>The loop function starts measurements on both sensors.
frontSensor.startRanging(); //Write configuration bytes to initiate measurement
while (!frontSensor.checkForDataReady()) delay(1); // delay() takes whole milliseconds, so delay(0.1) would truncate to 0
int frontDist = frontSensor.getDistance(); //Get the result of the measurement from the sensor
frontSensor.clearInterrupt();
frontSensor.stopRanging();
Serial.print("frontDistance(mm): ");
Serial.print(frontDist);
RearSensor.startRanging(); //Write configuration bytes to initiate measurement
while (!RearSensor.checkForDataReady()) delay(1); // delay() takes whole milliseconds
int rearDist = RearSensor.getDistance(); //Get the result of the measurement from the sensor
RearSensor.clearInterrupt();
RearSensor.stopRanging();
Serial.print("\trearDistance(mm): ");
Serial.print(rearDist);
Serial.println();
There are two types of infrared sensors: active and passive. Active infrared sensors are line-controlled detectors whose control range is a narrow, long, linearly distributed space. They also have the advantages of small size, light weight, low power consumption, easy operation and installation, and low price. However, since the lens surface of the optical system is exposed to the air, it is easily contaminated by dust and other debris. A passive infrared sensor emits no energy itself but passively receives and detects infrared radiation from the environment. When human-body infrared radiation arrives, the pyroelectric device generates a sudden change in electrical signal after the radiation is focused by the optical system, and an alarm is issued. In digital passive infrared sensors, the weak electrical signal output by the pyroelectric sensor is fed directly into a powerful microprocessor, and all signal conversion, amplification, filtering, etc. are carried out in one processing chip, improving the sensor's reliability. Cons: when the equipment is not firmly installed, when the temperature difference between indoors and outdoors is large, or when multiple sensors transmit signals at the same time, false alarms are easily triggered.
According to section 2.5.2 of the datasheet, increasing the timing budget increases the maximum distance the device can range and improves the repeatability error; however, the average power consumption increases accordingly. Since I chose long mode, I want the ToF range to be as large as possible while keeping the timing budget small. 140 ms is the timing budget that allows the maximum distance of 4 m to be reached in long-distance mode.
According to the datasheet's figure of maximum distance vs. timing budget, I tried 140 ms and 200 ms timing budgets in long-distance mode.
The above picture shows the test result with the timing budget set to 140 ms in long mode. Its measurement time, intermeasurement period and signal rate were around 96 ms, 100 ms and 2500, respectively.
The above picture shows the test result with the timing budget set to 200 ms in long mode. Its measurement time, intermeasurement period and signal rate were around 180 ms, 100 ms and 1000, respectively.
These tests show that while both timing budgets achieve the same long-distance measurement, the 140 ms budget yields higher-frequency data updates. I also noticed that the 200 ms budget is more prone to signal failures. So in the following experiments I will use long mode with a 140 ms timing budget.
I programmed the StatusandRate program to Artemis. After testing, I found that when the ToF distance measurement is relatively small (the obstacles ahead are relatively close), the measurement status is generally accurate, and the status is basically good.
When the ToF distance measurement is relatively large (the obstacles ahead are relatively far), the measurement status is generally accurate, and the status is basically good.
This result made me realize that when using the ToF for distance measurement, close-range readings are more reliable; at longer ranges, the ToF should be relied on less.
I connected the IMU to the Artemis board over I2C and ran Example1_Basics.
AD0_VAL is the value of the last bit of the I2C address. On the Sparkfun 9DoF IMU breakout the default is 1, and when the ADR jumper is closed the value becomes 0.
When AD0_VAL equaled 1, the Artemis could not find the IMU over I2C. When AD0_VAL equaled 0, it worked.
When the IMU was placed flat on my table, it measured 0 g on the x and y axes and +1 g on the z axis. Here are the accelerometer results when I rotated the IMU around the x and y axes. But when I rotated the IMU around the z axis, the accelerometer could not distinguish that rotation from the acceleration provided by Earth's gravitational pull: rotating about the gravity vector does not change the measured acceleration, so the accelerometer can only gauge the orientation of a stationary object relative to Earth's surface.
Here are the results of gyroscope when I rotated the IMU around the x, y and z axes. The gyroscope worked well around these axes.
I used atan2 and M_PI to compute pitch and roll.
float rollRad = atan2(sensor->accY(), sensor->accZ()) ;
float rollDegree = rollRad * 180 / M_PI ;
float pitchRad = atan2(sensor->accX(), sensor->accZ());
float pitchDegree = pitchRad * 180 / M_PI;
SERIAL_PORT.print("Roll: ");
printFormattedFloat(rollDegree, 5, 2);
SERIAL_PORT.print("\t\tPitch: ");
printFormattedFloat(pitchDegree, 5, 2);
When the actual angle was 90 and -90, my test result was:
pitch: [83,-83], roll: [90,-84]
So I scaled the original pitch value by 90/83, and used 1.0405 * Roll - 3.6416 to make the final roll value match the expected output.
I saved the accelerometer data with the CoolTerm software and performed an FFT in JupyterLab.
From these figures, I couldn't determine the cutoff frequency for the low-pass filter. So I performed an FFT on the accelerometer data recorded while rotating the IMU.
From this frequency-response plot, I estimate the cutoff frequency at around 10 Hz. My computer received 297 samples in 9.5 seconds, so the sampling rate is 297/9.5 ≈ 31.26 Hz. Given fc = 1/(2πRC), T = 1/sample_rate and alpha = T/(T+RC), I get T = 0.0320 s, RC = 1/(20π) ≈ 0.0159 s, and alpha = 0.668. Using these parameters, worked through below, I am able to design a low-pass filter:
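Worked out:

\[
RC = \frac{1}{2\pi f_c} = \frac{1}{20\pi} \approx 0.0159\,\mathrm{s},\qquad
T = \frac{9.5}{297} \approx 0.0320\,\mathrm{s},\qquad
\alpha = \frac{T}{T+RC} = \frac{0.0320}{0.0320+0.0159} \approx 0.668
\]

and the resulting filter update:

\[
\theta_{lpf}[n] = \alpha\,\theta_{raw}[n] + (1-\alpha)\,\theta_{lpf}[n-1]
\]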
Alpha determines how much the LPF trusts the new sample: the higher the alpha, the more high-frequency content the filtered signal retains.
I can monitor the values of original roll, filtered roll from accelerometer and original roll from gyroscope in Serial Plotter.
I can also monitor the values of original pitch, filtered pitch from accelerometer and original pitch from gyroscope in Serial Plotter.
I found that roll and pitch from accelerometer have more high frequency content, while roll and pitch from gyroscope change more smoothly.
The roll and pitch from the gyroscope depend on the initial placement angle of the IMU and have errors that accumulate over time.
When I lowered the sampling frequency, the roll and pitch errors from the gyroscope grew larger.
After applying a complementary filter, we can see from the figure that the angles (in purple) have far fewer spikes, become relatively stable, and no longer drift.
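For reference, the complementary-filter update has the standard form (my notation; alpha weights the accelerometer angle against the integrated gyro rate):

\[
\theta[n] = (1-\alpha)\left(\theta[n-1] + \dot{\theta}_{gyro}\,dt\right) + \alpha\,\theta_{acc}[n]
\]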
I used the equation given in the lab description to compute yaw from the magnetometer. The yaw was 0 when I pointed the x axis of the IMU north, and it changed as I rotated the IMU around the z axis. I also noticed that the yaw was robust to small changes in pitch.
To let the Artemis be powered by a battery, I replaced the battery's black male connector with the white 2 mm JST connector and made sure the connector was polarized correctly. The Artemis then worked while powered by the 600 mAh battery.
By measuring the car with a tape measure, I got the dimension of the car: [14.3cm 17.8cm 7.8cm].
In order to estimate the range of the car's speed, I attached a ToF sensor to the front of the car with double-sided tape to measure the distance to obstacles ahead. I then set up the Artemis to send the ToF distance measurements to the computer over Bluetooth, so I could monitor and store the distance data while speeding up the car with the remote. Since the speed range is battery-dependent, I used a fully charged battery for the test. On the computer end, I set up a notification handler to store the distance data with timestamps. Then I positioned the car about 4 meters from the door and sped it up until it hit the door (with pillows for cushioning).
By taking the derivative of the distance with respect to time, I got a plot of the car's speed. In this plot we can see the desired acceleration followed by a deceleration caused by hitting the pillows, and we can estimate the speed range as -3.7 to 3.7 m/s (the car can move in both directions).
Since we already have the plot of the car's speed, we can estimate the car's acceleration by differentiating it again. According to the plot below, the acceleration of the car ranges from about -7 m/s^2 to 7 m/s^2. The sketch below shows this post-processing.
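A minimal sketch, assuming the notification handler stored the timestamps and readings in lists t (seconds) and dist (mm):

import numpy as np

t = np.array(t)                  # timestamps, s
x = np.array(dist) / 1000.0      # ToF readings, mm -> m

speed = np.gradient(x, t)        # d(distance)/dt, m/s
accel = np.gradient(speed, t)    # d(speed)/dt, m/s^2

print("speed range (m/s):", speed.min(), speed.max())
print("accel range (m/s^2):", accel.min(), accel.max())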
1. What pins will you use for control on the Artemis?
>I will use A0, A1 for left motor driver and A2, A3 for right motor driver.
2. We recommend powering the Artemis and the motor drivers/motors from separate batteries. Why is that?
>Because the motor drivers draw large currents while the Artemis uses small digital signals, separating the two power supplies protects the Artemis's digital signals from being disturbed by the motor-drive current.
1. Connect the necessary power and signal inputs to run one motor driver from the Artemis.
What are reasonable settings for the power supply?
>I looked through the datasheet of the motor driver and found the recommended DC output current per bridge, so I set the external current supply to 1 A.
2. Use analogWrite commands to generate PWM signals and show that you can regulate the power on the output for the motors.
>I used analogWrite commands to change the duty cycle of the PWM and monitored the motor driver's output on an oscilloscope. From the oscilloscope, I could see the duty cycle changing, and the PWM frequency was around 183 Hz.
3. 4. 5.Place your car on its side, such that the spinning wheels are elevated, and show that you can run the motor in both directions.
6. Repeat the process for the second motor and motor driver.
7.Install everything inside your car chassis, and try running the car on the ground. Here's how I hook up my car.
8. Explore the lower limit for which each motor still turns while on the ground.
>I gradually increased the duty cycle; at first the robot car stood still and made a vibration-like noise. When the duty cycle reached 15%, the car started to move.
9. If your motors do not spin at the same rate, you will need to implement a calibration factor. To demonstrate that your robot can move
in a fairly straight line, record a video of your robot following a straight line (e.g. a piece of tape) for at least 2m/6ft.
>The robot should start centered on the tape, and still partially overlap with the tape at the end.
Before calibration, the car would turn slightly to the left, so I scaled the output duty cycle of the right motor by 0.95. The robot was then able to move in a straight line.
10. Demonstrate open loop, untethered control of your robot - add in some turns.
>I first wrote a motor_control function to drive the two motors. In the function, I clamp the duty-cycle value to [-255, 255], since the resolution of the analogWrite output is 8-bit, and then drive the left and right motors respectively: forward if the input is positive, and backward if negative.
In the open-loop test, the robot first moves forward for 1 second with both PWM duty cycles set to 78%. Then the robot turns left at a large angular speed by setting the left duty cycle to -100% and the right to 100%.
1. Consider what frequency analogWrite generates. Is this adequately fast for these motors? Can you think of any benefits to manually configuring
the timers to generate a faster PWM signal?
>From part 2, on the oscilloscope I could see that the PWM duty cycle was changing and that the PWM frequency was around 183 Hz. This means the motor control signal can change its value within about 5 ms, which is much faster than the motor's response time and fast enough to control the motor. So generating a faster PWM signal would not bring much benefit.
2. Write a program that ramps up and down in speed slowly. Reporting the values to your computer using Bluetooth either during operation or when
your ramp up/down procedure is over. Use this setup to document accurately what range of speeds you can achieve.
>Since the maximum effective range of the ToF is 4 meters, I judged that this distance would not let the car complete an entire ramp up/down, so I instead set the duty cycle, from small to large, and ran multiple experiments within the 4-meter range.
I designed the experiment to send a duty-cycle command from the computer over Bluetooth; the car moves forward and returns a ToF distance measurement every 300 ms (per Lab 2, the distance-measurement timing budget is 140 ms, so 300 ms is enough for the car to obtain distance data), and the car stops after moving forward 4 meters. The distance sequence is then differentiated on the computer to obtain the car's speed.
Here are the representative plots of distance and speed for duty cycles of 23.52%, 31.37%, 39.21%, 54.90%, 70.58% and 86.27%.
Observing the last plot, I find that when the duty cycle is large enough, the car's speed cannot reach its maximum within 4 meters. This limitation prevented me from estimating the full range of speeds across duty cycles, but I could get an approximate one, shown in the plot below.
1. Have the robot controller start on an input from your computer sent over Bluetooth.
I added a START command to start/pause the PID computation and the motor drivers.
2. I added a command to send the PID parameters, so the PID controller can be tuned in real time without re-flashing the code. Upon completion of the behavior, the debugging data is sent back to the computer over Bluetooth.
3. Using the EString class, I was able to pass useful information, such as the initialization status, to the computer via Bluetooth.
I used the Arduino PID library to design the distance controller. I set its output limits to -255 to 255; taking the front ToF reading as input and the motor duty cycle as output, it realizes negative-feedback control of distance. One reason I chose this library is that its built-in functions already handle the derivative-kick problem. In addition, I designed a simple z-axis angular-velocity PID controller with the gyroscope reading as input and a motor duty cycle as output. This controller's output is superimposed on the left and right motor duty cycles, one positive and one negative, to ensure the robot runs straight (it also serves as the calibration factor for both motors).
By logging all the data (time-stamped sensor values and motor outputs), I calculated that the average loop execution time is about 323 ms.
I wrote a scaling function that converts the PID controller's output into a range the motors can actually react to, and I set the deadband of both motors to -30 to 30.
When I tried Task A, at first the motor was very slow to respond. I eventually found that, in handling the coasting and active-braking modes, I had set active braking to be used when the duty cycle is 0. Once I switched that case to coasting, the problem was solved.
The robot starts 2 to 4 meters away from the obstacle, and the target distance is 300 mm, so the largest initial error is -3700 mm. Given the motor scaling function, the maximum duty cycle is 225, so an upper bound on the proportional gain is Kp = 225/3700 ≈ 0.06. The actual Kp must be smaller than this, but of the same order of magnitude, so I started tuning from 0.01.
According to Lab 3, to make the ToF range as large as possible while keeping the timing budget small, 140 ms is the timing budget that allows the maximum distance of 4 m to be reached in long-distance mode. Therefore I set the ToF to long-distance mode with a 140 ms timing budget.
Because the integral term of a PID controller always introduces overshoot, which increases the risk of the robot hitting the wall, I use PD control: Kp adjusts the robot's response speed, and Kd suppresses overshoot. I also found that different initial distances require different PD parameters; the farther the distance, the more Kp needs to be reduced, otherwise the robot will hit the wall.
Kp = 0.025, Kd = 0.015: As shown in the figures below, the controller effectively drives the robot to the 300 mm position, with a maximum speed of 1.15 m/s.
Kp = 0.028, Kd = 0.015: As shown in the figure below, the robot reaches the 300 mm position more quickly and the maximum speed increases to 1.67 m/s, but increasing Kp significantly increases the overshoot.
Kp = 0.020, Kd = 0.010: As shown in the figures below, the controller effectively drives the robot to the 300 mm position, with a maximum speed of 1.19 m/s.
Kp = 0.022, Kd = 0.012: As shown in the figure below, the robot reaches the 300 mm position more quickly and the maximum speed increases to 1.70 m/s, but increasing Kp significantly increases the overshoot.
Kp = 0.022, Kd = 0.012: As shown in the figures below, the controller effectively drives the robot to the 300 mm position, with a maximum speed of 2.176 m/s.
Through this series of parameter adjustments, I was able to design a reasonable PD controller that drives the robot from 2 to 4 meters away from the wall to the 300 mm position. I found that the maximum speed the robot can achieve is largely affected by the initial position: the farther from the wall, the greater its maximum speed.
Considering that the Kalman Filter will be combined with PID control in future experiments, I will keep the scaling function for motor output when doing the step response test.
First I set the motor input to 70; here are the ToF readings, calculated speed and motor input.
From the figures above, we can see that the steady-state speed is around 1500 mm/s. Given the motor input of 70, d = u/v = 70/1500 ≈ 0.04667.
Then I measured the 90% rise time, which was 784 ms (0.784 s); therefore m ≈ 0.01589, worked out below.
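Putting the numbers together, using the first-order drag/momentum model from lecture (u is the motor input, x the distance):

\[
d = \frac{u}{\dot{x}_{ss}} = \frac{70}{1500} \approx 0.04667,\qquad
m = \frac{-d\,t_{0.9}}{\ln(0.1)} = \frac{0.04667 \times 0.784}{2.3026} \approx 0.01589
\]

which gives the state-space model

\[
\begin{bmatrix} \dot{x} \\ \ddot{x} \end{bmatrix}
=
\begin{bmatrix} 0 & 1 \\ 0 & -d/m \end{bmatrix}
\begin{bmatrix} x \\ \dot{x} \end{bmatrix}
+
\begin{bmatrix} 0 \\ 1/m \end{bmatrix} u
\]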
First, I reasoned about ballpark numbers for the process and measurement variances. Their relative values determine how much you trust your model versus your sensor, and the process noise depends on the sampling rate. I averaged the sampling rate and got 10.21 ms per sample.
In order to implement the Kalman Filter on the computer, I define a kf function whose inputs are the previous mu, the previous sigma, u (input), y (measurement), Ad (discrete A matrix), Bd (discrete B matrix) and the C matrix.
I then loop through all of the data from my pre-recorded run, calling this kf function at each step. A sketch is below.
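A minimal numpy sketch of that function; passing the noise covariances Sigma_u (process) and Sigma_z (measurement) as arguments is my choice of interface:

import numpy as np

def kf(mu, sigma, u, y, Ad, Bd, C, Sigma_u, Sigma_z):
    # Prediction: push the previous state estimate through the model
    mu_p = Ad.dot(mu) + Bd.dot(u)                     # predicted state (2x1)
    sigma_p = Ad.dot(sigma.dot(Ad.T)) + Sigma_u       # predicted covariance

    # Update: weigh the ToF measurement y against the prediction
    sigma_m = C.dot(sigma_p.dot(C.T)) + Sigma_z       # innovation covariance (1x1)
    K = sigma_p.dot(C.T).dot(np.linalg.inv(sigma_m))  # Kalman gain (2x1)
    mu_new = mu_p + K.dot(y - C.dot(mu_p))
    sigma_new = (np.eye(2) - K.dot(C)).dot(sigma_p)
    return mu_new, sigma_new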
After the first run, I get the following plot. The KF estimate in the figure is almost identical to the ToF reading, indicating that the Kalman filter trusts the ToF reading very much; I should increase the measurement noise Sigma_z.
Next I increased the measurement noise from 40 to 400; below is the plot I get. In this figure, the KF waveform is much smoother than the previous one and closer to the actual state than the ToF reading (because the KF sampling rate is higher than the ToF acquisition frequency, the ToF curve looks a little more discrete). This shows the Kalman Filter is less confident in the ToF measurement than before and better balances the expected and actual measurements.
First, I included the BasicLinearAlgebra library and wrote the Kalman filter function according to part 3.
In each iteration, the typical loop time is 0.005-0.03 s (5-30 ms). When assigning Delta_t, because the value is so small and the matrix precision is limited, the Delta_t entry became 0. So I modified the library and changed the data type of all matrices from float to double, which solved the problem.
Then I found that mu and Sigma both became NaN after the second iteration. It took me a long time to discover that the inversion of a 1x1 matrix in the BasicLinearAlgebra library incorrectly returns 1. Fortunately, the matrix to invert is exactly 1x1, so I can simply divide by sigma_m when computing the Kalman gain instead of inverting it.
Here is the plot of ToF reading and KF estimate. Orange: KF estimate; Blue: ToF reading.
Then I found that although the KF estimate eventually approaches the ToF reading, it first rises to around 2500. I wondered whether I had initialized the KF sigma too large, since sigma represents the uncertainty of the estimate, so I made two additional attempts, setting sigma to a very large value and a very small value.
But the results were even worse, so it wasn't an initialization problem. I then realized that Delta_t must be computed in each KF iteration, and the Delta_t computed the first time may be too large: on the first iteration, Delta_t is the current time (from millis()) minus the previous time acquired from millis() in the setup function. So I made a modification: in the first KF iteration, run through all the Kalman filter calculations except updating mu and sigma, just to record the previous time; the remaining iterations are unchanged. With this modification the problem was solved. Here are the plots of the ToF readings and the KF estimate; I also zoomed in on a section where the filtering effect of the KF estimate can be better observed.
Using the Arduino PID library, I created a PID controller that lets the robot do on-axis turns in small, accurate increments: 20 readings (18-degree increments) per 360-degree rotation.
As shown in the figure above, the yaw value acquired from IMU was tracking the value of angle_set. And it's sufficient to perform the orientation control.
In order to minimize the noise of the IMU sensor, during initialization (while the robot is static) I record the z-axis gyroscope value a thousand times and take the average as the offset, which makes my angle information more accurate. From my many attempts: over one full rotation, the angle measurement has a maximum error of 10 degrees. When the robot rotates a full circle under orientation control it also drifts, in some cases ending up 0.5 ft (0.1524 m) away from the origin. Therefore, in the worst case there will be an error of 0.67 m, but generally this is not the case.
After sending a new angle command, I wait a short while to let the robot take a ToF reading and send the distance data back over Bluetooth. After testing, I set the waiting time to 2.5 seconds.
Sanity Check:
I put the robot in a corner of my apartment and had it take ToF readings under orientation control. As shown in the picture below, the plotted shape matches the shape of the wall corner, so the program should work.
I placed the robot on five points of the lab, and executed the ToF reading under orientation control.
By taking the IMU pose in the robot frame and applying transformation matrices, I plotted all of the ToF sensor readings in a single plot.
The errors in the map I got mainly exist near the two largest boxes. I corrected some points according to the position and side length of the actual boxes.
The simulator provides a simulated robot and a map, and the robot also displays its distance-sensor data. We can control the robot's linear and angular velocity with the arrow keys and stop it with the space key. More importantly, we can control the robot through the Commander class: we can obtain the robot's information through get_pose and get_sensor to implement functions such as localization, and there is a set_vel function to control the robot's linear and angular velocity. At the same time, we can use Plotter functions such as plot_odom, plot_gt and plot_bel to monitor information such as odometry, ground-truth pose and belief in the plotter, which lets us verify the effectiveness of our algorithms.
To make the robot execute a "square" loop, I make the robot go straight, then turn 90 degrees in place, and repeat this four times. When going straight, I set the linear velocity to 0.5 m/s for 20 iterations of a for loop; when rotating in place, I set the angular velocity to 0.1 rad/s for 47 iterations. A sketch follows.
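A sketch using the Commander's set_vel; the per-iteration delay is my assumption (roughly what makes 47 iterations at 0.1 rad/s add up to a 90-degree turn):

import time

for side in range(4):
    for _ in range(20):          # go straight at 0.5 m/s
        cmdr.set_vel(0.5, 0.0)
        time.sleep(0.33)
    for _ in range(47):          # rotate in place at 0.1 rad/s (~90 degrees total)
        cmdr.set_vel(0.0, 0.1)
        time.sleep(0.33)
cmdr.set_vel(0.0, 0.0)           # stop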
(a) As long as the robot executes the rotate-in-place routine whenever it approaches an obstacle, it can avoid the obstacle no matter how many degrees it rotates, so the acceptable range of this angle is fairly broad. In my program the robot rotates about 50 degrees.
(b) From my attempts in the simulation environment, the robot can avoid obstacles no matter how fast it moves; I even tried 100 m/s and it worked well, so this may be a defect of the simulator.
(c) The logic of this closed-loop control is that when the distance sensor returns a reading below a certain threshold, the robot backs up and rotates again (see the sketch after this list). How close the robot can get to an obstacle therefore depends on the sensor data and the choice of threshold. Since the distance sensor is noisy, the threshold needs to be set conservatively; I set it to 0.3 m, i.e. when the robot is less than 0.3 m from an obstacle, it backs up and rotates.
(d) Yes, my code always works.
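A sketch of the loop in (c), assuming cmdr.get_sensor() returns the front distance in meters; the velocities and timings are illustrative:

import time

THRESHOLD = 0.3                      # m, conservative because the sensor is noisy

while True:
    dist = cmdr.get_sensor()
    if dist < THRESHOLD:
        cmdr.set_vel(-0.5, 0.0)      # back up
        time.sleep(0.5)
        cmdr.set_vel(0.0, 2.0)       # rotate in place (~50 degrees at 2 rad/s)
        time.sleep(0.44)
    else:
        cmdr.set_vel(1.0, 0.0)       # drive forward
    time.sleep(0.05)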
Odometry Motion Model
delta_rot_1, delta_trans and delta_rot_2 are the three parameters that need to be calculated for the following steps; they are computed according to the equations given in Lecture 18.
Then the compute_control function above is used twice to get the actual control actual_u and the expected control expected_u. Combining these controls with Gaussian distributions parameterized by rot_sigma and trans_sigma yields the transition probability, sketched below.
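A sketch of that computation, assuming the lab skeleton's loc.gaussian helper and the noise parameters odom_rot_sigma / odom_trans_sigma (angle differences are assumed already normalized):

def odom_motion_model(cur_pose, prev_pose, u):
    # u = (rot1, trans, rot2): the expected control extracted from odometry
    rot1, trans, rot2 = compute_control(cur_pose, prev_pose)  # actual control
    # Probability that the noisy odometry control u produced this transition
    p1 = loc.gaussian(rot1, u[0], loc.odom_rot_sigma)
    p2 = loc.gaussian(trans, u[1], loc.odom_trans_sigma)
    p3 = loc.gaussian(rot2, u[2], loc.odom_rot_sigma)
    return p1 * p2 * p3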
With the functions above, it is sufficient to implement the prediction step by several for loops.
However, this naive prediction step needs 1944 × 1944 iterations and takes a very long time (over a minute) to finish. To simplify it and save time, I first loop through every belief from the previous step and check whether its value is above a threshold; if it is, I loop through every pose at the current time and add the product of the previous belief and the corresponding transition probability to the current belief_bar.
The result of this simplified prediction step is surprisingly good, and it finishes within a second. A sketch is below.
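A sketch over the 12 × 9 × 18 grid (1944 cells), with helper names following the lab skeleton:

import numpy as np

def prediction_step(cur_odom, prev_odom):
    u = compute_control(cur_odom, prev_odom)        # expected control
    loc.bel_bar = np.zeros((12, 9, 18))
    for px in range(12):
        for py in range(9):
            for pa in range(18):
                if loc.bel[px, py, pa] < 0.0001:    # skip negligible beliefs
                    continue
                prev_pose = mapper.from_map(px, py, pa)
                for cx in range(12):
                    for cy in range(9):
                        for ca in range(18):
                            cur_pose = mapper.from_map(cx, cy, ca)
                            loc.bel_bar[cx, cy, ca] += (
                                odom_motion_model(cur_pose, prev_pose, u)
                                * loc.bel[px, py, pa])
    loc.bel_bar /= np.sum(loc.bel_bar)              # normalize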
The update step loops through every cell of the configuration space. At each cell, it takes the actual sensor readings (self.obs_range_data) and the expected sensor readings (self.mapper.get_views(cx, cy, ca)). Assuming the individual measurements are independent given the robot state, the likelihood of the measurements is the product of the Gaussians of those measurements, and the belief at each cell is that likelihood times belief_bar at the cell. A sketch is below.
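A sketch, assuming 18 rays per cell and a sensor noise parameter sensor_sigma from the skeleton:

import numpy as np

def update_step():
    for cx in range(12):
        for cy in range(9):
            for ca in range(18):
                views = mapper.get_views(cx, cy, ca)   # expected ranges
                prob = 1.0
                for i in range(18):
                    # measurements are independent given the pose
                    prob *= loc.gaussian(loc.obs_range_data[i], views[i],
                                         loc.sensor_sigma)
                loc.bel[cx, cy, ca] = prob * loc.bel_bar[cx, cy, ca]
    loc.bel /= np.sum(loc.bel)                         # normalize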
Using the code skeleton in lab11.ipynb, the Bayes filter runs quickly and performs successfully.
As shown in the table and the plot, the trajectories of the ground truth and the belief are very close. Relying only on odometry information and sensor readings, this Bayes Filter is able to estimate the pose of the robot.
Here's the result of simulation. As shown in the figure below, the trajectory of the belief (blue) was very close to the ground truth (green) which demonstrated that the Bayes Filter Localization worked well.
(a) Implement the function perform_observation_loop. To take advantage of the orientation control implemented in Lab 9, we modified the perform_observation_loop function, adding the command that sends the angle over Bluetooth and the command that collects the ToF data in a for loop.
(b) Localization at four marker poses
(1) -3 ft ,-2 ft ,0 deg (-0.9144 meter,-0.6096 meter ,0 deg)
As shown in the figure below, the belief (blue) after the update step was (-0.914, -0.610, 10), the ground truth (green) was (-0.9144, -0.6096, 0), and the pose error was (-0.0004, 0.0004, -10). The belief and ground-truth points coincide completely in the plotter, which shows that the pose estimate is quite accurate.
(c) Discussion
The localization estimates at the four marker poses are accurate: the first two points are almost exact, and the last two points have some error in the x direction. The estimates at all four poses have a 10-degree error in the angle. We think the reason the last two poses are less accurate than the first two may be that they are closer to the large box in the middle; since there is an error in the robot's orientation control, the rotation angle near the box is less accurate and the error is higher. In addition, since the ToF sensor sits at the front of the robot, it does not measure the exact distance from the robot's center to the walls/obstacles, which introduces some error. However, when we added 0.08 meters (the distance from the ToF to the robot's center) to all ToF readings, the result remained the same.
Another possibility is that the actual box position is inconsistent with the box position in the map used by the precached true measurements.
Steps for the robot to move to a waypoint:
(1) First find the robot's current pose using localization.
(2) Based on the current pose and the goal, calculate the angle the robot needs to turn and the distance it needs to travel (see the equations after this list).
(3) Turn to the desired angle using orientation control, then move forward to the desired position using distance control.
(4) After the robot moves to the new position, localize again. If the pose estimate is more than one tile away from the target waypoint, the robot is considered not to have arrived, and steps (1)-(4) repeat until it reaches the target. If the pose estimate is within one tile of the target waypoint, the visit is considered a success; the robot then selects a new waypoint as the next goal and repeats steps (1)-(4) until all waypoints are visited.
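The turn angle and travel distance in step (2) follow from the pose difference between the goal (x_g, y_g) and the current estimate (x, y, θ):

\[
\Delta\theta = \operatorname{atan2}(y_g - y,\; x_g - x) - \theta, \qquad
d = \sqrt{(x_g - x)^2 + (y_g - y)^2}
\]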
The capabilities required to accomplish this task:
(1) Orientation control: The robot can be controlled by PID and turn to any angle which was accomplished by Lanyue Fang in Lab 8.
(2) Mapping: Bayes Filter localization which was accomplished in Lab 12 by the team.
(3) Distance control: Given the current ToF reading and distance that it needs to travel, the robot can first calculate
the target distance from the front wall to itself and then use PID to control the robot to the desired position. This task was finished by Ruohan Liu in Lab 6.
The videos below are our tests for the capabilities above.
The core function is shown above which enables the robot to navigate to all the waypoints in order.
At the first time step, we manually place the robot at one of the waypoints, so we assume the robot knows its initial pose, skip localization at the first waypoint, and start moving to the remaining waypoints. The program is designed as a while loop: every time the robot tries to navigate to the goal, it first performs Bayes Filter localization by calling the function get_cur_pose() shown below. This function commands the robot to perform the observation loop and runs the update step of the Bayes Filter localization.
After getting the pose estimate from the Bayes Filter localization, the robot calculates the angle and distance required to reach the goal by calling the function loc.compute_control(), which returns rotation1, translation and rotation2, as shown below. We only use rotation1 and translation, since the robot has no target angle at each waypoint.
Having the required angle to turn and distance to travel, the robot first calls the function rotation to turn to that angle and then calls the function translation to move.
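Putting the pieces together, a sketch of the core loop (within_one_tile and initial_pose are hypothetical names for the one-tile arrival check and the known starting pose):

cur_pose = initial_pose                          # robot starts on a known waypoint
for goal in waypoints[1:]:
    while not within_one_tile(cur_pose, goal):   # hypothetical 1-tile check
        rot1, trans, _ = loc.compute_control(goal, cur_pose)
        rotation(rot1)                           # turn via orientation PID
        translation(trans)                       # move via distance PID
        cur_pose = get_cur_pose()                # re-localize (Bayes update)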
This whole process works as described in Section 1. And here is the video of actual navigation and plots of robot's trajectory.
As shown in the figure above, the robot can navigate based entirely on ToF readings and the IMU. Although the robot's trajectory does not pass through all the waypoints perfectly, considering the limited configuration space of the Bayes Filter localization (which causes a large error in the angle estimate) and that only the IMU provides odometry information, this result is still very satisfactory.
For global path planning, we first create a roadmap with waypoints as nodes and edges and format the roadmap into an adjacency matrix.
Then we use Dijkstra's algorithm to find the shortest path between the current pose and the goal. It picks the unvisited vertex with the lowest distance, calculates the distance through it to each unvisited neighbor, and updates the neighbor's distance if smaller. Below is the Dijkstra algorithm we use.
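A standard adjacency-matrix Dijkstra consistent with this description (adj holds edge lengths, 0 meaning no edge; node numbering follows the roadmap):

def dijkstra(adj, src):
    n = len(adj)
    dist = [float('inf')] * n       # tentative distances from src
    prev = [None] * n               # back-pointers for path reconstruction
    dist[src] = 0
    visited = [False] * n
    for _ in range(n):
        # pick the unvisited vertex with the lowest tentative distance
        u = min((i for i in range(n) if not visited[i]), key=lambda i: dist[i])
        visited[u] = True
        for v in range(n):
            if adj[u][v] > 0 and not visited[v] and dist[u] + adj[u][v] < dist[v]:
                dist[v] = dist[u] + adj[u][v]   # shorter path through u
                prev[v] = u
    return dist, prev

# e.g. the path from node 8 to node 6, reconstructed from the back-pointers
dist, prev = dijkstra(adj, 8)
path, node = [], 6
while node is not None:
    path.append(node)
    node = prev[node]
path.reverse()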
We put the robot at the node 8, and try to use the roadmap and Dijkstra's algorithm to find a path towards node 6.
Then we feed the roadmap's adjacency matrix to Dijkstra's algorithm and choose node 6 as the destination, obtaining the indices of the waypoints on the shortest path generated by Dijkstra.
And the path can be visualized in the figure below.
Next, with the waypoints generated by the global planner, we can send them to the navigation function from Section 2 and test. Below is the video of our test.
As shown in the video, the robot can find a path from the current pose to the other waypoint around the box.