The purpose of this lab is to implement grid localization using a Bayes filter in a simulation environment. Before starting Lab 10, I went through the lecture slides to familiarize myself with how the Bayes filter works and why it is beneficial for our situation. The Bayes filter consists of two main steps: a prediction step and an update step. It takes a prior belief distribution bel(x_{t-1}), a control input u_t, and a measurement z_t, and calculates the posterior belief distribution bel(x_t) over the current state. Line 3 of the algorithm (as presented in the lecture slides) performs the prediction step by calculating the belief before incorporating measurements, while line 4 performs the update step by incorporating the measurement likelihood and normalizing the distribution with the factor η.
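The two steps can be sketched on a toy discrete state space. This is a minimal illustration of the algorithm, not the lab code: the three-state world, the transition matrix, and the measurement likelihoods below are all made up for demonstration.

```python
import numpy as np

def bayes_filter_step(bel_prev, p_trans, p_meas):
    """One iteration of the discrete Bayes filter.

    bel_prev: prior belief bel(x_{t-1}) over N states, shape (N,)
    p_trans:  motion model p(x_t | x_{t-1}, u_t), shape (N, N),
              indexed as p_trans[x_new, x_old]
    p_meas:   measurement likelihood p(z_t | x_t), shape (N,)
    """
    # Prediction step (line 3): integrate the motion model over the prior
    bel_bar = p_trans @ bel_prev
    # Update step (line 4): weight by the measurement likelihood,
    # then normalize (this division is the factor eta)
    bel = p_meas * bel_bar
    return bel / bel.sum()

# Toy 3-state world: the robot starts in state 0, and the control
# moves it one state to the right with probability 0.8 (stays with 0.2).
bel_prev = np.array([1.0, 0.0, 0.0])
p_trans = np.array([[0.2, 0.0, 0.0],
                    [0.8, 0.2, 0.0],
                    [0.0, 0.8, 1.0]])
p_meas = np.array([0.1, 0.9, 0.1])  # sensor strongly suggests state 1
bel = bayes_filter_step(bel_prev, p_trans, p_meas)
```

After one iteration the belief concentrates on state 1, where the motion prediction and the measurement agree.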
To implement the Bayes filter in the simulation environment, we had to complete five main functions for the prediction and update steps.
The first function to implement was compute_control(). This function takes in a current pose and a previous pose and calculates the three odometry control values: rotation_1, translation, and rotation_2. I used the equations given in the lecture slides to compute these values from the two poses, and made sure to normalize the two rotations so that they fall within the -180° to 180° range.
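A sketch of this decomposition, assuming poses are (x, y, θ) tuples with θ in degrees (the helper name normalize_angle is my own; the exact equations come from the lecture slides):

```python
import math

def normalize_angle(a):
    """Wrap an angle in degrees into [-180, 180)."""
    return (a + 180) % 360 - 180

def compute_control(cur_pose, prev_pose):
    """Decompose the motion between two poses into
    (rotation_1, translation, rotation_2)."""
    dx = cur_pose[0] - prev_pose[0]
    dy = cur_pose[1] - prev_pose[1]
    # Turn to face the new position, drive straight, then turn to the new heading
    rot1 = normalize_angle(math.degrees(math.atan2(dy, dx)) - prev_pose[2])
    trans = math.hypot(dx, dy)
    rot2 = normalize_angle(cur_pose[2] - prev_pose[2] - rot1)
    return rot1, trans, rot2
```

For example, moving from (0, 0, 0°) to (1, 1, 90°) decomposes into a 45° turn, a √2 translation, and another 45° turn.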
The second function to implement was odom_motion_model(). This function takes in a current pose, a previous pose, and a control input u, and outputs p(x'|x, u): the probability of reaching a state given a prior state and an action. Each of the three motion parameters is modeled as a Gaussian distribution centered at the corresponding component of u. The function first extracts the three parameters of the actual transition by calling compute_control(), then uses loc.gaussian() to calculate each individual parameter probability. The output is the product of these three probabilities.
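A self-contained sketch of this model follows. The gaussian() helper stands in for loc.gaussian() from the simulation framework, and the two noise values are placeholders for the odometry sigmas from the lab configuration, not the actual values I used:

```python
import math

def normalize_angle(a):
    """Wrap an angle in degrees into [-180, 180)."""
    return (a + 180) % 360 - 180

def compute_control(cur_pose, prev_pose):
    """Odometry control (rot1, trans, rot2) between two (x, y, theta) poses."""
    dx, dy = cur_pose[0] - prev_pose[0], cur_pose[1] - prev_pose[1]
    rot1 = normalize_angle(math.degrees(math.atan2(dy, dx)) - prev_pose[2])
    return rot1, math.hypot(dx, dy), normalize_angle(cur_pose[2] - prev_pose[2] - rot1)

def gaussian(x, mu, sigma):
    """Gaussian PDF value; stands in for loc.gaussian() in the sim framework."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Placeholder noise parameters (odometry rotation/translation sigmas)
ODOM_ROT_SIGMA = 15.0    # degrees
ODOM_TRANS_SIGMA = 0.1   # grid units

def odom_motion_model(cur_pose, prev_pose, u):
    """p(x'|x,u): product of independent Gaussians, one per control parameter."""
    rot1, trans, rot2 = compute_control(cur_pose, prev_pose)
    return (gaussian(rot1, u[0], ODOM_ROT_SIGMA)
            * gaussian(trans, u[1], ODOM_TRANS_SIGMA)
            * gaussian(rot2, u[2], ODOM_ROT_SIGMA))
```

Multiplying the three probabilities treats the rotation and translation noise as independent, which is the standard assumption for this odometry model.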
The third function to implement was prediction_step(), which performs the predict step of the Bayes filter. It takes in two parameters, the current and previous odometry poses. It first calculates the control u using compute_control(), then initializes a new belief loc.bel_bar to zero. It loops over every prior cell (x, y, θ) with non-negligible belief, defined by a probability threshold (as suggested in the lab handout, I chose 0.0001), and every possible new cell (x', y', θ'). It then converts both grid indices into continuous poses via the mapper, and uses the odometry motion model to compute the likelihood of moving from (x, y, θ) to (x', y', θ') given u. Each of these transition probabilities is weighted by the old belief at (x, y, θ) and accumulated into bel_bar[x', y', θ']. Finally, I normalized bel_bar so that the total probability sums to one.
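The structure of this step can be sketched on a flattened grid. This is a simplified standalone version, not the lab framework code: the grid is flattened to one index, poses[i] stands in for the mapper's index-to-pose conversion, and the motion model is passed in as a callable.

```python
import numpy as np

def prediction_step(bel, poses, u, motion_model, threshold=0.0001):
    """Predict step over a flattened grid.

    bel:          prior belief, one entry per grid cell
    poses:        poses[i] is the continuous pose of cell i (via the mapper)
    u:            odometry control from compute_control()
    motion_model: callable p(x'|x,u), e.g. odom_motion_model
    """
    bel_bar = np.zeros_like(bel)
    for i, b in enumerate(bel):
        if b < threshold:
            continue  # skip prior cells with negligible belief
        for j, pose_new in enumerate(poses):
            # Weight each transition by the old belief and accumulate
            bel_bar[j] += motion_model(pose_new, poses[i], u) * b
    # Normalize so the predicted belief sums to one
    return bel_bar / bel_bar.sum()
```

Without the threshold check the double loop visits every pair of cells; with it, only cells that actually carry belief contribute to the inner loop.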
This step of the Bayes filter can be highly computationally expensive since it loops over every pair of cells. Therefore, to reduce computation, we only loop over prior cells whose belief exceeds the threshold.
The fourth function to implement was sensor_model(). This function takes in one parameter, the true observations of the robot in the map for a particular pose. It then outputs p(z|x), the likelihood of the measured observations given the current state. I used loc.gaussian() to calculate each individual probability, using the true observation data as the mean.
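A sketch of this per-ray likelihood, with two caveats: in the actual framework the measured ranges come from the localization module rather than being passed in explicitly, and the sigma below is a placeholder for the sensor noise parameter from the lab configuration.

```python
import math

SENSOR_SIGMA = 0.11  # placeholder for the sim's sensor noise std dev

def gaussian(x, mu, sigma):
    """Gaussian PDF value; stands in for loc.gaussian() in the sim framework."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def sensor_model(obs_true, obs_measured):
    """Per-ray likelihoods p(z|x): each measured range is scored against
    the expected (true) range for the pose being evaluated."""
    return [gaussian(z, mu, SENSOR_SIGMA) for z, mu in zip(obs_measured, obs_true)]
```

Measurements close to the expected ranges score high, and the likelihood falls off as the mismatch grows.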
The last function is update_step(), the update step of the Bayes filter. This function loops over the grid of current states and uses sensor_model() to retrieve the probability array for each cell, which is then used to update loc.bel. Finally, I normalized bel so that the total probability sums to one.
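The core of the update can be sketched as follows, again on a flattened grid: the joint likelihood of a cell's measurements is the product of its per-ray probabilities, and the predicted belief bel_bar is reweighted by it before normalizing.

```python
import numpy as np

def update_step(bel_bar, per_cell_likelihoods):
    """Update step: reweight the predicted belief by the measurement likelihood.

    bel_bar:              predicted belief from prediction_step, one entry per cell
    per_cell_likelihoods: per_cell_likelihoods[i] is the list of per-ray
                          probabilities from sensor_model() for cell i
    """
    # Joint likelihood per cell: product over the individual rays
    joint = np.array([np.prod(p) for p in per_cell_likelihoods])
    bel = bel_bar * joint
    # Normalize so the posterior sums to one (the eta factor)
    return bel / bel.sum()
```

Cells whose expected ranges match the measurements keep most of the probability mass, while inconsistent cells are suppressed.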
Lastly, I ran two iterations of the simulation using the Bayes filter. The blue line represents the Bayes filter belief, the green line the ground truth, and the red line the odometry measurements. Comparing the Bayes filter output to the raw odometry shows that the Bayes filter performs much better.
Overall, this lab provided valuable insight into robotic localization techniques and the Bayes filter.
I referenced Stephan Wagner and Mikayla's work for Lab 10.