Project 2

In this project, you will perform Monte Carlo Localization (MCL). It has two parts. First, you will perform MCL in one dimension, using a simulated robot. Then you will use the actual robots to localize in the room.

1D Localization, Due 2359, Friday, March 22

Download the code tarball. There is probably more structure here than we need, but it will come in very handy for 2D Localization, so we might as well get used to it. Nearly all of it requires only your understanding, not any coding from you; the amount of code you actually have to write is small.

This starter code was written by me and my friend Mac Mason for a Robotics class at Duke. You can see the ugly PyDoc documentation here:

In addition, there is a map file called threemap.map, which encodes a map with three landmarks.

Here is an example run over three timesteps with ten particles:

taylor@albert:~/zee/si475/localization/mcl/1d_localization$ python localizer.py 
dx? 4
Robot: 10.9285407607
Observations:
  0 -8.13234271922
  1 25.2313888427
  2 70.4740901866
P: 30.2263999417 1.04101338089e-46
P: 43.4118287854 1.04101338089e-46
P: 37.1547288267 1.04101338089e-46
P: 38.3049744933 1.04101338089e-46
P: 91.4404319529 1.04101338089e-46
P: 28.4827367061 1.04101338089e-46
P: 11.0135577861 1.0
P: 104.137268126 1.04101338089e-46
P: 41.4041394018 1.04101338089e-46
P: 17.3755608798 2.57591012888e-34
Heaviest-weighted particle: P: 11.0135577861 1.0

dx? 5
Robot: 16.3389344932
Observations:
  0 -13.0709611949
  1 17.9521164318
  2 64.9887382362
P: 16.2102659683 0.105545602015
P: 16.6451960755 0.063919920273
P: 15.8257754273 0.10250429916
P: 15.7710890058 0.0984669164371
P: 15.707516332 0.0929191461743
P: 16.1238933594 0.108983749649
P: 16.2731712898 0.101667871105
P: 16.0693670021 0.109937412395
P: 15.9768452962 0.109320836057
P: 15.9005799248 0.106734246735
Heaviest-weighted particle: P: 15.9005799248 0.106734246735

dx? 5
Robot: 20.8409691244
Observations:
  0 -18.2507354634
  1 15.2002855183
  2 59.3915610555
P: 21.4458558954 0.0539710825931
P: 21.1559233219 0.103440160585
P: 20.9548888865 0.140052647626
P: 20.6274976612 0.176973843754
P: 21.0221031601 0.12827755202
P: 20.6497104813 0.175966732039
P: 21.7591156752 0.020128335144
P: 21.2891847675 0.0791465549909
P: 21.1252918033 0.10917964297
P: 21.8770841196 0.0128634482784
Heaviest-weighted particle: P: 20.6274976612 0.176973843754
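
For reference, each timestep of MCL does three things: a motion (prediction) update, an observation (weighting) update, and resampling. Below is a rough sketch of one such step, assuming a Gaussian sensor model and signed landmark offsets like those in the transcript above; all of the names and parameters here are illustrative, not the starter code's API.

import math
import random

def mcl_step(particles, dx, observations, landmarks,
             motion_sigma=0.5, sensor_sigma=1.0):
    """One hypothetical 1D MCL update: move, weight, resample.

    particles   : list of floats (positions along the line)
    dx          : commanded motion for this timestep
    observations: list of measured offsets, one per landmark
    landmarks   : list of landmark positions from the map
    """
    # Motion update: move each particle by dx plus Gaussian noise.
    moved = [p + dx + random.gauss(0, motion_sigma) for p in particles]

    def gaussian(x, mu, sigma):
        return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

    # Weight update: how likely are the observations from each particle?
    weights = []
    for p in moved:
        w = 1.0
        for obs, lm in zip(observations, landmarks):
            expected = lm - p  # signed offset, consistent with the negative observations above
            w *= gaussian(obs, expected, sensor_sigma)
        weights.append(w)

    # Resample: draw a new particle set in proportion to the weights.
    total = sum(weights)
    if total == 0:
        return moved  # degenerate case: keep the moved particles
    normalized = [w / total for w in weights]
    return random.choices(moved, weights=normalized, k=len(moved))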

2D Localization, Due 0755, Monday, April 1

There are two maps in the room, the big one on the door side, and a little one on the other side. Each has several April Tags (so called because they are produced by the APRIL Robotics Laboratory at the University of Michigan) that will serve as landmarks in our world. You can assume there are no obstacles inside the map, and that your robot will be inside the circle formed by the tags.

Your task is to solve the "kidnapped robot problem," in which your robot is set down at random inside the map and uses a particle filter to localize itself. You will find that much of this is not significantly different from the 1D case.

Trajectories One thing to consider is that debugging is really, really slow if you actually have to wait for the robot to make each motion on each run. The usual way to handle this is to perform, and write down, a trajectory, in which every action the robot takes is noted, followed by each of its observations. Then, instead of actually running the robot, you can just read what happened from the trajectory. This is much faster and more convenient. For example, a trajectory containing "I drove ahead .5 meters, then observed landmarks 6 and 10 at some distances, then turned pi/4 radians, then observed landmark 7" could be encoded like this:

M: .5 0
O: 6 1.325
O: 10 0.746
M: 0 .785
O: 7 3.235

A good first step is to write a logging program that allows you to record these trajectories.
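
Once you have trajectories on disk, a small reader can replay them. Here is a minimal sketch for the format above; the function name and the returned structure are just one possible choice, not part of the starter code.

def read_trajectory(filename):
    """Parse a trajectory file into a list of (kind, data) steps.

    'M' lines become ('M', (forward, turn)); 'O' lines become ('O', (tag_id, distance)).
    """
    steps = []
    with open(filename) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            kind, rest = line.split(':', 1)
            fields = rest.split()
            if kind == 'M':
                steps.append(('M', (float(fields[0]), float(fields[1]))))
            elif kind == 'O':
                steps.append(('O', (int(fields[0]), float(fields[1]))))
    return steps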

Models You'll need a sensor model for our APRIL Tag detector, and motion models for BOTH linear motion (driving in a straight line) and rotational motion (turning). Feel free to use the \(x=b\cdot o+\mathcal{N}(0,\sigma)\) model we've been using, though you can improve it if you like. Your parameters should be calculated based on observed error.
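
To get those parameters, run the robot (or the detector) a number of times, record the commanded or reported value alongside the hand-measured truth, and fit \(b\) and \(\sigma\) to those pairs. A rough sketch follows, assuming \(x\) is the measured truth, \(o\) is the commanded or reported value, and that you collect the calibration pairs yourself; this is one simple least-squares fit through the origin, not the only reasonable choice.

import math

def fit_linear_model(observed, actual):
    """Fit b and sigma for the x = b*o + N(0, sigma) model.

    observed: list of commanded/reported values (o)
    actual  : list of ground-truth values (x), measured by hand
    Returns (b, sigma).
    """
    b = sum(x * o for x, o in zip(actual, observed)) / sum(o * o for o in observed)
    residuals = [x - b * o for x, o in zip(actual, observed)]
    sigma = math.sqrt(sum(r * r for r in residuals) / len(residuals))
    return b, sigma

# Hypothetical usage with calibration pairs you collected yourself:
# commanded = [...]   # what you told the robot to do
# measured  = [...]   # what it actually did, measured by hand
# b, sigma = fit_linear_model(commanded, measured)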

The result After you've written your code, running mcl.py should produce a sequence of pictures. Below is a GIF constructed from the PNG files created by my solution, using pretty bad motion and sensor models on the big map. The colored dots are the landmarks, the black circles are particles, and the white lines indicate the particles' orientations. Functions to draw all of this are given to you.

The code tarball, which includes map files for both the big map and the small map, can be found here. Start by reading through all documentation.

Plan of Attack I'd start by building your motion and sensor models, then getting your trajectory logging working. Next, read through ALL of the documentation in the code to see what you have and what you don't have. You can largely ignore drawstuff.py. Then look at your 1D Localization code and port it over to 2D. Don't just blindly copy-paste; look at each line and think about what, if anything, has to change.
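
As one example of what changes in the port: a 2D particle carries a pose (x, y, theta), so the motion update has to apply noisy forward motion and rotation rather than a scalar dx. Here is a rough sketch, assuming a simple pose tuple and Gaussian noise; the representation, parameter names, and the drive-then-turn ordering are modeling assumptions, not the starter code's.

import math
import random

def move_particle(x, y, theta, forward, turn,
                  forward_sigma=0.05, turn_sigma=0.05):
    """Apply one noisy 2D motion step to a particle pose.

    forward: commanded straight-line distance (meters)
    turn   : commanded rotation (radians)
    """
    # Add noise to the commanded motion, mirroring the 1D motion model.
    noisy_forward = forward + random.gauss(0, forward_sigma)
    noisy_turn = turn + random.gauss(0, turn_sigma)

    # Drive along the current heading, then rotate in place
    # (matching the "M: forward turn" lines of the trajectory format).
    x += noisy_forward * math.cos(theta)
    y += noisy_forward * math.sin(theta)
    theta = (theta + noisy_turn) % (2 * math.pi)
    return x, y, theta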