Environment measurement models describe the formation process by which sensor measurements are generated in the physical world.
Probabilistic robotics explicitly models the noise in sensor measurements; such models account for the inherent uncertainty in the robot's sensors. Formally, the measurement model is defined as a conditional probability distribution $p(z_t|x_t,m)$, where $z_t$ is the measurement, $x_t$ the robot pose, and $m$ the map. By modeling the measurement process as a conditional probability density rather than a deterministic function, the model can accommodate this uncertainty directly.
Many sensors generate more than one numerical measurement value when queried. We denote the number of such measurement values within a measurement $z_t$ by $K$, so that $z_t=\{z_t^1,\dots,z_t^K\}$. We idealize the model by assuming that the individual measurement beams are independent of each other given the pose and the map, so that the probability of the full scan factorizes:
$$
p(z_t|x_t,m)=\prod_{k=1}^{K}p(z_t^k|x_t,m)
$$
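Under this independence assumption the scan likelihood is just the product of the per-beam likelihoods, usually accumulated in log space for numerical stability. A minimal sketch (the per-beam probabilities are assumed to come from whatever beam model is in use):

```python
import numpy as np

def scan_log_likelihood(beam_probs):
    """Combine per-beam probabilities p(z_t^k | x_t, m) into the scan
    likelihood via the independence assumption, in log space:
    log p(z_t | x_t, m) = sum_k log p(z_t^k | x_t, m)."""
    return float(np.sum(np.log(beam_probs)))
```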
A map of the environment is a list of objects in the environment and their locations. Formally, a map is a list of objects together with their properties: $m=\{m_1,m_2,\dots,m_N\}$.
Maps are usually indexed in one of two ways:

- **feature-based**: In feature-based maps, $n$ is a feature index. The value of $m_n$ contains the Cartesian location of the feature along with its other properties.
- **location-based**: In location-based maps, the index $n$ corresponds to a specific location. In planar maps, it is common to denote a map element by $m_{x,y}$ instead of $m_n$, to make explicit that $m_{x,y}$ is the property of a specific world coordinate $(x,y)$.
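The two conventions can be made concrete with a toy example; the field names and values below are purely illustrative:

```python
import numpy as np

# Feature-based map: index n -> Cartesian location plus other feature properties.
feature_map = {
    0: {"location": (2.0, 3.5), "kind": "door_post"},
    1: {"location": (7.1, 0.4), "kind": "corner"},
}

# Location-based map: the index corresponds to a location. Here a planar
# occupancy grid, where m_{x,y} is the occupancy of world cell (x, y).
occupancy_grid = np.zeros((10, 10))
occupancy_grid[2, 3] = 1.0  # cell (2, 3) is occupied
```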
Range finders measure the range to nearby objects. Range may be measured along a beam, which is a good model of the workings of laser range finders.
Our model incorporates four types of measurement errors, all of which are essential to making this model work: small local measurement noise, errors due to unexpected objects, errors due to failures to detect objects, and random unexplained noise. **The desired model $p(z_t^k|x_t,m)$ is a mixture of four densities, one per error type.**
- **Correct range with local measurement noise.** In the real world, measurement noise always arises because of the limited resolution of the range sensor, atmospheric effects on the measurement signal, and so on. We denote this noise model by $p_{\mathrm{hit}}$; it is usually modeled by a narrow Gaussian. Let $z_t^{k*}$ denote the "true" range of the object measured by $z_t^k$. We set the mean of the Gaussian to $z_t^{k*}$ and denote its standard deviation by $\sigma_{\mathrm{hit}}$. In practice, the values measured by the range sensor are limited to the interval $[0;z_{\max}]$, where $z_{\max}$ denotes the maximum sensor range. Thus the measurement probability is given by
$$
p_{\mathrm{hit}}(z_t^k|x_t,m)=\begin{cases} \eta\,\mathcal{N}(z_t^k;z_t^{k*},\sigma^2_{\mathrm{hit}}) & 0\leqslant z_t^k\leqslant z_{\max}\\ 0 & \mathrm{otherwise} \end{cases}
$$
where $z_t^{k*}$ is calculated from $x_t$ and $m$ via ray casting. The normalizer $\eta$ evaluates to
$$
\eta=\left(\int_{0}^{z_{\max}}\mathcal{N}(z_t^k;z_t^{k*},\sigma^2_{\mathrm{hit}})\dd{z_t^k}\right)^{-1}
$$
Notice that the standard deviation $\sigma_{\mathrm{hit}}$ is an intrinsic parameter of the measurement model that must be estimated.
- **Unexpected objects.** Environments of mobile robots are dynamic, whereas maps $m$ are static. Moving objects such as people will cause unexpectedly short ranges. A simple approach to dealing with them is to treat them as sensor noise. Unmodeled objects have the property that they cause ranges to be shorter than $z_t^{k*}$, not longer: an object that would cause a range longer than $z_t^{k*}$ would be occluded by the mapped obstacle. Mathematically, the probability of range measurements in such situations is described by an exponential distribution, whose parameter $\lambda_{\mathrm{short}}$ is an intrinsic parameter of the measurement model:
$$
p_{\mathrm{short}}(z_t^k|x_t,m)=\begin{cases} \eta\,\lambda_{\mathrm{short}}e^{-\lambda_{\mathrm{short}}z_t^k} & 0\leqslant z_t^k\leqslant z_t^{k*}\\ 0 & \mathrm{otherwise} \end{cases}
$$
Here
$$
\eta=\left(\int_{0}^{z_t^{k*}}\lambda_{\mathrm{short}}e^{-\lambda_{\mathrm{short}}z_t^k}\dd{z_t^k}\right)^{-1}=\frac{1}{1-e^{-\lambda_{\mathrm{short}}z_t^{k*}}}
$$
- **Failures.** Sometimes obstacles are missed altogether. For example, laser range finders can fail when sensing black, light-absorbing objects, or, for some laser systems, when measuring objects in bright sunlight. A typical result of a sensor failure is a max-range measurement: the sensor returns its maximum allowable value $z_{\max}$. We model these cases with a point-mass distribution centered at $z_{\max}$:
$$
p_{\max}(z_t^k|x_t,m)=I(z_t^k=z_{\max})=\begin{cases} 1 & \mathrm{if}\ z_t^k=z_{\max}\\ 0 & \mathrm{otherwise} \end{cases}
$$
Here $I$ denotes the indicator function, which takes the value 1 if its argument is true and 0 otherwise. Because it is a discrete distribution, $p_{\max}$ does not possess a probability density function; in practice we simply draw $p_{\max}$ as a very narrow uniform distribution centered at $z_{\max}$.
- **Random measurements.** Range finders occasionally produce entirely unexplainable measurements. To keep things simple, such measurements are modeled using a uniform distribution spread over the entire sensor measurement range $[0;z_{\max}]$:
$$
p_{\mathrm{rand}}(z_t^k|x_t,m)=\begin{cases} \frac{1}{z_{\max}} & 0\leqslant z_t^k\leqslant z_{\max}\\ 0 & \mathrm{otherwise} \end{cases}
$$
The beam range finder model is the weighted mixture of the above four densities, with four mixing weights $z_{\mathrm{hit}}$, $z_{\mathrm{short}}$, $z_{\max}$, $z_{\mathrm{rand}}$ that sum to one:
$$
p(z_t^k|x_t,m)=z_{\mathrm{hit}}\cdot p_{\mathrm{hit}}(z_t^k|x_t,m)+z_{\mathrm{short}}\cdot p_{\mathrm{short}}(z_t^k|x_t,m)+z_{\max}\cdot p_{\max}(z_t^k|x_t,m)+z_{\mathrm{rand}}\cdot p_{\mathrm{rand}}(z_t^k|x_t,m)
$$
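The four-component mixture described above can be sketched directly in code. This is an illustrative implementation, not a reference one: `z_star` is assumed to come from ray casting, and the Gaussian normalizer is computed with the standard normal CDF rather than numerical integration.

```python
import numpy as np
from math import erf

def beam_model_density(z, z_star, z_max, theta):
    """Mixture density p(z_t^k | x_t, m) for a single beam.

    z      : measured range
    z_star : "true" range z_t^{k*}, assumed obtained by ray casting
    theta  : intrinsic parameters {z_hit, z_short, z_max, z_rand,
             sigma_hit, lambda_short}; the four weights must sum to 1
    """
    s, lam = theta["sigma_hit"], theta["lambda_short"]
    Phi = lambda u: 0.5 * (1 + erf(u / np.sqrt(2)))  # standard normal CDF

    # p_hit: Gaussian around the true range, truncated to [0, z_max]
    if 0 <= z <= z_max:
        gauss = np.exp(-0.5 * ((z - z_star) / s) ** 2) / (np.sqrt(2 * np.pi) * s)
        eta = 1.0 / (Phi((z_max - z_star) / s) - Phi((0 - z_star) / s))
        p_hit = eta * gauss
    else:
        p_hit = 0.0

    # p_short: truncated exponential on [0, z_star]
    if 0 <= z <= z_star:
        p_short = lam * np.exp(-lam * z) / (1.0 - np.exp(-lam * z_star))
    else:
        p_short = 0.0

    # p_max: point mass at z_max (in practice a very narrow uniform)
    p_max = 1.0 if z == z_max else 0.0

    # p_rand: uniform over [0, z_max]
    p_rand = 1.0 / z_max if 0 <= z <= z_max else 0.0

    return (theta["z_hit"] * p_hit + theta["z_short"] * p_short
            + theta["z_max"] * p_max + theta["z_rand"] * p_rand)
```

For a full scan, this function would be evaluated once per beam and the results multiplied (or their logs summed) under the independence assumption.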
The beam range finder model also includes the intrinsic parameters $\sigma_{\mathrm{hit}}$ and $\lambda_{\mathrm{short}}$; together with the four mixing weights, we collect them in the parameter vector $\Theta$.
A more principled way to set these parameters is to learn them from actual data. This is achieved by maximizing the likelihood of a reference data set $Z=\{z_i\}$ with associated positions $X=\{x_i\}$ and map $m$.
According to the above, there are four types of errors. Conceptually, each measurement $z_i$ is generated by one of the four error types; we denote this unknown cause by $c_i$, which decomposes the data set into four disjoint sets.
We use a maximum likelihood estimator to estimate the intrinsic parameters $\Theta$. The logarithm is a strictly monotonic function, so we can equivalently maximize the log-likelihood:
$$
\Theta = \arg \max_{\Theta}E\left[\log p(Z|X,m,\Theta)\right]
$$
We denote the probability that measurement $z_i$ was caused by local measurement noise by $p(c_i=\mathrm{hit})$, and the probability that it was caused by an unexpected object by $p(c_i=\mathrm{short})$. Over the data set, the two densities read
$$
\begin{split}
p_{\mathrm{hit}}(z_i|x_i,m) &= \eta\,\frac{1}{\sqrt{2\pi}\,\sigma_{\mathrm{hit}}}\exp\left(-\frac{(z_i-z_i^{*})^2}{2\sigma^2_{\mathrm{hit}}}\right) \qquad 0\leqslant z_i\leqslant z_{\max}\\
p_{\mathrm{short}}(z_i|x_i,m) &= \eta\,\lambda_{\mathrm{short}}e^{-\lambda_{\mathrm{short}}z_i} \qquad 0\leqslant z_i\leqslant z_i^{*}
\end{split}
$$
Their logarithms (natural, so $\ln=\log$) are
$$
\begin{split}
\log p_{\mathrm{hit}}(z_i|x_i,m) &= \log\eta-\frac{1}{2}\log2\pi-\log\sigma_{\mathrm{hit}}-\frac{1}{2}\frac{(z_i-z_i^{*})^2}{\sigma^2_{\mathrm{hit}}} \qquad 0\leqslant z_i\leqslant z_{\max}\\
\log p_{\mathrm{short}}(z_i|x_i,m) &= \log\eta+\log\lambda_{\mathrm{short}}-\lambda_{\mathrm{short}}z_i \qquad 0\leqslant z_i\leqslant z_i^{*}
\end{split}
$$
Consequently,
$$
\begin{split}
\frac{\partial E\left[\log p(Z|X,m,\Theta)\right]}{\partial \sigma_{\mathrm{hit}}} &= \sum_{z_i\in Z}p(c_i=\mathrm{hit})\frac{\partial}{\partial\sigma_{\mathrm{hit}}}\log p_{\mathrm{hit}}(z_i|x_i,m)\\
&= \sum_{z_i\in Z}p(c_i=\mathrm{hit})\left[-\frac{1}{\sigma_{\mathrm{hit}}}+\frac{(z_i-z_i^{*})^2}{\sigma_{\mathrm{hit}}^3}\right]\\
\frac{\partial E\left[\log p(Z|X,m,\Theta)\right]}{\partial \lambda_{\mathrm{short}}} &= \sum_{z_i\in Z}p(c_i=\mathrm{short})\frac{\partial}{\partial\lambda_{\mathrm{short}}}\log p_{\mathrm{short}}(z_i|x_i,m)\\
&= \sum_{z_i\in Z}p(c_i=\mathrm{short})\left[\frac{1}{\lambda_{\mathrm{short}}}-z_i\right]
\end{split}
$$
Setting both derivatives to zero yields
$$
\begin{split}
\sigma_{\mathrm{hit}} &= \sqrt{\frac{1}{\sum_{z_i\in Z}p(c_i=\mathrm{hit})}\sum_{z_i\in Z}p(c_i=\mathrm{hit})\,(z_i-z_i^{*})^2}\\
\lambda_{\mathrm{short}} &= \frac{\sum_{z_i\in Z}p(c_i=\mathrm{short})}{\sum_{z_i\in Z}p(c_i=\mathrm{short})\,z_i}
\end{split}
$$
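These closed-form updates translate directly into code. In the sketch below (names are ours), the responsibilities $p(c_i=\mathrm{hit})$ and $p(c_i=\mathrm{short})$ are assumed to have been computed in a preceding E-step:

```python
import numpy as np

def m_step(z, z_star, resp_hit, resp_short):
    """Closed-form maximum-likelihood updates for sigma_hit and lambda_short.

    z, z_star            : arrays of measured and ray-cast "true" ranges
    resp_hit, resp_short : per-measurement posteriors p(c_i = hit), p(c_i = short)
    """
    z, z_star = np.asarray(z, float), np.asarray(z_star, float)
    resp_hit, resp_short = np.asarray(resp_hit, float), np.asarray(resp_short, float)

    # responsibility-weighted standard deviation of the hit residuals
    sigma_hit = np.sqrt(np.sum(resp_hit * (z - z_star) ** 2) / np.sum(resp_hit))
    # inverse of the responsibility-weighted mean of the short readings
    lambda_short = np.sum(resp_short) / np.sum(resp_short * z)
    return sigma_hit, lambda_short
```

Alternating this step with re-evaluating the responsibilities gives an EM-style estimation loop for the intrinsic parameters.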
The beam-based model exhibits a lack of smoothness. The likelihood field model can overcome this disadvantage.
We first let $x_t=(x,y,\theta)^T$ denote the robot pose at time $t$ in the global coordinate frame. Denote the relative location of the sensor in the robot's coordinate system by $(x_{k,sens},y_{k,sens})^T$ and the angular orientation of the sensor beam relative to the robot's heading direction by $\theta_{k,sens}$. Denote the location of the measurement endpoint in global coordinates by $(x_{z_t^k},y_{z_t^k})^T$. By the trigonometric transformation,
$$
\begin{bmatrix}
x_{z_t^k} \\ y_{z_t^k}
\end{bmatrix}
=
\begin{bmatrix}
x \\ y
\end{bmatrix}
+
\begin{bmatrix}
\cos\theta & -\sin\theta\\
\sin\theta & \cos\theta
\end{bmatrix}
\begin{bmatrix}
x_{k,sens} \\ y_{k,sens}
\end{bmatrix}
+z_t^k
\begin{bmatrix}
\cos(\theta+\theta_{k,sens})\\
\sin(\theta+\theta_{k,sens})
\end{bmatrix}
$$
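The transformation can be implemented in a few lines; the function and argument names below are ours:

```python
import numpy as np

def beam_endpoint(pose, sensor_offset, theta_sens, z):
    """Project a range reading z into global coordinates.

    pose          : robot pose (x, y, theta) in the global frame
    sensor_offset : (x_{k,sens}, y_{k,sens}), sensor mount point in the robot frame
    theta_sens    : beam orientation theta_{k,sens} relative to the heading
    """
    x, y, th = pose
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])          # rotation by theta
    return (np.array([x, y])
            + R @ np.array(sensor_offset, float)       # rotated sensor offset
            + z * np.array([np.cos(th + theta_sens),   # beam direction times range
                            np.sin(th + theta_sens)]))
```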
These coordinates are only meaningful when the sensor detects an obstacle. If the range sensor takes on its maximum value, i.e. $z_t^k=z_{\max}$, the endpoint carries no information about nearby objects and the measurement is simply discarded.
Noise arising from the measurement process is modeled using a Gaussian. Let $\mathrm{dist}$ denote the Euclidean distance between the measurement coordinates $(x_{z_t^k},y_{z_t^k})^T$ and the nearest object in the map $m$.
We model the probability of sensor measurement by zero-centered Gaussian
$$
p_{\mathrm{hit}}(z_t^k|x_t,m)=\varepsilon_{\sigma_{\mathrm{hit}}}(\mathrm{dist})
$$
????? Why is the probability of the sensor measurement given by a zero-centered Gaussian, rather than a Gaussian centered at the measured range? Because the Gaussian is evaluated at $\mathrm{dist}$, which is itself the deviation between the beam endpoint and the nearest map object: a perfect measurement gives $\mathrm{dist}=0$, so the error being modeled is zero-mean by construction.
We assume that max-range readings have a distinct, large likelihood; as before, this is modeled by a point-mass distribution $p_{\max}$. A uniform distribution $p_{\mathrm{rand}}$ accounts for random measurements, again as before. Just as for the beam-based sensor model, the desired probability $p(z_t^k|x_t,m)$ mixes the three distributions:
$$
p(z_t^k|x_t,m)=z_{\mathrm{hit}}\cdot p_{\mathrm{hit}}+z_{\mathrm{rand}}\cdot p_{\mathrm{rand}}+z_{\max}\cdot p_{\max}
$$
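A sketch of the resulting per-beam likelihood; `dist` is assumed to be looked up from a precomputed nearest-obstacle distance field, and the parameter names are ours:

```python
import numpy as np

def likelihood_field_prob(z, dist, z_max, theta):
    """Likelihood-field measurement probability for one beam.

    z     : measured range
    dist  : Euclidean distance from the beam endpoint to the nearest map obstacle
    theta : {z_hit, z_rand, z_max, sigma_hit}; weights assumed to sum to 1
    """
    sigma = theta["sigma_hit"]
    # zero-centered Gaussian evaluated at the endpoint-to-obstacle distance
    p_hit = np.exp(-0.5 * (dist / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)
    p_rand = 1.0 / z_max                      # uniform over [0, z_max]
    p_max = 1.0 if z == z_max else 0.0        # point mass at max range
    return (theta["z_hit"] * p_hit + theta["z_rand"] * p_rand
            + theta["z_max"] * p_max)
```

Because `dist` varies smoothly with the robot pose, this model avoids the lack of smoothness of the beam-based model.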
There exist a number of range sensor models in the literature that measure correlations between a measurement and the map. A common technique is known as map matching. The sensor measurement model compares the local map $m_{\mathrm{local}}$, built from the current scan, with the global map $m$: the more similar the two maps, the higher $p(m_{\mathrm{local}}|x_t,m)$.
If the robot is at location $x_t$, the local map is overlaid on the global map at that pose and the overlapping cells are compared.
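One common map-matching score is the correlation coefficient between the overlaid maps, clipped at zero. A sketch, assuming both arrays hold same-shaped occupancy values and that the mean is taken over both maps jointly:

```python
import numpy as np

def map_matching_prob(m_local, m_patch):
    """Correlation between a local map and the overlapping global-map patch,
    interpreted as p(m_local | x_t, m). Negative correlation is clipped to 0."""
    m_bar = 0.5 * (m_local.mean() + m_patch.mean())   # joint mean of both maps
    num = np.sum((m_patch - m_bar) * (m_local - m_bar))
    den = np.sqrt(np.sum((m_patch - m_bar) ** 2) * np.sum((m_local - m_bar) ** 2))
    return max(num / den, 0.0)
```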
The sensor models discussed thus far are all based on raw sensor measurements. An alternative approach is to extract features from the measurements. Most feature extractors distill a small number of features from high-dimensional sensor measurements, which greatly reduces the computational burden.
If we denote the feature extractor as a function $f$, the features extracted from a range measurement are given by $f(z_t)$.
In many robotics applications, features correspond to distinct objects in the physical world. For example, in indoor environments features may be door posts or windowsills; outdoors they may correspond to tree trunks or corners of buildings. In robotics, it is common to call those physical objects landmarks.
The most common model for processing landmarks assumes that the sensor can measure the range and the bearing of the landmark relative to the robot's local coordinate frame. The feature extractor may generate a signature that may equally be an integer (e.g. an average color) that characterizes the type of the observed landmark, or a multidimensional vector characterizing a landmark (e.g. height and color).
If we denote the range by $r$, the bearing by $\phi$, and the signature by $s$, the extracted feature vector is given by $f(z_t)=\{(r_t^1,\phi_t^1,s_t^1),(r_t^2,\phi_t^2,s_t^2),\dots\}$.
We will model noise in landmark perception by independent Gaussian noise on the range, bearing, and signature. The resulting measurement model is formulated for the case where the $i$-th feature at time $t$ corresponds to the $j$-th landmark in the map. As usual, the robot pose is given by $x_t=(x,y,\theta)^T$.
??????? Which observation does the $i$-th feature belong to? Is the $j$-th landmark a feature of the map? Why does the $i$-th feature correspond to the $j$-th landmark rather than the $i$-th? For example, suppose there are 4 landmarks in the map: [rabbit, dog, cat, bird]. Rabbit is the $1^{\mathrm{st}}$ landmark, dog the $2^{\mathrm{nd}}$, cat the $3^{\mathrm{rd}}$, bird the $4^{\mathrm{th}}$. At time $t$ we observe two features, $z_t=[z_t^1,z_t^2]$. Which landmark does each feature belong to? If $z_t^1=\mathrm{bird}$, the $1^{\mathrm{st}}$ feature corresponds to the $4^{\mathrm{th}}$ landmark.
$$
\begin{bmatrix}
r_t^i \\ \phi_t^i \\ s_t^i
\end{bmatrix}
=
\begin{bmatrix}
\sqrt{(m_{j,x}-x)^2+(m_{j,y}-y)^2}\\
\atan2(m_{j,y}-y,\,m_{j,x}-x)-\theta\\
s_j
\end{bmatrix}
+
\begin{bmatrix}
\varepsilon_{\sigma_r^2}\\
\varepsilon_{\sigma_\phi^2}\\
\varepsilon_{\sigma_s^2}
\end{bmatrix}
$$
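This model can be evaluated as a likelihood for a candidate feature-landmark correspondence. The sketch below ignores bearing wrap-around for simplicity, and all names are ours:

```python
import numpy as np

def landmark_likelihood(feature, landmark, pose, sigmas):
    """Likelihood of feature (r, phi, s) under the hypothesis that it was
    generated by landmark j, with independent Gaussian noise on each component.

    landmark : (m_jx, m_jy, s_j) location and signature of landmark j
    pose     : robot pose (x, y, theta)
    sigmas   : (sigma_r, sigma_phi, sigma_s) noise standard deviations
    """
    r, phi, s = feature
    m_jx, m_jy, s_j = landmark
    x, y, th = pose

    # predicted range and bearing from the measurement equation above
    r_hat = np.hypot(m_jx - x, m_jy - y)
    phi_hat = np.arctan2(m_jy - y, m_jx - x) - th

    def gauss(err, sigma):
        return np.exp(-0.5 * (err / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)

    # independent Gaussian noise on range, bearing, and signature
    return gauss(r - r_hat, sigmas[0]) * gauss(phi - phi_hat, sigmas[1]) \
        * gauss(s - s_j, sigmas[2])
```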
A key problem for range/bearing sensors is known as the data association problem. It arises when landmarks cannot be uniquely identified, so that some residual uncertainty remains about the identity of a landmark. This is exactly the question raised above, of which landmark the $i$-th feature corresponds to; we therefore need to specify the correspondence between features and landmarks.
If we know the correspondence