
Ospa Metric #67

Open
louis-gu opened this issue Jun 25, 2019 · 2 comments

Comments

louis-gu commented Jun 25, 2019

Hi!
I want to thank you for the work you did coding all of this, but I am missing something with your OSPA metric.
I read the original paper:
http://www.dominic.schuhmacher.name/papers/ospa.pdf
If I got it right, I get an m*n distance matrix from their algorithm. Why do you compute an n*n distance matrix with a "c=10" padding? Even in the original implementation you cited (https://github.com/cheesinglee/cuda-PHDSLAM/blob/master/python/ospa.py), they use an m*n matrix without padding.
In the end, with p=1, your code behaves as if you were adding (n-m)*c/n to your OSPA metric, even though missed detections and false alarms are already taken into account by the second term of the OSPA metric.
Could you please either enlighten me or correct this behavior if I am not mistaken?

Once again, I thank you deeply for the efforts you put in your code.

Best regards,

Louis


tlind commented Nov 29, 2019

Hi, first of all sorry for the late response! Without having re-read the OSPA paper and just by looking at the code, I think the reason for creating a square cost matrix is that by default, many solvers for the Hungarian problem only support square input matrices. It seems that pymunkres in the latest version also supports solving rectangular cost matrices. In that case, according to their documentation, one shall pad it with 0 values -- instead, we use MAX_COST.

However, I believe that does not really make a difference: We will get a solution with at most min(m,n) assignments, and since our padding is MAX_COST, those MAX_COST matrix elements should not end up in any assignment (as there should be a cheaper solution).

Remember that the subsequent loop in the code only iterates over the resulting assignments.

I believe the OSPA paper also recommends using a square matrix in practice. See p.5, "C. Computation of the OSPA metric":

we use the distance matrix D = (D_{i,j})_{1≤i,j≤n}, where D_{i,j} = d^(c)(x_i, y_j)^p if 1 ≤ i ≤ m and 1 ≤ j ≤ n, and D_{i,j} = c^p otherwise.

I think that's exactly what we are doing.
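To make the paper's square-matrix construction concrete, here is a minimal sketch (a brute-force assignment stands in for a Hungarian solver; the helper names and toy points are made up for illustration, and Euclidean base distance is assumed):

```python
import itertools
import math

def build_square_ospa_matrix(X, Y, c, p):
    """Build the n*n matrix D from the paper (m = len(X) <= n = len(Y)):
    D[i][j] = min(d(x_i, y_j), c)**p for the m real rows, and c**p for
    the (n - m) padded "dummy" rows."""
    m, n = len(X), len(Y)
    assert m <= n
    D = [[c ** p] * n for _ in range(n)]  # pre-fill everything with c**p
    for i in range(m):
        for j in range(n):
            D[i][j] = min(math.dist(X[i], Y[j]), c) ** p
    return D

def min_assignment_cost(D):
    """Brute-force optimal assignment cost over a small square matrix
    (a Hungarian/Munkres solver would be used in practice)."""
    n = len(D)
    return min(sum(D[i][perm[i]] for i in range(n))
               for perm in itertools.permutations(range(n)))
```

With this construction, Eq. (3) of the paper reduces to `((1/n) * min_assignment_cost(D)) ** (1/p)`: the c**p dummy rows are what encode the (n-m)*c^p cardinality penalty inside the assignment cost.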

louis-gu (Author) commented

Hello! Thank you for your reply.

I get your point. After re-reading the parts you highlighted in your code and in the article, I still believe there is an issue. In particular, when you say:

However, I believe that does not really make a difference: We will get a solution with at most min(m,n) assignments, and since our padding is MAX_COST, those MAX_COST matrix elements should not end up in any assignment (as there should be a cheaper solution).

If m<n, why would you get min(m,n) assignments with a MAX_COST pre-filled n*n matrix? The Munkres algorithm returns the n pairs of indices that lead to the lowest cost of assignment: at least (n-m) indices will point to MAX_COST values with a MAX_COST pre-filled n*n matrix.
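This can be checked on a toy example (a brute-force stand-in for the Munkres solver; the matrix values are made up): with an n*n matrix whose last n-m rows are pre-filled with MAX_COST, the solver returns a full assignment of all n rows, and exactly n-m of those assignments necessarily land on MAX_COST entries.

```python
import itertools

MAX_COST = 10.0  # stands in for the c = 10 padding under discussion

def optimal_assignment(D):
    """Brute-force Hungarian-style solver for a small square matrix:
    returns the column chosen for each row under the minimal total cost."""
    n = len(D)
    return min(itertools.permutations(range(n)),
               key=lambda perm: sum(D[i][perm[i]] for i in range(n)))

# m = 2 real rows of (clamped) distances, padded to n = 3:
# the third row is entirely MAX_COST.
D = [
    [1.0, 9.0, 7.0],
    [9.0, 2.0, 8.0],
    [MAX_COST, MAX_COST, MAX_COST],
]
perm = optimal_assignment(D)  # one pair (row, column) per row, n in total
padded_hits = sum(1 for i, j in enumerate(perm) if D[i][j] == MAX_COST)
```

Here `perm` contains n = 3 assignments, and `padded_hits` is n - m = 1: the padded row is always assigned somewhere, contributing MAX_COST to the total.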

To put it in another perspective: in the original paper they do pad the matrix up to n*n with MAX_COST values, but that is just to avoid making the term (n-m)*c^p explicit in the equation of the OSPA metric (Eq. (3), p.4), as they point out in III.C, p.5:

We use the distance matrix D = (D_{i,j})_{1≤i,j≤n}, where D_{i,j} = d^(c)(x_i, y_j)^p if 1 ≤ i ≤ m and 1 ≤ j ≤ n, and D_{i,j} = c^p otherwise. This corresponds to the introduction of n−m "dummy" points at distance ≥ c of any points in Y that was described earlier.

So with the MAX_COST padding, in your code line 160, total_loc already incorporates (n-m)*self.c^self.p, you don't need to add it again:

ospa_err = ( float(total_loc + self.a*wrong_labels + (n-m)*self.c**self.p) / n)**(1/float(self.p))
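Numerically, with p=1 and made-up distances (a toy sketch; `total_loc`, `n`, `m`, `c` mirror the names in the line quoted above, and `wrong_labels` is taken as 0), the double counting looks like this:

```python
c, p = 10.0, 1
n, m = 3, 2

# Optimal assignment cost over a c**p-padded square matrix: two real
# matches (cost 1.0 and 2.0) plus one dummy row contributing c**p.
total_loc = 1.0 + 2.0 + c ** p  # = 13.0; already contains (n - m) * c**p

# Adding the cardinality term again on top, as in the line quoted above:
ospa_double = ((total_loc + (n - m) * c ** p) / n) ** (1 / p)  # (13 + 10) / 3

# Using total_loc alone (the padding already encodes the penalty):
ospa_once = (total_loc / n) ** (1 / p)  # 13 / 3
```

The two values differ by (n-m)*c^p/n = 10/3 here, which is exactly the overestimate I am describing.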
