
Inception V3 gives wrong predictions #7

Open
erogol opened this issue Dec 21, 2015 · 18 comments

Comments

@erogol
Contributor

erogol commented Dec 21, 2015

I guess there is something wrong with the released network, or at least with the preprocessing code. I tried the prediction-with-pretrained example, but the results are wrong.

I also noticed that the output layer has 1008 nodes, whereas the label txt has 1001 classes.

@antinucleon
Contributor

The model is converted from the TensorFlow model. Note that the preprocessing code is different; see preprocessing.py in the zip. Also, the 1008 outputs come from Google, and outputs 1-1000 are the ILSVRC 2012 labels.
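For a single image outside the RecordIO pipeline, the normalization this model expects can be sketched as below. This is only an illustration of the mean-128 / scale-1/128 normalization implied by the iterator settings later in this thread; the `preprocess` helper is hypothetical, and preprocessing.py in the zip remains the authoritative reference.

```python
import numpy as np

def preprocess(img):
    # img: HxWx3 uint8 array, assumed already resized/cropped to 299x299
    x = img.astype(np.float32)
    x = (x - 128.0) * 0.0078125   # 0.0078125 == 1/128, maps pixels to ~[-1, 1]
    return x.transpose(2, 0, 1)   # HWC -> CHW, the layout mxnet expects

img = np.random.randint(0, 256, (299, 299, 3), dtype=np.uint8)
out = preprocess(img)
print(out.shape, out.min() >= -1.0, out.max() <= 1.0)
```

If the example script normalizes differently (e.g. subtracting per-channel ImageNet means instead), predictions will look "wrong" even with correct weights.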

@erogol
Contributor Author

erogol commented Dec 22, 2015

I used the provided preprocessing code, but I still have the same problem.

@antinucleon
Contributor

I am not sure what your problem is, but it works well on my side. I verified it on the ILSVRC 2012 validation set and the TensorFlow sample image.

@501177639

I have the same problem as erogol. Could you show the code you used to verify on the ILSVRC 2012 validation set?

@ghost

ghost commented Jan 7, 2016

Same here.

@piiswrong
Member

What does "wrong result" mean? What is your accuracy on ImageNet?

@antinucleon
Contributor

First, resize the raw images to 384, then you can evaluate like this (code I used a month ago):

import csv
import mxnet as mx
import numpy as np

# Validation iterator: 384-pixel images center-cropped to 299x299,
# normalized to roughly [-1, 1] via (x - 128) * 0.0078125
val = mx.io.ImageRecordIter(
        path_imgrec = "model/val-384.rec",
        mean_r      = 128,
        mean_g      = 128,
        mean_b      = 128,
        scale       = 0.0078125,
        rand_crop   = False,
        rand_mirror = False,
        data_shape  = (3, 299, 299),
        batch_size  = 128)

symbol, arg_params, aux_params = mx.model.load_checkpoint("model/Inception-7", 1)
model = mx.model.FeedForward(symbol=symbol, ctx=mx.gpu(),
                             arg_params=arg_params, aux_params=aux_params,
                             numpy_batch_size=1)
prob = model.predict(val)

# old_synset.txt: position -> WordNet id in the old (ILSVRC 2012) ordering
old = {}
for idx, line in enumerate(csv.reader(open("old_synset.txt"), delimiter=' ')):
    old[idx] = line[0]

# val.lst: translate each validation image's old label index to its WordNet id
ans = []
for line in csv.reader(open("val.lst"), delimiter='\t'):
    ans.append(old[int(line[1])])

# model/synset.txt: WordNet id -> index in the new (Inception V3) ordering
new = {}
for idx, line in enumerate(csv.reader(open("model/synset.txt"), delimiter=' ')):
    new[line[0]] = idx

new_ans = [new[s] for s in ans]

# Top-1 / top-5 accuracy over the 50000 validation images
top_1 = 0.
top_5 = 0.
for i in range(len(new_ans)):
    sol = new_ans[i]
    pred_top5 = prob[i, :].argsort()[::-1][:5]
    if pred_top5[0] == sol:
        top_1 += 1
    if sol in pred_top5:
        top_5 += 1
print(top_1 / 50000)
print(top_5 / 50000)

@luxiangju

I couldn't download the model. Could someone share it with me?

@ghost

ghost commented Jan 14, 2016

synset.txt might be wrong in this model, which would lead to a wrong mapping.

@erogol
Contributor Author

erogol commented Jan 14, 2016

Accuracy is not the point here, since the results are obviously flawed for a couple of easy class images that my previous net classifies successfully. I also believe that synset.txt is wrong, since the number of output nodes and the number of lines in the synset do not match.

@antinucleon
Contributor

The synset is correct. Again, Google's released model has only 1008 outputs. There is a mapping between the old synset and the new synset, for which I provided code above. If it were wrong, it couldn't produce 77% accuracy.
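The old-to-new index remapping that the evaluation code above performs can be illustrated on toy data (the WordNet ids and orderings here are invented for illustration; the real files have 1000+ entries):

```python
# Toy sketch of the old->new label remapping used in the evaluation code.
# old_synset.txt order: position = old ILSVRC 2012 label index
old_synset = ["n01440764", "n01443537", "n01484850"]

# model/synset.txt order: position = index into the Inception V3 output
new_synset = ["n01443537", "n01484850", "n01440764"]

old = {idx: wnid for idx, wnid in enumerate(old_synset)}   # old index -> wnid
new = {wnid: idx for idx, wnid in enumerate(new_synset)}   # wnid -> new index

# An old label index i becomes new[old[i]] in the model's output ordering
remap = [new[old[i]] for i in range(len(old_synset))]
print(remap)  # -> [2, 0, 1]
```

Scoring predictions against old-ordering labels without this remapping would give near-zero accuracy even with a perfectly good model.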

@ghost

ghost commented Jan 14, 2016

The question is where can we find the old_synset.txt?

@antinucleon
Contributor

You can find old_synset.txt in old Inception-BN model.


@501177639

I used the code above to test the validation set again, but I still get very low accuracy. I guess something is wrong in either Inception-7-0001.params or synset.txt. Can you test again and add your code to https://s3.amazonaws.com/dmlc/model/inception-v3.tar.gz, so we can run it directly and get the right result?

@erogol
Contributor Author

erogol commented Jan 16, 2016

I observe that the model works fine on CPU but not on GPU. All top predictions are skewed in the GPU setting.

@u1234x1234

I've also had problems with this model: it gave me wrong predictions with cuDNN v3. mxnet without cuDNN, with cuDNN v4, and the CPU version all worked fine for me.

@501177639

Is something wrong with cuDNN v3, then?

@erogol
Contributor Author

erogol commented Jan 17, 2016

Yep, I updated to cuDNN 4 and the problem is resolved. Thanks for pointing it out, @u1234x1234.

But GPU execution still gives a different top-5 ordering than CPU. At least the results make sense in both cases.
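One way to quantify that kind of discrepancy is to check whether two runs agree on the top-5 class set even when the ranking differs. A minimal sketch on synthetic probabilities (these are stand-in arrays, not actual model outputs):

```python
import numpy as np

def top5(prob):
    # indices of the 5 largest probabilities, best first
    return prob.argsort()[::-1][:5]

# Synthetic stand-ins: prob_gpu is prob_cpu with its two best classes swapped,
# mimicking a GPU run that returns the same top-5 set in a different order.
prob_cpu = np.array([0.05, 0.40, 0.30, 0.10, 0.08, 0.07])
prob_gpu = np.array([0.05, 0.30, 0.40, 0.10, 0.08, 0.07])

t_cpu, t_gpu = top5(prob_cpu), top5(prob_gpu)
print(set(t_cpu) == set(t_gpu))      # True: same 5 classes
print(np.array_equal(t_cpu, t_gpu))  # False: different ordering
```

Small numerical drift between backends can reorder nearly tied classes, which is harmless; a mismatch in the top-5 sets themselves would point at a real bug.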
