Add textures to renderer #10

Merged
1 commit merged into wg-perception:master on Sep 25, 2016
Conversation

JimmyDaSilva
Contributor

This PR contains just the changes discussed in wg-perception/linemod#10 and #8.

The following images show that the renderer now displays the textures.
However, I suspect that either training or detection doesn't really use the texture properly, because I can't see any difference between my three items (coke can, Nestea can, mug).

(training image)

(detection image)

@nlyubova: that would be my starting point if you want to try linemod with a textured model

@@ -60,7 +60,7 @@ class Model
   void
   recursiveTextureLoad(const struct aiScene *sc, const struct aiNode* nd);
   void
-  recursive_render(const struct aiScene *sc, const struct aiNode* nd) const;
+  recursive_render(const struct aiScene *sc, const struct aiNode* nd, const int j) const;
Member

cv qualifiers on integral parameters are useless in a declaration. Please remove the const here (for integral types, it's only useful in definitions, not declarations).
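(For context, a minimal illustration of this C++ rule, not code from this PR: a top-level const on a by-value parameter does not change the declared signature; it only protects the local copy inside the definition.)

// Illustration only, not from this repository.
void render_mesh(int j);        // declaration: a const here would be ignored,
                                // the signature is the same either way
void render_mesh(const int j)   // definition: const prevents the body from
{                               // modifying its local copy of j
  // j = 0;                     // error: assignment of read-only parameter 'j'
}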

@JimmyDaSilva
Contributor Author

@vrabaud done!

@vrabaud
Member

vrabaud commented Feb 19, 2016

That actually would not work. texturesAndPaths, which stacks all the textures of all the meshes of all the nodes into one array, is probably a bad design. Please convert it to a std::map<std::pair<const aiNode*, int>, GLuint>, where the int is n in the code (the index of the mesh within the node).
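(For reference, a minimal sketch of the container being proposed here; the typedef name is illustrative, only the key and value types come from the comment above.)

// Proposed structure: one GL texture handle per (node, mesh index) pair.
#include <map>
#include <utility>
#include <GL/gl.h>
#include <assimp/scene.h>

// Key: the aiNode plus n, the index of the mesh within that node.
typedef std::map<std::pair<const aiNode*, int>, GLuint> NodeMeshTextures;

// Loading:   node_mesh_textures[std::make_pair(nd, n)] = texture_id;
// Rendering: find(std::make_pair(nd, n)) gives the texture to bind for mesh n.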

@JimmyDaSilva
Contributor Author

@vrabaud, there is a typo in your last post and I am not sure exactly which unique key you would like me to use.
I ran some tests and couldn't find any unique key other than the texture path (e.g. wood.jpg). I tried the materialIndex and it worked, but it gives different integers for the same texture if it is used several times, so the same texture could end up being loaded multiple times. That's why I would go with the path.
Hence std::map< aiString pathName, GLuint hTexture > ?

Let me know. I will make the change sometime next Monday once we agree on an appropriate structure.

@vrabaud
Member

vrabaud commented Feb 20, 2016

That could work, but aiGetMaterialTexture is heavy to call. All you have when you need to get a mesh is its aiNode and its index.
I updated my previous comment so that it displays nicely on GitHub.

@JimmyDaSilva
Contributor Author

Perfect. This makes more sense now.
I will make the corresponding modifications next week.

@JimmyDaSilva
Contributor Author

@vrabaud That should be it.
I used std::map<std::pair<const aiNode*, int>, textureAndPath> instead, so that we also know the texture path of the textures we have already loaded.
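(A hedged sketch of that variant; the field names below are assumptions, not necessarily the PR's exact members.)

// Keeping the path alongside the GL handle lets the loader check whether a
// texture file has already been loaded and reuse it instead of loading it again.
#include <map>
#include <utility>
#include <GL/gl.h>
#include <assimp/scene.h>

struct textureAndPath
{
  GLuint hTexture;    // assumed field name: the OpenGL texture handle
  aiString pathName;  // assumed field name: the file the texture came from
};

typedef std::map<std::pair<const aiNode*, int>, textureAndPath> TextureMap;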

@@ -199,10 +199,11 @@ Model::recursive_render(const struct aiScene *sc, const aiNode* nd) const
   // draw all meshes assigned to this node
   for (; n < nd->mNumMeshes; ++n)
   {
+    glEnable(GL_TEXTURE_2D);
Member

nice addition.
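(To make the context of this one-line addition concrete, a hedged sketch of per-mesh texture handling in a recursive_render-style loop; it is not the PR's exact code, and it uses plain GLuint values for brevity.)

// Sketch only: bind the texture recorded for (node, mesh index) before drawing.
#include <map>
#include <utility>
#include <GL/gl.h>
#include <assimp/scene.h>

typedef std::map<std::pair<const aiNode*, int>, GLuint> NodeMeshTextures;

void draw_node_meshes(const aiNode* nd, const NodeMeshTextures& textures)
{
  for (unsigned int n = 0; n < nd->mNumMeshes; ++n)
  {
    glEnable(GL_TEXTURE_2D);
    NodeMeshTextures::const_iterator it =
        textures.find(std::make_pair(nd, static_cast<int>(n)));
    if (it != textures.end())
      glBindTexture(GL_TEXTURE_2D, it->second);  // texture loaded for this mesh
    else
      glDisable(GL_TEXTURE_2D);                  // mesh has no texture
    // ... draw the mesh's faces (glTexCoord2f/glVertex3f) as before ...
  }
}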

@vrabaud
Member

vrabaud commented Feb 22, 2016

Thanks. Just fix the minor things, squash your commits, and push -f.
I looked a bit at the Mesa issue this weekend and it's not a Mesa issue, so it's really in how I'm calling it. I'm sure it's a stupid bug somewhere ... still looking.

@JimmyDaSilva
Contributor Author

Commits squashed and force-pushed!

There is still some work to do to enable the use of textures in the linemod process.
The renderer loads the textures during learning, but the textures are not uploaded to the database, so detection reports errors if you're not in the texture folder.
Even though detection reports "texture loaded", I still get about the same scores for the Nestea and coke cans.

@vrabaud Do you have any idea which part of the code the problem could come from?
I guess detection, but it may actually be learning... Is learning supposed to upload the textures? Display them on the mesh in CouchDB?

@JimmyDaSilva
Contributor Author

@vrabaud so what about this PR? Even though there is still work to do on the linemod package, I think this is necessary for texture rendering.
The only problem left here, I would say, is the path of the textures. Could they be uploaded to the database during capture?

@linknum23
Contributor

@JimmyDaSilva I tested the suggested modifications on the two attached files, in the hope of using my beautiful model for training.
The generated images had no color, even though the model is in full color; the object is otherwise identifiable by its shape, and no errors were given. I assumed that your fixes would add color to the images. Here are the files I used: expo_eraser.zip. The command I used to test was:

rosrun object_recognition_renderer view_generator ../objects/expo_eraser/expo_eraser.dae

The response was:

Info, T0: Load ../objects/expo_eraser/expo_eraser.dae
Info, T0: Found a matching importer for this file format
Info, T0: Import root directory is '../objects/expo_eraser/'
Info, T0: Entering post processing pipeline
Info, T0: Points: 0, Lines: 0, Triangles: 1, Polygons: 0 (Meshes, X = removed)
Info, T0: CalcTangentsProcess finished. Tangents have been calculated
Warn, T0: Mesh 0: Not suitable for vcache optimization
Info, T0: Cache relevant are 0 meshes (0 faces). Average output ACMR is -nan
Info, T0: Leaving post processing pipeline

Please forgive me if I made a mistake testing this or used an unsupported file, but this change does not work how I expected it to.

@JimmyDaSilva
Contributor Author

@linknum23 I actually never tested with .dae files, only .obj files.
I can try training your files on Tuesday. In the meantime you can try converting the .dae to .obj and see if it works for you.
Also note that, for now, you need to be in the textures folder when running the command if you want the textures to be used.
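(Illustrative workaround only, assuming the texture images sit next to the model file as in the expo_eraser example above: run the command from that directory.)

cd ../objects/expo_eraser
rosrun object_recognition_renderer view_generator expo_eraser.dae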

@linknum23
Contributor

@JimmyDaSilva being in the textures folder fixed it. Now I have pretty colors :) Thanks for the help!

@linknum23
Contributor

Hey, I don't know how I didn't notice this before, but all of the textures look mirrored. Is anyone experiencing the same thing? It looks like the textured coke can is mirrored as well (I need a higher-resolution image to verify...).

@xMutzelx

xMutzelx commented Jun 15, 2016

Hello,
I am not sure if I understood you correctly. Is it possible to differentiate between a blue and a red cube, or are the textures/colors just for visualization? If LineMod (for example) can't differentiate between a red and a blue cube, I would like to write a post-processing pipeline: read the LineMod output and check those areas with color gradients (color histograms). I hope that this can reduce false-positive results.
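(A minimal sketch of the kind of color-histogram check described here, using the OpenCV 3 C++ API; the function and variable names are illustrative, and this is not part of ORK or LineMod.)

#include <opencv2/opencv.hpp>

// Compare the hue histogram of a detected region against a reference histogram
// of the expected object color; a low score suggests a false positive.
double colorScore(const cv::Mat& bgr_roi, const cv::Mat& reference_hue_hist)
{
  cv::Mat hsv;
  cv::cvtColor(bgr_roi, hsv, cv::COLOR_BGR2HSV);

  const int histSize = 30;           // number of hue bins
  float hue_range[] = {0.f, 180.f};  // OpenCV hue range for 8-bit images
  const float* ranges[] = {hue_range};
  const int channels[] = {0};        // hue channel only

  cv::Mat hist;
  cv::calcHist(&hsv, 1, channels, cv::Mat(), hist, 1, &histSize, ranges);
  cv::normalize(hist, hist, 1.0, 0.0, cv::NORM_L1);

  // Correlation close to 1 means the region's colors match the expectation.
  return cv::compareHist(hist, reference_hue_hist, cv::HISTCMP_CORREL);
}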

@JimmyDaSilva
Contributor Author

@xMutzelx Linemod as implemented in OpenCV or PCL does differentiate colors.
ORK uses the OpenCV implementation, so we should be able to use the textures, but I have not had any success with this yet.
So far, all the code I have read in ORK suggests that the texture should be used. There is probably a stupid bug somewhere.
For now I have to work on a different project, so I don't have time to work on this right now, but it's definitely something I will be working on at the end of the summer.

Let me know if you get the textures to work.

@vrabaud
Member

vrabaud commented Sep 25, 2016

So, should I merge this PR?

@JimmyDaSilva
Contributor Author

JimmyDaSilva commented Sep 25, 2016

I haven't worked on this for months... but I think you should, along with #13

@linknum23
Contributor

Yes, please do, along with #13 as suggested.

@vrabaud vrabaud merged commit e26dedf into wg-perception:master Sep 25, 2016
@vrabaud vrabaud mentioned this pull request Sep 25, 2016
@sun11

sun11 commented Oct 4, 2016

Is the texture used now? I got the same result as the two pictures posted by JimmyDaSilva; training in the data folder worked. I also found some difference between use_rgb: 1 and use_rgb: 0: if use_rgb is 1 the confidence is higher, but maybe that is just caused by the linemod matching algorithm? In other words, is the texture still not used, with only the use_rgb option causing the different processing and different confidences? Sorry for my poor knowledge of the linemod algorithm...
