
Consider hadoop2 branch to use hadoop-client 2.0.6-alpha #158

Open
erickt opened this issue Nov 14, 2013 · 5 comments

Comments

@erickt

erickt commented Nov 14, 2013

Good evening,

Thanks for the great work with the hadoop 2 branch. Unfortunately, basing the hadoop-client on version 2.2.0 breaks compatibility with Cloudera CDH 4.4.0. This is the error I receive when I try to get the 2.2.0-based faunus to connect to a CDH 4.4.0 cluster:

gremlin> hdfs.ls()
Failed on local exception: com.google.protobuf.InvalidProtocolBufferException: Message missing required fields: callId, status; Host Details : local host is: "<host>"; destination host is: "<server>":8020;

If the hadoop-client is rolled back to 2.0.6-alpha, it works fine.

@okram
Contributor

okram commented Nov 14, 2013

Ha. I dunno man. Sounds like your vendor needs to get on the boat :D. Using "alpha" APIs in their stable distributions?


@erickt
Author

erickt commented Nov 14, 2013

No argument here. CDH5 (which just had a beta release) is at least based on 2.2.0. Since this only requires a change in the pom.xml, maybe we could use a hadoop-client version classifier for those of us with CDH4.
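Maven profiles are one way to express that idea (a sketch only; the profile ids and the hadoop.version property are made up for illustration, not the project's actual build configuration):

```xml
<!-- Sketch: select the hadoop-client version via Maven profiles.
     Profile ids and the hadoop.version property are illustrative. -->
<profiles>
  <profile>
    <id>hadoop2</id>
    <activation>
      <activeByDefault>true</activeByDefault>
    </activation>
    <properties>
      <hadoop.version>2.2.0</hadoop.version>
    </properties>
  </profile>
  <profile>
    <id>cdh4</id>
    <properties>
      <hadoop.version>2.0.6-alpha</hadoop.version>
    </properties>
  </profile>
</profiles>

<dependencies>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>${hadoop.version}</version>
  </dependency>
</dependencies>
```

With something like that in place, `mvn clean install -Pcdh4` would build against 2.0.6-alpha while the default build stays on 2.2.0.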

@okram
Contributor

okram commented Nov 14, 2013

Can you provide a pull request? I'm not competent with pom.xml classifiers. Also, I could probably learn a thing or two from your pattern for supporting both the 1.x and 2.x lines of Hadoop.


@erickt
Author

erickt commented Nov 14, 2013

@okram: Just submitted a pull request. I decided against doing anything clever with maven as I'm also not that familiar with that level of trickery.

A longer-term solution could be what HBase does to stay compatible across both Hadoop 1 and 2. They have a submodule called hbase-hadoop-compat that provides an interface to Hadoop, and two other submodules, hbase-hadoop1-compat and hbase-hadoop2-compat, that implement those interfaces. They also do some hackery with a script that generates hadoop1 and hadoop2 pom files from a template in order to create two artifact jars.
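In miniature, that compat-module pattern looks something like this (all class names here are made up for illustration; the real HBase modules are considerably more involved, with each implementation compiled against its own Hadoop line):

```java
// Sketch of the compat-module pattern: callers code only against a
// version-neutral interface; the build decides which implementation
// jar ends up on the classpath. Names are hypothetical.

// Lives in a shared "compat" module:
interface HadoopCompat {
    String hadoopLine();
}

// Lives in a "hadoop1-compat" module, compiled against Hadoop 1.x:
class Hadoop1Compat implements HadoopCompat {
    public String hadoopLine() { return "1.x"; }
}

// Lives in a "hadoop2-compat" module, compiled against Hadoop 2.x:
class Hadoop2Compat implements HadoopCompat {
    public String hadoopLine() { return "2.x"; }
}

public class Main {
    // Stand-in for whatever mechanism (build profile, ServiceLoader)
    // picks the implementation that is actually present at runtime.
    static HadoopCompat load(boolean hadoop2) {
        return hadoop2 ? new Hadoop2Compat() : new Hadoop1Compat();
    }

    public static void main(String[] args) {
        System.out.println(load(true).hadoopLine()); // prints 2.x
    }
}
```

The point is that the main codebase never imports a version-specific Hadoop class directly, so one source tree can ship as two artifacts.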

@okram
Contributor

okram commented Nov 15, 2013

@dalaro : Do you think we could use this pattern to make Faunus able to build with Hadoop 1.2.1 and Hadoop 2.2.0? Similar to your approach with Titan/HBase. The only concern I see is that there are class differences between the two versions of Faunus. Thoughts? Perhaps a Skype...

