Huge chain of anonymous classes leads to class file that is too big #184

Closed
jcoveney opened this issue Sep 29, 2014 · 8 comments · Fixed by #232
Comments

@jcoveney
Contributor

This issue is seen here: twitter/scalding#1059

Digging around scala/pickling#10, it looks like setting -Xmax-classfile-name may solve this, but that's an undesirable state for things to be in (if it can be avoided), especially since bijection is supposed to be a fairly dependency-free core lib that anyone can use.
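
For reference, -Xmax-classfile-name is a scalac option, so it would have to be applied in the build that compiles the offending classes (here, bijection's own build). A minimal sbt sketch; the 240-character cap is an assumed value, not anything this thread has settled on:

    // build.sbt (sketch): cap the length of generated class file names so that
    // deeply nested anonymous classes stay below common filesystem limits.
    scalacOptions ++= Seq("-Xmax-classfile-name", "240")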

@ianoc
Collaborator

ianoc commented Sep 29, 2014

That's just a warning; did it fail over that class being missing?

@jcoveney
Contributor Author

It's an IO error that looks to be killing the Travis build, unless I missed another error.

@janileppanen

I recently ran into the long class file name problem.

I'm building a Docker container to have an easily distributable Scalding dev environment that can both compile my jar and run the job locally on Hadoop.

My project uses the sbt assembly plugin to package the dependencies (including bijection-core). On Docker, the maximum file name length is 242 characters, so when I try to run my job on Hadoop, unpacking bijection-core results in:

Exception in thread "main" java.io.FileNotFoundException: /tmp/hadoop-root/hadoop-unjar5063693169954638466/com/twitter/bijection/GeneratedTupleCollectionInjections$$anon$31$$anonfun$invert$10$$anonfun$apply$46$$anonfun$apply$47$$anonfun$apply$48$$anonfun$apply$49$$anonfun$apply$50$$anonfun$apply$51$$anonfun$apply$52$$anonfun$apply$53$$anonfun$apply$54$$anonfun$apply$55.class (File name too long)
    at java.io.FileOutputStream.open(Native Method)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:171)
    at org.apache.hadoop.util.RunJar.unJar(RunJar.java:88)
    at org.apache.hadoop.util.RunJar.unJar(RunJar.java:64)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:188)

Setting -Xmax-classfile-name to 240 would make it possible to run a packaged jar containing bijection-core on Hadoop inside Docker. I've been digging around but couldn't find a workaround. Any suggestions are welcome.
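
To illustrate where names like the one in the stack trace come from, here is a sketch (hypothetical code, not bijection's actual source): on Scala 2.10/2.11, every nested anonymous function is compiled to its own class, and each level of nesting appends another $$anonfun$apply$N segment to the enclosing class name, so deep chains of lambdas compound into very long file names. The generated names in the comments are illustrative, not exact.

    object Nested {
      // Each nested lambda becomes a separate class file, roughly:
      //   Nested$$anonfun$1.class
      //   Nested$$anonfun$1$$anonfun$apply$1.class
      //   Nested$$anonfun$1$$anonfun$apply$1$$anonfun$apply$2.class
      val sums: Option[Option[Option[Int]]] =
        Option(1).map { a =>
          Option(2).map { b =>
            Option(3).map { c =>
              a + b + c
            }
          }
        }
    }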

@johnynek
Collaborator

I didn't know about the -Xmax-classfile-name option. I'm worried that this will make binary compatibility really fragile.

I wonder if we could just restructure the code. I think the main issue is how we have deeply nested traits in order to control the implicit resolution priorities. I'm not sure that deep nesting is needed; it should only be needed when there are multiple cases that would apply, so we need a way to break the tie.

Sorry this is a pain.
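
For context, the priority-by-inheritance trick referred to above works roughly like this (a simplified sketch with hypothetical names, not bijection's actual hierarchy): when two implicits are equally specific by type, the one defined in a derived object out-ranks the one inherited from a parent trait, so a single extra level is enough to break a tie; deep nesting is only needed when there are many such ties to order.

    trait Codec[A] { def encode(a: A): String }

    trait LowPriorityCodecs {
      // Inherited implicits rank lower, so this acts as the fallback.
      implicit def fallback[A]: Codec[A] =
        new Codec[A] { def encode(a: A): String = "fallback:" + a }
    }

    object Codecs extends LowPriorityCodecs {
      // Same type shape as `fallback`, but defined one level lower in the
      // hierarchy, so it wins the tie during implicit resolution.
      implicit def preferred[A]: Codec[A] =
        new Codec[A] { def encode(a: A): String = "preferred:" + a }
    }

    object PriorityDemo extends App {
      import Codecs._
      println(implicitly[Codec[Int]].encode(1)) // prints "preferred:1", no ambiguity
    }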

@eirslett

+1. I'm hitting this problem as well with Zipkin, trying to run analysis jobs inside Docker (which has a lower file name length limit).

@janileppanen

@eirslett I found a (not very pretty) workaround. You can mount a directory from the host system to your Docker container and set that as hadoop.tmp.dir (in core-site.xml). I'm still working on some other problems with my setup, so I'm not sure if that's enough to get a fully working environment, but that at least gets me past the unpacking error.
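
A rough sketch of that workaround, with hypothetical paths (the mount point and host directory are whatever suits your setup): start the container with a host volume, e.g. docker run -v /srv/hadoop-tmp:/hadoop-tmp ..., and point hadoop.tmp.dir at the mounted path so that RunJar unpacks the assembly onto the host filesystem instead of the container's layered filesystem:

    <!-- core-site.xml (sketch): redirect Hadoop's scratch directory, where
         RunJar unpacks the jar per the stack trace above, to the mounted volume -->
    <configuration>
      <property>
        <name>hadoop.tmp.dir</name>
        <value>/hadoop-tmp</value>
      </property>
    </configuration>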

@eirslett

eirslett commented Aug 9, 2015

The problem is that it wouldn't work in every host environment (for example under orchestration where you don't control the file layout, such as Mesos or a third-party Docker hosting provider).

@kurtkopchik

I'm hitting this issue as well and it's preventing me from running Scalding jobs in a Docker container. Any update on decreasing the file name length to be under Docker's limit? I'm using Mesos as well, so mounting a host volume into the container as a workaround isn't really a feasible solution.

johnynek pushed a commit that referenced this issue Jan 21, 2016
johnynek added a commit that referenced this issue Jan 21, 2016
set the max file size to deal with #184