Performance regression introduced with plexus-io 3.4.0 #109
See #79 (comment)
…vement was reverted. (See codehaus-plexus/plexus-io#109.) This downgrade reduces the time for a clean `mvn source:jar-no-fork -f guava` on our Google workstations from ~53s to ~1s. (I also upgraded `maven-source-plugin` itself while I was in the area. We still don't have a great way to automatically update Guava _plugins_, only _deps_, as noted offhand in cl/526651811 and discussed slightly more in cl/554548816.) RELNOTES=n/a PiperOrigin-RevId: 650382872
In our case, we found a workaround: the problem appears to happen only with large/remote definitions of Unix groups, so we can avoid it by assigning our files to a group that is part of the local group database.
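For anyone who wants to verify whether they are affected, below is a minimal standalone Java sketch (not code from this issue) that times the group resolution for a sample file and then reassigns it to a locally defined group; the group name `localgroup` is a placeholder for a group from your local group database:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.GroupPrincipal;
import java.nio.file.attribute.PosixFileAttributes;
import java.nio.file.attribute.UserPrincipalLookupService;

public class GroupLookupCheck {
    public static void main(String[] args) throws IOException {
        Path file = Path.of(args[0]);

        // Time the group lookup: group() resolves the numeric gid to a name via
        // the same native getgrgid call that shows up in the stack trace above.
        long start = System.nanoTime();
        PosixFileAttributes attrs = Files.readAttributes(file, PosixFileAttributes.class);
        String group = attrs.group().getName();
        long elapsedMicros = (System.nanoTime() - start) / 1_000;
        System.out.printf("group=%s, lookup took %d us%n", group, elapsedMicros);

        // Workaround sketch: reassign the file to a locally defined group
        // ("localgroup" is a placeholder) so later lookups stay cheap.
        UserPrincipalLookupService lookup = file.getFileSystem().getUserPrincipalLookupService();
        GroupPrincipal localGroup = lookup.lookupPrincipalByGroupName("localgroup");
        Files.setAttribute(file, "posix:group", localGroup);
    }
}
```

If the lookup takes milliseconds rather than microseconds, the gid is most likely being resolved through a remote service, and the workaround above should help.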
Oh, and one cool thing I discovered along the way is that …
Tested the fix from #135 and I can confirm this worked.
Confirmed as well. Thanks very much!
Hi @gnodet, will you bump the plexus-io dependency in related Maven plugins, e.g. maven-jar-plugin? The current version of maven-jar-plugin, 3.4.2, is affected by the performance problem solved in this issue.
…ent was restored. (See codehaus-plexus/plexus-io#109.) This CL _shouldn't_ make a performance difference because cl/650419894 _should_ have moved us to a version before the performance improvement was originally reverted. However, I messed that prior CL up, pinning us to the slow 3.4.1 instead of the fast 3.3.1. So in fact this CL _does_ improve performance back to the point it was at for 3.3.1. Today, that means an improvement from ~75s to ~1s for a clean `mvn source:jar-no-fork -f guava` on our Google workstations. RELNOTES=n/a PiperOrigin-RevId: 686480922
We faced a notable performance regression in Maven builds when updating maven-jar-plugin, which is very likely related to the changes in #79.
What we know for sure is that downgrading the plexus-io dependency from 3.4.0 to 3.3.1 removes the issue.
This is a captured stack trace of the slowed-down code path:
at sun.nio.fs.UnixNativeDispatcher.getgrgid(java.base@…/Native Method)
at sun.nio.fs.UnixUserPrincipals.fromGid(java.base@…/UnixUserPrincipals.java:125)
at sun.nio.fs.UnixFileAttributes.group(java.base@…/UnixFileAttributes.java:212)
- locked <0x00000006cbf60f10> (a sun.nio.fs.UnixFileAttributes)
at sun.nio.fs.UnixFileAttributeViews$Posix.addRequestedPosixAttributes(java.base@…/UnixFileAttributeViews.java:237)
at sun.nio.fs.UnixFileAttributeViews$Unix.readAttributes(java.base@…/UnixFileAttributeViews.java:385)
at sun.nio.fs.AbstractFileSystemProvider.readAttributes(java.base@…/AbstractFileSystemProvider.java:94)
at java.nio.file.Files.readAttributes(java.base@…/Files.java:2084)
at org.codehaus.plexus.components.io.attributes.FileAttributes.<init>(FileAttributes.java:110)
at org.codehaus.plexus.components.io.attributes.FileAttributes.<init>(FileAttributes.java:88)
at org.codehaus.plexus.components.io.resources.PlexusIoFileResourceCollection.addResources(PlexusIoFileResourceCollection.java:163)
at org.codehaus.plexus.components.io.resources.PlexusIoFileResourceCollection.getResources(PlexusIoFileResourceCollection.java:262)
at org.codehaus.plexus.archiver.AbstractArchiver$1.hasNext(AbstractArchiver.java:564)
at org.codehaus.plexus.archiver.zip.AbstractZipArchiver.createArchiveMain(AbstractZipArchiver.java:224)
at org.codehaus.plexus.archiver.zip.AbstractZipArchiver.execute(AbstractZipArchiver.java:202)
at org.codehaus.plexus.archiver.AbstractArchiver.createArchive(AbstractArchiver.java:1028)
We suspect a relation to #79, although we did not look into the details.
Just looking at the stack, the cause is likely related to more frequent native filesystem calls triggered by the execution of java.nio.file.Files.readAttributes from PlexusIoFileResourceCollection.
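To illustrate the difference (a standalone sketch, not plexus-io code): reading only basic attributes avoids owner/group resolution, while reading POSIX attributes and asking for owner()/group() maps the numeric uid/gid to principals, which is where the native getpwuid/getgrgid calls in the trace above originate. How much slower the second scan is depends on whether that resolution goes through a remote user/group database:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.BasicFileAttributes;
import java.nio.file.attribute.PosixFileAttributes;
import java.util.stream.Stream;

public class AttributeCostDemo {
    public static void main(String[] args) throws IOException {
        Path dir = Path.of(args[0]);
        System.out.println("basic attributes: " + timeScan(dir, false) + " ms");
        System.out.println("posix attributes: " + timeScan(dir, true) + " ms");
    }

    private static long timeScan(Path dir, boolean resolveOwnerAndGroup) throws IOException {
        long start = System.nanoTime();
        try (Stream<Path> files = Files.walk(dir)) {
            for (Path p : (Iterable<Path>) files::iterator) {
                if (resolveOwnerAndGroup) {
                    // owner()/group() map the numeric uid/gid to principals; on Unix
                    // this triggers the native getpwuid/getgrgid lookups seen above.
                    PosixFileAttributes attrs = Files.readAttributes(p, PosixFileAttributes.class);
                    attrs.owner();
                    attrs.group();
                } else {
                    // Basic attributes (size, timestamps, ...) need no uid/gid resolution.
                    Files.readAttributes(p, BasicFileAttributes.class);
                }
            }
        }
        return (System.nanoTime() - start) / 1_000_000;
    }
}
```

Run over a large source tree on a machine whose groups come from a remote directory service, the POSIX scan should take noticeably longer than the basic one, matching the regression described here.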
PS: The JDK version used is not relevant; the issue occurs at least with the latest OpenJDK 11 and 17 (only tested on Linux, but I'd expect the impact on Windows to be even larger). For a very large project this more than doubled the total build time on some systems; the minimum observed overhead was about 20%, which is still considerable.