[get.jenkins.io/mirrors/mirrorbit - Azure] High costs due to usage of Azure File Storage #3917
Proposal: Since the original implementation of get.jenkins.io with mirrorbits, many things have changed:
We could try switching to 2 PVCs of block storage type instead of the current Azure File Storage (see the sketch below).
Besides, switching to a block storage-based persistent volume would:
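For illustration only, a minimal sketch of what one of those block-storage PVCs could look like on AKS; the claim name, namespace, size and `managed-csi` storage class are assumptions for the example, not decisions from this issue:

```yaml
# Hypothetical example: one of the two block-storage (Azure Disk) PVCs.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: get-jenkins-io-data          # assumed name
  namespace: get-jenkins-io          # assumed namespace
spec:
  accessModes:
    - ReadWriteOnce                  # block storage: single-node writer
  storageClassName: managed-csi      # AKS built-in Azure Disk CSI class
  resources:
    requests:
      storage: 1Ti
```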
Alternative Proposal:
This solution could be better than my disk proposal (less engineering effort) as the volume billing would be:
=> It would also allow us to use an NFS file share for our volumes, which could greatly improve #3525. Need to check if the conversion from the existing volume is possible and under which constraints. (edit) Looks like a migration to a new, distinct Storage account is required: https://learn.microsoft.com/en-us/answers/questions/413129/how-to-migrate-aure-standard-storage-to-premium-st Still worth the effort.
Update: PR opened to create the new (premium) storage: jenkins-infra/azure#598
As per https://azure.microsoft.com/en-us/pricing/details/storage/files/, ZRS redundancy is a good choice: the price is a little higher than LRS but still cheap (in the Premium tier) as we only consume around 600 GB.
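As a rough illustration of what the Premium ZRS + NFS combination could look like with the Azure Files CSI driver (the class name, reclaim policy and expansion setting are assumptions, not something decided in this thread):

```yaml
# Hypothetical example: StorageClass for a Premium, zone-redundant, NFS Azure file share.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azurefile-premium-zrs-nfs   # assumed name
provisioner: file.csi.azure.com
parameters:
  skuName: Premium_ZRS              # Premium tier, zone-redundant storage
  protocol: nfs                     # NFS instead of SMB
reclaimPolicy: Retain
allowVolumeExpansion: true
```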
…ns.io (#598) as per jenkins-infra/helpdesk#3917 --------- Co-authored-by: Damien Duportal <[email protected]> Co-authored-by: Hervé Le Meur <[email protected]>
Ref. jenkins-infra/helpdesk#3917 Fixup of #598 This PR corrects attributes, mainly `account_kind` which must have the value `FileStorage` Signed-off-by: Damien Duportal <[email protected]>
Update:
Next step:
Update (
Update (
WiP (
Update: production migration to be started:
Update: Operation on
Next steps:
See #3913 (comment) for overall costs.
This PR recreates the file share deleted in jenkins-infra/helpdesk#3917 (comment), needed for the Core release process. Ref:
- jenkins-infra/helpdesk#3927
Likely the cause of #3927.
Service(s)
get.jenkins.io, mirrors.jenkins.io, pkg.jenkins.io, Update center
Summary
While checking the sources of costs in Azure for the year 2023, it appeared that the storage account `prodjenkinsreleases` (in the resource group `prod-core-releases`) accounts for around 20-25% of the monthly bill. The amount is around $1,800-2,000 per month, which is insane for a 1 TB shared file storage.
Most of the cost comes from the "LRS Write Operations" meter (from $1,000 up to $1,700 in the past months), followed by "Read Operations" (~$180-200) and "Protocol Operations" (~$170-190) per month. The storage itself is really cheap: ~$30 per month, plus $6 for "hot" access (e.g. caching data that is often read from the filer).
This issue is to track the analysis and study to see if we can decrease this cost one way or another.
The usage of this storage is:
- get.jenkins.io (mirrors.jenkins.io): `httpd` requires concurrent access to the same file storage by the 4 processes. We used to have a `ReadWriteMany` persistent volume to achieve this, but we are going into a full `ReadOnlyMany` mode soon as no write is needed (see the sketch after this list).
- trusted.ci.jenkins.io: `sync-recent-releases.sh`, which copies recently released plugins to the storage (with `blobxfer` or eventually, soon, `azcopy`).
- release.ci.jenkins.io: `sync.sh`, which copies recently released binaries (plugins, core packages) and indexes to the storage (with `blobxfer` or eventually, soon, `azcopy`).

=> Both the trusted.ci and release.ci usages are responsible for writing data. They do this by running commands remotely on pkg.jenkins.io (an AWS VM), which is allowed to access the storage through the Azure storage API. This pattern (writing every 3 minutes) is clearly the culprit for the cost here.
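For illustration only (not the actual manifest from the get.jenkins.io deployment), a `ReadOnlyMany` claim over an Azure file share could look like the sketch below; the claim name, namespace, size and storage class are assumptions:

```yaml
# Hypothetical example: read-only claim shared by the 4 httpd/mirrorbits pods.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: binary-core-packages               # assumed name
  namespace: get-jenkins-io                # assumed namespace
spec:
  accessModes:
    - ReadOnlyMany                         # all pods read, nobody writes
  storageClassName: azurefile-csi-premium  # AKS built-in Azure Files CSI class
  resources:
    requests:
      storage: 1Ti
```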
=> Storage consideration (usage/price):
Reproduction steps
No response