
rpm-ostreed:ERROR:src/libpriv/rpmostree-util.cxx:510 #5048

Closed
farribeiro opened this issue Aug 14, 2024 · 12 comments · Fixed by #5069

Comments

@farribeiro

farribeiro commented Aug 14, 2024

Describe the bug

I rebased from f40 silverblue-testing and then went back to stable (or to f41); now rpm-ostree is reporting errors from rpm-ostreed.

Reproduction steps

  1. Rebase to f40-stable or f41
  2. Reboot
  3. Run a command such as `rpm-ostree status -b`

Expected behavior

rpm-ostree runs the command successfully

Actual behavior

I tried running `rpm-ostree status` and received an error

System details

# rpm-ostree --version
rpm-ostree:
 Version: '2024.6'
 Git: 97806a166eab4f716df6ae5307ff9da75e4f0740
 Features:
  - rust
  - compose
  - container
  - fedora-integration
root@fedora:~# rpm-ostree status -b
Job for rpm-ostreed.service failed because a fatal signal was delivered causing the control process to dump core.
See "systemctl status rpm-ostreed.service" and "journalctl -xeu rpm-ostreed.service" for details.
× rpm-ostreed.service - rpm-ostree System Management Daemon
    Loaded: loaded (/usr/lib/systemd/system/rpm-ostreed.service; static)
   Drop-In: /usr/lib/systemd/system/service.d
            └─10-timeout-abort.conf
    Active: failed (Result: core-dump) since Wed 2024-08-14 20:47:28 -03; 16ms ago
Invocation: 9ad34771ffa34550b295b1de2e7db982
      Docs: man:rpm-ostree(1)
   Process: 7513 ExecStart=rpm-ostree start-daemon (code=dumped, signal=ABRT)
  Main PID: 7513 (code=dumped, signal=ABRT)
  Mem peak: 4.6M
       CPU: 73ms

Aug 14 20:47:28 fedora systemd[1]: Starting rpm-ostreed.service - rpm-ostree System Management Daemon...
Aug 14 20:47:28 fedora rpm-ostree[7513]: Reading config file '/etc/rpm-ostreed.conf'
Aug 14 20:47:28 fedora rpm-ostree[7513]: **
Aug 14 20:47:28 fedora rpm-ostree[7513]: rpm-ostreed:ERROR:src/libpriv/rpmostree-util.cxx:510:gboolean rpmostree_deployment_get_layered_info(OstreeRepo*, OstreeDeployment*, gboolean*, guint*, char**, char***, char***, GVariant**, GVarian…
Aug 14 20:47:28 fedora rpm-ostree[7513]: Bail out! rpm-ostreed:ERROR:src/libpriv/rpmostree-util.cxx:510:gboolean rpmostree_deployment_get_layered_info(OstreeRepo*, OstreeDeployment*, gboolean*, guint*, char**, char***, char***, GVariant*…
Aug 14 20:47:28 fedora systemd[1]: rpm-ostreed.service: Main process exited, code=dumped, status=6/ABRT
Aug 14 20:47:28 fedora systemd[1]: rpm-ostreed.service: Failed with result 'core-dump'.
Aug 14 20:47:28 fedora systemd[1]: Failed to start rpm-ostreed.service - rpm-ostree System Management Daemon.
Hint: Some lines were ellipsized, use -l to show in full.
error: Loading sysroot: exit status: 1

Additional information

Also fails with f40-stable

https://paste.centos.org/view/9b2c5243 (broken)
https://paste.centos.org/view/8c6d7734 (broken)

@farribeiro
Author

I noticed on Bodhi that rpm-ostree 2024.7 is in testing, so this is possibly a conflict with the old version that is still in stable.

@yuntaz0

yuntaz0 commented Aug 19, 2024

Same issue on Fedora 40. After the August 19 update (rpm-ostree 2024.6-1 → 2024.7-1), I am unable to execute any rpm-ostree subcommands from any rollback commits. The following error messages are displayed:

rpm-ostree status
error: Loading sysroot: Failed to invoke RegisterClient: GDBus.Error:org.freedesktop.DBus.Error.NameHasNoOwner: Could not activate remote peer 'org.projectatomic.rpmostree1': startup job failed

The system logs show:

× rpm-ostreed.service - rpm-ostree System Management Daemon
     Loaded: loaded (/usr/lib/systemd/system/rpm-ostreed.service; static)
    Drop-In: /usr/lib/systemd/system/service.d
             └─10-timeout-abort.conf
     Active: failed (Result: core-dump) since Sun 2024-08-18 23:02:25 PDT; 2min 24s ago
       Docs: man:rpm-ostree(1)
    Process: 3421 ExecStart=rpm-ostree start-daemon (code=dumped, signal=ABRT)
   Main PID: 3421 (code=dumped, signal=ABRT)
        CPU: 42ms

Aug 18 23:02:24 fedora systemd[1]: Starting rpm-ostreed.service - rpm-ostree System Management Daemon...
Aug 18 23:02:24 fedora rpm-ostree[3421]: Reading config file '/etc/rpm-ostreed.conf'
Aug 18 23:02:24 fedora rpm-ostree[3421]: **
Aug 18 23:02:24 fedora rpm-ostree[3421]: rpm-ostreed:ERROR:src/libpriv/rpmostree-util.cxx:510:gboolean rpmostree_deployment_get_layered_info(OstreeRepo*, OstreeDeployment*, gboolean*, guint*, char**, char***, char***, GVariant**, GVariant**, GVariant**, GError**): assertion failed: (g_variant_dict_lookup (dict, "rpmostree.modules", "^as", &layered_modules))
Aug 18 23:02:24 fedora rpm-ostree[3421]: Bail out! rpm-ostreed:ERROR:src/libpriv/rpmostree-util.cxx:510:gboolean rpmostree_deployment_get_layered_info(OstreeRepo*, OstreeDeployment*, gboolean*, guint*, char**, char***, char***, GVariant**, GVariant**, GVariant**, GError**): assertion failed: (g_variant_dict_lookup (dict, "rpmostree.modules", "^as", &layered_modules))
Aug 18 23:02:25 fedora systemd[1]: rpm-ostreed.service: Main process exited, code=dumped, status=6/ABRT
Aug 18 23:02:25 fedora systemd[1]: rpm-ostreed.service: Failed with result 'core-dump'.
Aug 18 23:02:25 fedora systemd[1]: Failed to start rpm-ostreed.service - rpm-ostree System Management Daemon.

What should I do to resolve this issue?

@Gawdl3y

Gawdl3y commented Aug 21, 2024

Rebased to Fedora Kinoite 41 (from 40) and am now experiencing this issue as well. Errors are identical to those posted previously.

schuyler@Schuyler-LT:/var/home/schuyler$ rpm-ostree status
error: Loading sysroot: Failed to invoke RegisterClient: GDBus.Error:org.freedesktop.DBus.Error.NameHasNoOwner: Could not activate remote peer 'org.projectatomic.rpmostree1': startup job failed
schuyler@Schuyler-LT:/var/home/schuyler$ sudo systemctl status rpm-ostreed.service
Place your finger on the fingerprint reader
× rpm-ostreed.service - rpm-ostree System Management Daemon
     Loaded: loaded (/usr/lib/systemd/system/rpm-ostreed.service; static)
    Drop-In: /usr/lib/systemd/system/service.d
             └─10-timeout-abort.conf
     Active: failed (Result: core-dump) since Wed 2024-08-21 17:43:52 EDT; 17s ago
 Invocation: f58bb6b3e1114cb18e4d223d1b6cbc4b
       Docs: man:rpm-ostree(1)
    Process: 6325 ExecStart=rpm-ostree start-daemon (code=dumped, signal=ABRT)
   Main PID: 6325 (code=dumped, signal=ABRT)
   Mem peak: 4.1M
        CPU: 139ms

Aug 21 17:43:51 Schuyler-LT systemd[1]: Starting rpm-ostreed.service - rpm-ostree System Management Daemon...
Aug 21 17:43:51 Schuyler-LT rpm-ostree[6325]: Reading config file '/etc/rpm-ostreed.conf'
Aug 21 17:43:51 Schuyler-LT rpm-ostree[6325]: **
Aug 21 17:43:51 Schuyler-LT rpm-ostree[6325]: rpm-ostreed:ERROR:src/libpriv/rpmostree-util.cxx:510:gboolean rpmostree_deployment_get_layered_info(OstreeRepo*, OstreeDeployment*, gboolean*, guint*, char**, char***, char***>
Aug 21 17:43:51 Schuyler-LT rpm-ostree[6325]: Bail out! rpm-ostreed:ERROR:src/libpriv/rpmostree-util.cxx:510:gboolean rpmostree_deployment_get_layered_info(OstreeRepo*, OstreeDeployment*, gboolean*, guint*, char**, char**>
Aug 21 17:43:52 Schuyler-LT systemd[1]: rpm-ostreed.service: Main process exited, code=dumped, status=6/ABRT
Aug 21 17:43:52 Schuyler-LT systemd[1]: rpm-ostreed.service: Failed with result 'core-dump'.
Aug 21 17:43:52 Schuyler-LT systemd[1]: Failed to start rpm-ostreed.service - rpm-ostree System Management Daemon.
schuyler@Schuyler-LT:/var/home/schuyler$ rpm-ostree --version
rpm-ostree:
 Version: '2024.6'
 Git: 97806a166eab4f716df6ae5307ff9da75e4f0740
 Features:
  - rust
  - compose
  - container
  - fedora-integration

@queria

queria commented Aug 25, 2024

Hit the same issue, except in my case I was and stayed on Fedora 40 and only moved back a bit (to check whether it would make an unrelated issue disappear), with `rpm-ostree deploy 40.20240728.0`. Now `rpm-ostree status` instead prints the same rpm-ostreed failure logs mentioned by others:

Aug 25 20:36:18 hades rpm-ostree[2073]: rpm-ostreed:ERROR:src/libpriv/rpmostree-util.cxx:510:gboolean rpmostree_deployment_get_layered_info(OstreeRepo*, OstreeDeployment*, gboolean*, guint*, char**, char***, char***, GVariant**, GVariant**, GVariant**, GError**): assertion failed: (g_variant_dict_lookup (dict, "rpmostree.modules", "^as", &layered_modules))
Aug 25 20:36:18 hades rpm-ostree[2073]: Bail out! rpm-ostreed:ERROR:src/libpriv/rpmostree-util.cxx:510:gboolean rpmostree_deployment_get_layered_info(OstreeRepo*, OstreeDeployment*, gboolean*, guint*, char**, char***, char***, GVariant**, GVariant**, GVariant**, GError**): assertion failed: (g_variant_dict_lookup (dict, "rpmostree.modules", "^as", &layered_modules))

@HuijingHei
Member

Maybe it is related to 4238364; unfortunately I cannot reproduce it using Fedora CoreOS 40.

@cgwalters
Copy link
Member

cgwalters commented Aug 27, 2024

Maybe it is related to 4238364, unluckily I can not reproduce using Fedora CoreOS 40.

I think you're right. An older rpm-ostree will fail to parse deployments written by a newer version. Fixing that is trivial: we can just continue to add an empty variant in this section of code: 4238364#diff-b12d7cfa4eac9038606d39894fec849e9d89c831185e94a4e3bedbb70dd0a43fL4676

Will do a PR. EDIT: done in #5069

@dubst3pp4

dubst3pp4 commented Aug 29, 2024

Same here with Fedora Silverblue 40. I did a rollback to image 40.20240817.0 and now rpm-ostree isn't working anymore.

Regarding the fix of @cgwalters: does this mean that I'm not able to roll back to an older version that doesn't contain this fix? Is there any workaround so that I can get my system working again without returning to a newer version via GRUB?

@dubst3pp4

@cgwalters thanks for fixing this. But is there a workaround without updating to a new version of rpm-ostree, so that I can keep using my current image?

@cgwalters
Member

If you've done a rollback, a workaround is probably to run e.g. `ostree admin undeploy 0` to prune the deployment without the metadata.
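As a session sketch (the deployment index is an assumption about the local layout — check the `ostree admin status` output first, since `undeploy` takes the index in the order deployments are printed, starting at 0):

```shell
# List deployments; they are indexed from 0 in the order printed.
ostree admin status
# Remove the deployment the older daemon cannot parse
# (index 0 here is illustrative -- pick it from the status output above).
sudo ostree admin undeploy 0
```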

@dubst3pp4

If you've done a rollback, a workaround is probably to e.g. ostree admin undeploy 0 to prune the deployment without the metadata.

Currently, I have two deployments:

$ ostree admin status
* fedora 4d480c52e31f328469db96b7f41543d97646074dffffcc15dd4ddcbc1f549faa.0
    Version: 40.20240817.0
    origin: <unknown origin type>
  fedora 51151b22b8c9960668612a87b453dbc05c11050d34e59df593bf64626d742ac7.0 (rollback)
    Version: 40.20240829.0
    origin: <unknown origin type>

The active deployment (40.20240817.0) is the one with the broken rpm-ostreed.service, which I want to keep because of an older podman package.
The more up-to-date deployment (40.20240829.0) has a working rpm-ostreed.service, but a buggy podman version.
So I would like to be able to use the older deployment (40.20240817.0) with a fixed rpm-ostreed.service, so that I can upgrade to the newest image as soon as a fixed podman package is available.

I think I don't understand ostree enough to have a clue how your answer could help me with my problem. Sorry for that, any help is very welcome 😃

@HuijingHei
Member

Assuming you are in 40.20240817.0, how about running `ostree admin set-default 1`, rebooting into 40.20240829.0 (with the working rpm-ostreed.service), and then downgrading the podman version?
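Sketched as a shell session (the index assumes the two-deployment layout shown in the `ostree admin status` output above, where the working 40.20240829.0 is the second entry):

```shell
# Make the rollback entry (index 1, the working 40.20240829.0) the default,
# then reboot into it so rpm-ostreed can start again.
sudo ostree admin set-default 1
sudo systemctl reboot
```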

@dubst3pp4

Assume you are in 40.20240817.0, how about running ostree admin set-default 1 and reboot into 40.20240829.0 (with working rpm-ostreed.service), then downgrade the podman version ?

Thanks, you're right. I did exactly that and downgraded podman to the required version with

rpm-ostree override replace https://koji.fedoraproject.org/koji/buildinfo\?buildID\=2484932

Now everything works again. Thanks for your help 👋 😃


7 participants