
Enable the admin socket and log file for librbd client #204

Closed
wants to merge 1 commit

Conversation


taodd commented Aug 23, 2018

Need to sync this to the related OpenStack charm project.

Related-Bug: LP#1786874
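
For context, a minimal sketch of the kind of [client] stanza this change is about, assuming the standard ceph.conf metavariables ($cluster, $name, $pid); the exact paths and section are assumptions, not the charm's rendered output:

```ini
# Sketch only: hypothetical paths, not the actual template output.
[client]
# One socket/log per client process; $pid expands to the process id.
admin socket = /var/run/ceph/$cluster-$name.$pid.asok
log file = /var/log/ceph/$cluster-$name.$pid.log
```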

taodd commented Sep 3, 2018

It looks like cinder-volume uses too many rados clients, which would generate too many ceph client log files.
Closing this for now.

taodd closed this Sep 3, 2018

taodd commented Sep 4, 2018

Maybe I'm wrong; I might have misunderstood $pid. Perhaps $pid refers to the cinder-volume process, not the thread. I will need to do some experiments.


dosaboy commented Sep 4, 2018

I've tried enabling this in my local env on a cinder node. I then created 100 (rbd) volumes and there were 0 log files in /var/log/ceph, which suggests that librbd does not support client logging. As a side test I ran an rbd CLI command and it created /var/log/ceph/client.admin.550767.log.


taodd commented Apr 2, 2019

Two problems:

  1. AppArmor might stop cinder/nova/glance from creating the admin socket and log file.
  2. Cinder and glance should each use only one librbd client, so there should be only one log file and admin socket per service; for Nova, however, it is one log file/admin socket per instance, so a host with many instances would end up with too many of them (see the sketch below).
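
One possible way around the second problem, sketched here with hypothetical cephx client names (the actual names depend on the charms), would be to scope the settings to the cinder and glance clients only rather than to every client:

```ini
# Sketch only: hypothetical client names; the nova/qemu clients are left at
# the defaults so a busy compute host does not accumulate one socket and
# log file per instance.
[client.cinder]
admin socket = /var/run/ceph/$cluster-$name.$pid.asok
log file = /var/log/ceph/$cluster-$name.$pid.log

[client.glance]
admin socket = /var/run/ceph/$cluster-$name.$pid.asok
log file = /var/log/ceph/$cluster-$name.$pid.log
```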

lathiat added a commit to lathiat/charm-helpers that referenced this pull request May 20, 2021
Enable 'log to syslog' and 'err to stderr' for ceph clients.

Currently the default configuration disables "log to syslog" for ceph,
primarily targeting the ceph OSD units, which log to files in
/var/log/ceph instead. However, this value is also used by ceph clients,
and as a result a ceph client such as libvirt/qemu will not log errors
*anywhere*, even when one occurs, making issues such as failure to
connect to a ceph cluster or invalid keyrings difficult to diagnose.
Hence we always enable 'log to syslog' for ceph clients.

We could alternatively enable logging to a file; however, there are
apparmor and permissions issues that mean access to /var/log/ceph is not
guaranteed, and in some cases multiple PIDs may write there, creating
many different files, so syslog seems like the best choice.
(Reference: juju#204)

Also enable 'err to stderr', which makes it more likely that errors are
surfaced in other important locations such as
/var/log/libvirt/qemu/instance-*.log and on the command line.

Partial-Bug: #1786874
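
As a rough illustration, the options named in that commit message would correspond to something like the following in the rendered ceph.conf; the section placement here is an assumption:

```ini
# Sketch only: illustrates the settings described in the commit above.
[global]
log to syslog = true
err to stderr = true
```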