=head1 NAME
sbd - STONITH Block Device daemon
=head1 SYNOPSIS
sbd <-d F</dev/...>> [options] C<command>
=head1 SUMMARY
SBD provides a node fencing mechanism (Shoot the other node in the head,
STONITH) for Pacemaker-based clusters through the exchange of messages
via shared block storage such as a SAN, iSCSI, or FCoE device. This
isolates the fencing mechanism from changes in firmware version or
dependencies on specific firmware controllers, and it can be used as a
STONITH mechanism in all configurations that have reliable shared
storage.
SBD can also be used without any shared storage. In this mode, the
watchdog device will be used to reset the node if it loses quorum, if
any monitored daemon is lost and not recovered, or if Pacemaker decides
that the node requires fencing.
The F<sbd> binary implements both the daemon that watches the message
slots and the management tool for interacting with the block storage
device(s). The mode of operation is selected via the C<command>
parameter; some of these modes take additional parameters.
To use SBD with shared storage, you must first C<create> the messaging
layout on one to three block devices. Second, configure
F<@configdir@/sbd> to list those devices (and possibly adjust other
options), and restart the cluster stack on each node to ensure that
C<sbd> is started. Third, configure the C<external/sbd> fencing
resource in the Pacemaker CIB.
Each of these steps is documented in more detail below the description
of the command options.
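For illustration, a minimal single-device setup using the example
device name from this page might look as follows (adapt device and node
names to your environment):
    # 1. Initialize the metadata (once, from any node):
    sbd -d /dev/sda1 create
    # 2. On every node, list the device in @configdir@/sbd, e.g.
    #    SBD_DEVICE="/dev/sda1"
    #    and restart the cluster stack.
    # 3. Configure the fencing resource in the CIB (see below):
    #    primitive fencing-sbd stonith:external/sbd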
C<sbd> can only be used as root.
=head2 GENERAL OPTIONS
=over
=item B<-d> F</dev/...>
Specify the block device(s) to be used. If you have more than one,
specify this option up to three times. This parameter is mandatory for
all modes, since SBD always needs a block device to interact with (see
the example following this list).
This man page uses F</dev/sda1>, F</dev/sdb1>, and F</dev/sdc1> as
example device names for brevity. However, in your production
environment, you should instead always refer to them by using the long,
stable device name (e.g.,
F</dev/disk/by-id/dm-uuid-part1-mpath-3600508b400105b5a0001500000250000>).
=item B<-v|-vv|-vvv>
Enable verbose, debug, or debug-library logging, respectively (optional).
=item B<-h>
Display a concise summary of C<sbd> options.
=item B<-n> I<node>
Set local node name; defaults to C<uname -n>. This should not need to be
set.
=item B<-R>
Do B<not> enable realtime priority. By default, C<sbd> runs at realtime
priority, locks itself into memory, and also acquires highest IO
priority to protect itself against interference from other processes on
the system. This is a debugging-only option.
=item B<-I> I<N>
Async IO timeout (defaults to 3 seconds, optional). You should not need
to adjust this unless your IO setup is exceptionally slow.
(In daemon mode, the watchdog is refreshed when the majority of devices
could be read within this time.)
=back
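For instance, to run one of the commands described below against all
three example devices, repeat the B<-d> option once per device:
    sbd -d /dev/sda1 -d /dev/sdb1 -d /dev/sdc1 list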
=head2 create
Example usage:
    sbd -d /dev/sdc2 -d /dev/sdd3 create
If you specify the I<create> command, sbd will write a metadata header
to the device(s) specified and also initialize the messaging slots for
up to 255 nodes.
B<Warning>: This command will not prompt for confirmation. Roughly the
first megabyte of the specified block device(s) will be overwritten
immediately and without backup.
This command accepts a few options to adjust the default timings that
are written to the metadata (to ensure they are identical across all
nodes accessing the device).
=over
=item B<-1> I<N>
Set watchdog timeout to N seconds. This depends mostly on your storage
latency; the majority of devices must be successfully read within this
time, or else the node will self-fence.
If your sbd device(s) reside on a multipath setup or iSCSI, this should
be the time required to detect a path failure. You may be able to reduce
this if your device outages are independent, or if you are using the
Pacemaker integration.
=item B<-2> I<N>
Set slot allocation timeout to N seconds. You should not need to tune
this.
=item B<-3> I<N>
Set daemon loop timeout to N seconds. You should not need to tune this.
=item B<-4> I<N>
Set I<msgwait> timeout to N seconds. This should be twice the I<watchdog>
timeout. This is the time after which a message written to a node's slot
will be considered delivered (or long enough for the node to detect that
it needed to self-fence). See the example following this list.
This also affects the I<stonith-timeout> in Pacemaker's CIB; see below.
=back
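For example, to initialize a device with a 10-second watchdog timeout
and the matching 20-second I<msgwait> timeout (values chosen purely for
illustration; size them according to your storage latency as described
above):
    sbd -d /dev/sda1 -1 10 -4 20 create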
=head2 list
Example usage:
    # sbd -d /dev/sda1 list
    0   hex-0   clear
    1   hex-7   clear
    2   hex-9   clear
List all allocated slots on the device, and any messages. You should see
all cluster nodes that have ever been started against this device. Nodes
that are currently running should have a I<clear> state; nodes that have
been fenced, but not yet restarted, will show the appropriate fencing
message.
=head2 dump
Example usage:
    # sbd -d /dev/sda1 dump
    ==Dumping header on disk /dev/sda1
    Header version     : 2
    Number of slots    : 255
    Sector size        : 512
    Timeout (watchdog) : 15
    Timeout (allocate) : 2
    Timeout (loop)     : 1
    Timeout (msgwait)  : 30
    ==Header on disk /dev/sda1 is dumped
Dump meta-data header from device.
=head2 watch
Example usage:
    sbd -d /dev/sdc2 -d /dev/sdd3 -P watch
This command will make C<sbd> start in daemon mode. It will constantly
monitor the message slot of the local node for incoming messages, check
device reachability, and optionally take Pacemaker's state into account.
C<sbd> B<must> be started on boot before the cluster stack! See below
for how to enable this according to your boot environment.
The options for this mode are rarely specified on the command line
directly, but are most frequently set via F<@configdir@/sbd>.
The daemon also constantly monitors connectivity to the storage devices,
and self-fences in case the partition becomes unreachable, guaranteeing
that it cannot be disconnected from fencing messages.
A node slot is automatically allocated on the device(s) the first time
the daemon starts watching the device; hence, manual allocation is not
usually required.
If a watchdog is used together with C<sbd>, as is strongly recommended,
the watchdog is activated at the initial start of the sbd daemon. The
watchdog is then refreshed every time the majority of SBD devices has
been successfully read. Using a watchdog provides additional protection
against C<sbd> crashing.
If the Pacemaker integration is activated, C<sbd> will B<not> self-fence
if device majority is lost, if:
=over
=item 1.
The partition the node is in is still quorate according to the CIB;
=item 2.
it is still quorate according to Corosync's node count;
=item 3.
the node itself is considered online and healthy by Pacemaker.
=back
This allows C<sbd> to survive temporary outages of the majority of
devices. However, while the cluster is in such a degraded state, it can
neither successfully fence nor be shut down cleanly (as taking the
cluster below the quorum threshold will immediately cause all remaining
nodes to self-fence). In short, it will not tolerate any further faults.
Please repair the system before continuing.
There is one C<sbd> process that acts as a master to which all watchers
report; one per device to monitor the node's slot; and, optionally, one
that handles the Pacemaker integration.
=over
=item B<-W>
Enable or disable use of the system watchdog to protect against the sbd
processes failing and the node being left in an undefined state. Specify
this once to enable, twice to disable.
Defaults to I<enabled>.
=item B<-w> F</dev/watchdog>
This can be used to override the default watchdog device used and should not
usually be necessary.
=item B<-p> F<@runstatedir@/sbd.pid>
This option can be used to specify a pidfile for the main sbd process.
=item B<-F> I<N>
Number of failures before a failing servant process will not be restarted
immediately until the dampening delay has expired. If set to zero, servants
will be restarted immediately and indefinitely. If set to one, a failed
servant will be restarted once every B<-t> seconds. If set to a different
value, the servant will be restarted that many times within the dampening
period and then delay.
Defaults to I<1>.
=item B<-t> I<N>
Dampening delay before faulty servants are restarted. Combined with C<-F 1>,
the most logical way to tune the restart frequency of servant processes.
Default is 5 seconds.
If set to zero, processes will be restarted indefinitely and immediately.
=item B<-P>
Enable Pacemaker integration which checks Pacemaker quorum and node health.
Specify this once to enable, twice to disable.
Defaults to I<enabled>.
=item B<-S> I<N>
Set the start mode. (Defaults to I<0>.)
If this is set to zero, sbd will always start up unconditionally,
regardless of whether the node was previously fenced or not.
If set to one, sbd will only start if the node was previously shut down
cleanly (as indicated by an exit request message in the slot), or if the
slot is empty. A reset, crashdump, or power-off request in any slot will
halt the start-up.
This is useful to prevent nodes from rejoining if they were faulty. The
node must be manually "unfenced" by sending an empty message to it:
    sbd -d /dev/sda1 message node1 clear
=item B<-s> I<N>
Set the start-up wait time for devices. (Defaults to I<120>.)
Dynamic block devices such as iSCSI might not be fully initialized and
present yet. This allows one to set a timeout for waiting for devices to
appear on start-up. If set to 0, start-up will be aborted immediately if
no devices are available.
=item B<-Z>
Enable trace mode. B<Warning: this is unsafe for production, use at your
own risk!> Specifying this once will turn all reboots or power-offs, be
they caused by self-fence decisions or messages, into a crashdump.
Specifying this twice will just log them but not continue running.
=item B<-T>
By default, the daemon will set the watchdog timeout as specified in the
device metadata. However, this does not work for every watchdog device.
In this case, you must manually ensure that the watchdog timeout used by
the system correctly matches the SBD settings, and then specify this
option to allow C<sbd> to continue with start-up.
=item B<-5> I<N>
Warn if the time interval for tickling the watchdog exceeds this many seconds.
Since the node is unable to log the watchdog expiry (it reboots immediately
without a chance to write its logs to disk), this is very useful for getting
an indication that the watchdog timeout is too short for the IO load of the
system.
Default is about 3/5 of the watchdog timeout; set to zero to disable.
=item B<-C> I<N>
Watchdog timeout to set before crashdumping. If SBD is set to crashdump
instead of reboot - either via the trace mode settings or the I<external/sbd>
fencing agent's parameter -, SBD will adjust the watchdog timeout to this
setting before triggering the dump. Otherwise, the watchdog might trigger and
prevent a successful crashdump from ever being written.
Set to zero (= default) to disable.
=item B<-r> I<N>
Actions to be executed when the watchers don't timely report to the sbd
master process or one of the watchers detects that the master process
has died.
Set timeout-action to comma-separated combination of
noflush|flush plus reboot|crashdump|off.
If just one of both is given the other stays at the default.
This doesn't affect actions like off, crashdump, reboot explicitly
triggered via message slots.
And it does as well not configure the action a watchdog would
trigger should it run off (there is no generic interface).
Defaults to flush,reboot.
=back
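As an illustration of the timeout-action syntax described above, the
daemon could be started manually with:
    sbd -d /dev/sda1 -r noflush,crashdump watch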
=head2 allocate
Example usage:
    sbd -d /dev/sda1 allocate node1
Explicitly allocates a slot for the specified node name. This should
rarely be necessary, as every node will automatically allocate itself a
slot the first time it starts up in watch mode.
=head2 message
Example usage:
    sbd -d /dev/sda1 message node1 test
Writes the specified message to the node's slot. This is rarely done
directly, but rather abstracted via the C<external/sbd> fencing agent
configured as a cluster resource.
Supported message types are:
=over
=item test
This only generates a log message on the receiving node and can be used
to check whether SBD is seeing the device. Note that this could
overwrite a fencing request sent by the cluster, so it should not be
used in production.
=item reset
Reset the target upon receipt of this message.
=item off
Power-off the target.
=item crashdump
Cause the target node to crashdump.
=item exit
This will make the C<sbd> daemon exit cleanly on the target. You should
B<not> send this message manually; this is handled properly during
shutdown of the cluster stack. Manually stopping the daemon means the
node is unprotected!
=item clear
This message indicates that no real message has been sent to the node.
You should not set this manually; C<sbd> will clear the message slot
automatically during start-up, and setting this manually could overwrite
a fencing message by the cluster.
=back
=head2 query-watchdog
Example usage:
    sbd query-watchdog
Check for available watchdog devices and print some information about them.
B<Warning>: This command will arm the watchdog during the query, and if
your watchdog refuses to disarm (for example, because its kernel module
has the 'nowayout' parameter set), this will reset your system.
=head2 test-watchdog
Example usage:
    sbd test-watchdog [-w /dev/watchdog3]
Test the specified watchdog device (F</dev/watchdog> by default).
B<Warning>: This command will arm the watchdog and reset your system if
the watchdog is working properly! If issued from an interactive session,
it will prompt for confirmation.
=head1 Base system configuration
=head2 Configure a watchdog
It is highly recommended that you configure your Linux system to load a
watchdog driver with hardware assistance (as is available on most modern
systems), such as I<hpwdt>, I<iTCO_wdt>, or others. As a fall-back, you
can use the I<softdog> module.
No other software must access the watchdog timer; it can only be
accessed by one process at any given time. Some hardware vendors ship
systems management software that uses the watchdog for system resets
(e.g., the HP ASR daemon). Such software has to be disabled if the
watchdog is to be used by SBD.
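As a sketch for systemd-based distributions (the file name under
F</etc/modules-load.d/> is an arbitrary choice), the fallback I<softdog>
module could be loaded as follows:
    # load the software watchdog immediately
    modprobe softdog
    # and on every subsequent boot
    echo softdog > /etc/modules-load.d/watchdog.conf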
=head2 Choosing and initializing the block device(s)
First, you have to decide if you want to use one, two, or three devices.
If you are using multiple ones, they should reside on independent
storage setups. Putting all three of them on the same logical unit for
example would not provide any additional redundancy.
The SBD device can be connected via Fibre Channel, Fibre Channel over
Ethernet, or even iSCSI. Thus, an iSCSI target can become a sort of
network-based quorum server; the advantage is that it does not require
a smart host at your third location, just block storage.
The SBD partitions themselves B<must not> be mirrored (via MD,
DRBD, or the storage layer itself), since this could result in a
split-mirror scenario. Nor can they reside on cLVM2 volume groups, since
they must be accessed by the cluster stack before it has started the
cLVM2 daemons; hence, these should be either raw partitions or logical
units on (multipath) storage.
The block device(s) must be accessible from all nodes. (While it is not
necessary that they share the same path name on all nodes, this is
considered a very good idea.)
SBD will only use about one megabyte per device, so you can easily
create a small partition, or very small logical units. (The size of the
SBD device depends on the block size of the underlying device. Thus, 1MB
is fine on plain SCSI devices and SAN storage with 512 byte blocks. On
the IBM s390x architecture in particular, disks default to 4k blocks,
and thus require roughly 4MB.)
The number of devices will affect the operation of SBD as follows:
=over
=item One device
In its simplest implementation, you use one device only. This is
appropriate for clusters where all your data is on the same shared
storage (with internal redundancy) anyway; the SBD device then does not
introduce an additional single point of failure.
If the SBD device is not accessible, the daemon will fail to start and
inhibit start-up of cluster services.
=item Two devices
This configuration is a trade-off, primarily aimed at environments where
host-based mirroring is used, but no third storage device is available.
SBD will not commit suicide if it loses access to one mirror leg; this
allows the cluster to continue to function even in the face of one outage.
However, SBD will not fence the other side while only one mirror leg is
available, since it does not have enough knowledge to detect an asymmetric
split of the storage. So it will not be able to automatically tolerate a
second failure while one of the storage arrays is down. (Though you
can use the appropriate crm command to acknowledge the fence manually.)
It will not start unless both devices are accessible on boot.
=item Three devices
In this most reliable and recommended configuration, SBD will only
self-fence if more than one device is lost; hence, this configuration is
resilient against temporary single device outages (be it due to failures
or maintenance). Fencing messages can still be successfully relayed if
at least two devices remain accessible.
This configuration is appropriate for more complex scenarios where
storage is not confined to a single array. For example, host-based
mirroring solutions could have one SBD per mirror leg (not mirrored
itself), and an additional tie-breaker on iSCSI.
It will only start if at least two devices are accessible on boot.
=back
After you have chosen the devices and created the appropriate partitions
and perhaps multipath alias names to ease management, use the C<sbd create>
command described above to initialize the SBD metadata on them.
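For example, to initialize all three example devices in one step
(remember that this overwrites roughly the first megabyte of each device
without confirmation):
    sbd -d /dev/sda1 -d /dev/sdb1 -d /dev/sdc1 create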
=head3 Sharing the block device(s) between multiple clusters
It is possible to share the block devices between multiple clusters,
provided the total number of nodes accessing them does not exceed
I<255>, and provided they all share the same SBD timeouts (since these
are part of the metadata).
If you are using multiple devices, this can reduce the setup overhead
required. However, you should B<not> share devices between clusters in
different security domains.
=head2 Configure SBD to start on boot
On systems using C<sysvinit>, the C<openais> or C<corosync> system
start-up scripts must handle starting or stopping C<sbd> as required
before starting the rest of the cluster stack.
For C<systemd>, sbd simply has to be enabled using
    systemctl enable sbd.service
The daemon is brought online on each node before corosync and Pacemaker
are started, and terminated only after all other cluster components have
been shut down, ensuring that cluster resources are never activated
without SBD supervision.
=head2 Configuration via sysconfig
The system instance of C<sbd> is configured via F<@configdir@/sbd>.
In this file, you must specify the device(s) used, as well as any
options to pass to the daemon:
    SBD_DEVICE="/dev/sda1;/dev/sdb1;/dev/sdc1"
    SBD_PACEMAKER="true"
C<sbd> will fail to start if no C<SBD_DEVICE> is specified. See the
installed template, or the section on configuration via environment
below, for more options that can be configured here.
In general, configuration done via command-line parameters takes
precedence over the configuration file.
=head2 Configuration via environment
=over
@environment_section@
=back
=head2 Testing the sbd installation
After a restart of the cluster stack on this node, you can now try
sending a test message to it as root, from this or any other node:
    sbd -d /dev/sda1 message node1 test
The node will acknowledge the receipt of the message in the system logs:
    Aug 29 14:10:00 node1 sbd: [13412]: info: Received command test from node2
This confirms that SBD is indeed up and running on the node, and that it
is ready to receive messages.
Make B<sure> that F<@configdir@/sbd> is identical on all cluster
nodes, and that all cluster nodes are running the daemon.
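One simple way to verify this (the node names are placeholders for your
actual cluster nodes) is to compare checksums across all nodes:
    for n in node1 node2 node3; do ssh $n md5sum @configdir@/sbd; done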
=head1 Pacemaker CIB integration
=head2 Fencing resource
Pacemaker can only interact with SBD to issue a node fence if there is a
configured fencing resource. This should be a primitive, not a clone, as
follows:
    primitive fencing-sbd stonith:external/sbd \
        params pcmk_delay_max=30
This will automatically use the same devices as configured in
F<@configdir@/sbd>.
While you should not configure this as a clone (Pacemaker will register
the fencing device on each node automatically), the I<pcmk_delay_max>
setting enables a random fencing delay. This ensures that if a
split-brain scenario occurs in a two-node cluster, one of the nodes has
a better chance to survive, avoiding double fencing.
SBD also supports turning the reset request into a crash request, which
may be helpful for debugging if you have kernel crashdumping configured;
then, every fence request will cause the node to dump core. You can
enable this via the C<crashdump="true"> parameter on the fencing
resource. This is B<not> recommended for production use, but only for
debugging phases.
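For example (again, for debugging only), the same primitive with
crashdumps enabled:
    primitive fencing-sbd stonith:external/sbd \
        params pcmk_delay_max=30 crashdump="true"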
=head2 General cluster properties
You must also enable STONITH in general, and set the STONITH timeout to
be at least twice the I<msgwait> timeout you have configured, to allow
enough time for the fencing message to be delivered. If your I<msgwait>
timeout is 60 seconds, this is a possible configuration:
    property stonith-enabled="true"
    property stonith-timeout="120s"
B<Caution>: if I<stonith-timeout> is too low for I<msgwait> and the
system overhead, sbd will never be able to successfully complete a fence
request. This will create a fencing loop.
Note that the sbd fencing agent will try to detect this and
automatically extend the I<stonith-timeout> setting to a reasonable
value, on the assumption that sbd modifying your configuration is
preferable to not fencing.
=head1 Management tasks
=head2 Recovering from temporary SBD device outage
If you have multiple devices, failure of a single device is not
immediately fatal. C<sbd> will attempt to restart the monitor for the
device every 5 seconds by default. However, you can tune this via the
options to the I<watch> command.
In case you wish to immediately force a restart of all currently
disabled monitor processes, you can send a I<SIGUSR1> to the SBD
I<inquisitor> process.
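For example, assuming the default pidfile location (see the B<-p> option
above) holds the process ID of the main sbd process:
    kill -USR1 "$(cat @runstatedir@/sbd.pid)"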
=head1 LICENSE
Copyright (C) 2008-2013 Lars Marowsky-Bree
This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public
License as published by the Free Software Foundation; either
version 2 of the License, or (at your option) any later version.
This software is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
General Public License for more details.
For details see the GNU General Public License at
http://www.gnu.org/licenses/gpl-2.0.html (version 2) and/or
http://www.gnu.org/licenses/gpl.html (the newest as per "any later").