Clarify csync2 procedure #261

Merged 2 commits on Nov 4, 2022
Changes from all commits
311 changes: 145 additions & 166 deletions xml/ha_yast_cluster.xml
@@ -656,6 +656,149 @@
</figure>
</sect1>

<sect1 xml:id="sec-ha-installation-setup-conntrackd">
<title>Synchronizing connection status between cluster nodes</title>
<para>
To enable <emphasis>stateful</emphasis> packet inspection for iptables,
configure and use the conntrack tools. This requires the following basic
steps:
</para>
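<para>
For example, a typical stateful rule accepts packets that belong to
connections the kernel is already tracking. This is an illustrative rule
only, not part of the cluster configuration; adapt chains and policy to
your setup:
</para>
<screen>&prompt.root;<command>iptables</command> -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT</screen>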

<!--from fate#311872: It supports the only FTFW syncing mode now.-->
<procedure xml:id="pro-ha-installation-setup-conntrackd">
<title>Configuring <systemitem class="resource">conntrackd</systemitem> with &yast;</title>
<para>
Use the &yast; cluster module to configure the user-space
<systemitem class="daemon">conntrackd</systemitem> (see <xref
linkend="fig-ha-installation-setup-conntrackd"/>). It needs a
dedicated network interface that is not used for other communication
channels. The daemon can be started via a resource agent afterward.
</para>
<step>
<para>
Start the &yast; cluster module and switch to the <guimenu>Configure
conntrackd</guimenu> category.
</para>
</step>
<step>
<para>
Define the <guimenu>Multicast Address</guimenu> to be used for
synchronizing the connection status.
</para>
</step>
<step>
<para>
In <guimenu>Group Number</guimenu>, define a numeric ID for the group
to synchronize the connection status to.
<remark>emap 2011-11-10: To where?
The other nodes? - taroth: good question :), will investigate</remark>
</para>
</step>
<step>
<para>
Click <guimenu>Generate /etc/conntrackd/conntrackd.conf</guimenu> to
create the configuration file for
<systemitem class="daemon">conntrackd</systemitem>.
</para>
</step>
<step>
<para>
If you modified any options for an existing cluster, confirm your
changes and close the cluster module.
</para>
</step>
<step>
<para>
For further cluster configuration, click <guimenu>Next</guimenu> and
proceed with <xref linkend="sec-ha-installation-setup-services"/>.
</para>
</step>
<step>
<para>
Select a <guimenu>Dedicated Interface</guimenu> for synchronizing the
connection status. The IPv4 address of the selected interface is
automatically detected and shown in &yast;. It must already be
configured and it must support multicast.
<!--taroth 2011-11-09: for the records, this has nothing to do with the
corosync conf-->
</para>
</step>
</procedure>
<figure xml:id="fig-ha-installation-setup-conntrackd">
<title>&yast; <guimenu>Cluster</guimenu>&mdash;<systemitem class="resource">conntrackd</systemitem></title>
<mediaobject>
<imageobject role="fo">
<imagedata fileref="yast_cluster_conntrackd.png" width="100%"/>
</imageobject>
<imageobject role="html">
<imagedata fileref="yast_cluster_conntrackd.png" width="75%"/>
</imageobject>
</mediaobject>
</figure>
<para>
After configuring the conntrack tools, you can use them for &lvs;
(see <xref linkend="cha-ha-lb" xrefstyle="select:title"/>).
</para>
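<para>
The generated <filename>/etc/conntrackd/conntrackd.conf</filename>
resembles the following fragment. The multicast address, group number,
addresses, and interface name are illustrative values, assuming the
settings chosen in the procedure above:
</para>
<screen>Sync {
  Mode FTFW {
  }
  Multicast {
    # Multicast address and group number as defined in &yast;
    IPv4_address 225.0.0.50
    Group 3780
    # Dedicated interface for synchronizing the connection status
    IPv4_interface 192.168.2.1
    Interface eth1
    Checksum on
  }
}</screen>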
</sect1>

<sect1 xml:id="sec-ha-installation-setup-services">
<title>Configuring services</title>
<para>
In the &yast; cluster module, define whether to start certain services
on a node at boot time. You can also use the module to start and stop
the services manually. To bring the cluster nodes online and start the
cluster resource manager, &pace; must be running as a service.
</para>
<procedure xml:id="pro-ha-installation-setup-services">
<title>Enabling the cluster services</title>
<step>
<para>
In the &yast; cluster module, switch to the
<guimenu>Service</guimenu> category.
</para>
</step>
<step>
<para>
To start the cluster services each time this node is booted, select the
respective option in the <guimenu>Booting</guimenu> group. If you
select <guimenu>Off</guimenu>, you must start the cluster services
manually each time the node boots, using the following command:
</para>
<screen>&prompt.root;<command>crm</command> cluster start</screen>
</step>
<step>
<para>
To start or stop the cluster services immediately, click the respective button.
</para>
</step>
<step>
<para>
To open the ports in the firewall that are needed for cluster
communication on the current machine, activate <guimenu>Open Port in
Firewall</guimenu>.
</para>
</step>
<step>
<para>
Confirm your changes. Note that the configuration only
applies to the current machine, not to all cluster nodes.
</para>
</step>
</procedure>
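<para>
The same actions are also available from the command line via the crmsh
cluster subcommands, for example (shown here as a reference, not a
replacement for the &yast; procedure above):
</para>
<screen>&prompt.root;<command>crm</command> cluster start
&prompt.root;<command>crm</command> cluster stop
&prompt.root;<command>crm</command> cluster enable</screen>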
<figure>
<title>&yast; <guimenu>Cluster</guimenu>&mdash;services</title>
<mediaobject>
<imageobject role="fo">
<imagedata fileref="yast_cluster_services.png" width="100%"/>
</imageobject>
<imageobject role="html">
<imagedata fileref="yast_cluster_services.png" width="75%"/>
</imageobject>
</mediaobject>
</figure>
</sect1>

<sect1 xml:id="sec-ha-installation-setup-csync2">
<title>Transferring the configuration to all nodes</title>
<para>
@@ -759,13 +902,9 @@
<screen>&prompt.root;<command>systemctl</command> enable csync2.socket</screen>
</step>
<step>
-<para> Confirm your changes. &yast; writes the &csync;
+<para>Click <guimenu>Finish</guimenu>. &yast; writes the &csync;
configuration to <filename>/etc/csync2/csync2.cfg</filename>.</para>
</step>
-<step>
-<para>To start the synchronization process now, proceed with <xref
-linkend="sec-ha-setup-yast-csync2-sync"/>. </para>
-</step>
</procedure>
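<para>
The resulting <filename>/etc/csync2/csync2.cfg</filename> typically
resembles the following fragment. The host names, key path, and list of
synchronized files are illustrative values:
</para>
<screen>group ha_group
{
  host alice;
  host bob;
  key /etc/csync2/key_hagroup;
  include /etc/corosync/corosync.conf;
  include /etc/csync2/csync2.cfg;
}</screen>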
<figure>
<title>&yast; <guimenu>Cluster</guimenu>&mdash;&csync;</title>
@@ -782,24 +921,7 @@

<sect2 xml:id="sec-ha-setup-yast-csync2-sync">
<title>Synchronizing changes with &csync;</title>
-<para> To successfully synchronize the files with &csync;, the following
-requirements must be met: </para>
-<itemizedlist>
-<listitem>
-<para> The same &csync; configuration is available on all cluster
-nodes. </para>
-</listitem>
-<listitem>
-<para> The same &csync; authentication key is available on all cluster
-nodes. </para>
-</listitem>
-<listitem>
-<para> &csync; must be running on <emphasis>all</emphasis> cluster
-nodes. </para>
-</listitem>
-</itemizedlist>
-
-<para> Before the first &csync; run, you therefore need to make the
+<para> Before running &csync; for the first time, you need to make the
following preparations: </para>

<procedure>
@@ -873,149 +995,6 @@
Finished with 1 errors.</screen>
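<para>
For reference, key generation and a manual synchronization run can also
be performed from the command line. The key path is an illustrative
value:
</para>
<screen>&prompt.root;<command>csync2</command> -k /etc/csync2/key_hagroup
&prompt.root;<command>csync2</command> -xv</screen>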
</sect2>
</sect1>

<sect1 xml:id="sec-ha-installation-start">
<title>Bringing the cluster online</title>
<para>