docs/important-notes/known-issues.md

- Resizing a logical volume clone is currently not possible. The resize command does not fail, and the new size
  is shown by `lsblk`, but remounting the filesystem with the resize option fails.
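
A minimal sketch of the symptom, assuming an ext4 filesystem on a hypothetical clone device `/dev/nvme1n1`
(the device path, mount point, and filesystem type are placeholders, not part of the product documentation):

```bash
# After the resize command, the block device already reports the new size.
lsblk /dev/nvme1n1

# Mounting the clone and attempting an online grow of the filesystem fails.
mount /dev/nvme1n1 /mnt/clone
resize2fs /dev/nvme1n1   # fails on a clone, although lsblk shows the new size
```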

## AWS

- During a VPC peering connection, all CIDRs of the requester's VPC should be added to the route table.
  Be aware that there may be more than one CIDR to add; see the sketch below.
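
A hedged sketch of enumerating those CIDRs and adding a route for each with the AWS CLI; the VPC, route
table, and peering connection IDs below are placeholders:

```bash
# List every CIDR block associated with the requester VPC (there may be more than one).
aws ec2 describe-vpcs --vpc-ids vpc-0123456789abcdef0 \
  --query 'Vpcs[0].CidrBlockAssociationSet[].CidrBlock' --output text

# Add a route for each of those CIDRs to the accepter side's route table.
for cidr in $(aws ec2 describe-vpcs --vpc-ids vpc-0123456789abcdef0 \
    --query 'Vpcs[0].CidrBlockAssociationSet[].CidrBlock' --output text); do
  aws ec2 create-route --route-table-id rtb-0123456789abcdef0 \
    --destination-cidr-block "$cidr" \
    --vpc-peering-connection-id pcx-0123456789abcdef0
done
```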

## Control Plane

- Log data is not transferred during a Graylog fail-over, so recent system logs will be empty.
- The API times out if the selected I/O statistics history is too long; see the sketch after this list.
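
As a workaround, query statistics in shorter windows. The following is a purely illustrative sketch; the
command name, volume ID, and `--history` flag are assumptions and may not match the actual CLI:

```bash
# Hypothetical invocation: request one hour of I/O statistics at a time instead of a long range.
sbcli lvol get-io-stats lvol-01 --history 1h
```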

## Storage Plane

- Currently, erasure coding schemas with n>1 (e.g., 2+1, 2+2, 4+1) are not power-fail-safe, as parity
  inconsistencies can occur in rare cases. We are working to resolve this issue as quickly as possible.
- During background deletion of a large volume, if the storage node on which the volume resides goes offline
  or becomes unreachable before the deletion completes, storage garbage may be left behind.
- Node removal currently does not migrate logical volumes. Remove nodes only if they are entirely empty.
  Otherwise, restart existing nodes on new instances or fail individual NVMe devices to phase out old
  hardware; see the sketch after this list.
- The fail-back on restart of a primary node can cause I/O to hang for a few seconds.
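
A hedged sketch of the phase-out path mentioned above, assuming `sbcli storage-node` subcommands named
`list` and `restart`; the exact command names, flags, and the node ID are assumptions:

```bash
# Identify the node running on the hardware to be phased out.
sbcli storage-node list

# Restart that node on a new instance instead of removing it, so its logical volumes stay intact.
sbcli storage-node restart <node-id>
```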