
Commit 9983276

Update known-issues.md
1 parent b682aa7 commit 9983276

File tree

1 file changed: +10 -13 lines changed


docs/important-notes/known-issues.md

+10 -13
@@ -7,24 +7,21 @@ weight: 20500

 - Currently, it is not possible to resize a logical volume clone. The resize command does not fail, and the new size
   is shown by `lsblk`, but remounting the filesystem with the resize option fails.
-- High-availability is not working for Graylog. In case of a management node failure, the Graylog monitoring service
-  will fail over, but recent system logs will be empty. However, historic logs remain on S3. This issue is to be
-  resolved with the next patch.

 ## AWS

 - During a VPC peering connection, all possible CIDRs from the requester's VPC should be added to the route table.
   Be aware that there might be more than one CIDR to add.

-## Bare Metal
+## Control Plane

-- At the moment, manually scanning for new devices isn't possible (when adding them to the nodes). As a workaround, a
-  storage node restart should be performed. This will automatically detect all newly added devices.
+- Log data is not transferred during a Graylog fail-over, so recent system logs will be empty.
+- The API times out if the selected I/O statistics history is too long.
+
+## Storage Plane

-## Simplyblock
-
-- Write I/O errors after setting a new cluster map (2+2, 2-3 nodes, 4-6 devices)
-- 2+1: I/O interruption with error when removing a node that was previously added
-- 2+1: Expansion migration completes with errors when there is a removed node in the cluster map
-- I/O hangs and I/O errors on low memory (1+1, EBS). As a workaround, assign a minimum of 6 GB of huge-page memory to a
-  storage node
+- Currently, erasure coding schemas with n > 1 (e.g., 2+1, 2+2, 4+1) are not power-fail safe, as parity inconsistencies can occur in rare cases.
+  We are working with maximum effort to resolve this issue.
+- During background deletion of large volumes, if the storage node on which the deleted volume resides goes offline or becomes unreachable before the delete completes, storage garbage may be left behind.
+- Node removal currently does not migrate logical volumes. Remove nodes only if they are entirely empty. Otherwise, restart existing nodes on new instances or fail individual NVMe devices to phase out old hardware.
+- The fail-back on restart of a primary node can cause I/O to hang for a few seconds.
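For the logical-volume clone resize note in the diff above, a minimal shell sketch of how the symptom shows up. The device path `/dev/nvme1n2`, the mount point, and the ext4 filesystem (with `resize2fs` standing in for the remount-with-resize step) are illustrative assumptions, not part of the original note:

```sh
# Assumed clone device; after the resize command, the block layer already
# reports the new capacity:
lsblk /dev/nvme1n2        # shows the new, larger size

# Attempting to grow the mounted filesystem to that size fails on clones
# (assuming ext4; resize2fs is used here as the online-grow step):
sudo resize2fs /dev/nvme1n2
```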
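For the VPC peering item under `## AWS`, a minimal AWS CLI sketch of adding one route per CIDR associated with the peer VPC; all IDs and the CIDR below are placeholders:

```sh
# List every CIDR block associated with the peer (requester) VPC;
# there may be more than one:
aws ec2 describe-vpcs --vpc-ids vpc-0123456789abcdef0 \
  --query 'Vpcs[0].CidrBlockAssociationSet[].CidrBlock' --output text

# Add a route for each returned CIDR to your route table, pointing at the
# peering connection:
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 10.1.0.0/16 \
  --vpc-peering-connection-id pcx-0123456789abcdef0
```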
