This case reproduces a scenario in which an empty container node is not deleted after it expires. For evaluation convenience, we sped up the relevant procedures in the code.
We assume the hostnames of the 5 nodes are NODE1, NODE2, NODE3, NODE4, and NODE5. Replace the hostnames in the commands with your own.
Set up the cluster configuration:
cd zookeeper
cp conf/zoo_sample.cfg conf/zoo.cfg
In zookeeper/conf/zoo.cfg, append these lines:
server.0=NODE1:2110:3110
server.1=NODE2:2110:3110
server.2=NODE3:2110:3110
server.3=NODE4:2110:3110
server.4=NODE5:2110:3110
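For reference, after the append the tail of conf/zoo.cfg should look roughly like this; the first five settings come from zoo_sample.cfg, and their exact defaults may vary slightly between releases:
# carried over from zoo_sample.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/tmp/zookeeper
clientPort=2181
# appended ensemble definition
server.0=NODE1:2110:3110
server.1=NODE2:2110:3110
server.2=NODE3:2110:3110
server.3=NODE4:2110:3110
server.4=NODE5:2110:3110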
By default we set the ZooKeeper data dir to /tmp/zookeeper, so we add an id file on each host. Run the matching command on each individual host machine, one at a time (to exit synchronized-input mode in tmux, press ctrl-b and then enter :set synchronize-panes off):
# for NODE1 (each node should execute only the one command matching its hostname)
echo 0 > /tmp/zookeeper/myid
#for NODE2
echo 1 > /tmp/zookeeper/myid
#for NODE3
echo 2 > /tmp/zookeeper/myid
#for NODE4
echo 3 > /tmp/zookeeper/myid
#for NODE5
echo 4 > /tmp/zookeeper/myid
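Alternatively, if you have passwordless ssh from one machine to all five nodes, a small convenience loop can write every myid file in one pass. This is only a sketch; the NODE1 through NODE5 hostnames and the ssh setup are assumptions:
# sketch: writes each node's id into /tmp/zookeeper/myid over ssh
id=0
for host in NODE1 NODE2 NODE3 NODE4 NODE5; do
  ssh "$host" "mkdir -p /tmp/zookeeper && echo $id > /tmp/zookeeper/myid"
  id=$((id + 1))
done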
Run these commands on all nodes:
experiments/reproduce/ZK-3546/install_ZK-3546.sh [path_to_OathKeeper] [path_to_ZooKeeper]
./run_engine.sh install conf/samples/zk-3.6.1.properties zookeeper
mkdir inv_prod_input && cp -r inv_verify_output_bk/ZK-* ./inv_prod_input
experiments/reproduce/ZK-3546/trigger_ZK-3546.sh [path_to_OathKeeper] [path_to_ZooKeeper] NODE5
Note that NODE5 should be the output of running hostname -s on your ZooKeeper leader node, which is usually the host with the highest id.
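If you are unsure which node is the leader, bin/zkServer.sh status reports each server's mode. A minimal sketch that queries all five nodes, assuming passwordless ssh and that zookeeper sits in each node's home directory:
# prints "NODEx: Mode: leader" for exactly one host
for host in NODE1 NODE2 NODE3 NODE4 NODE5; do
  echo "$host: $(ssh "$host" 'cd zookeeper && bin/zkServer.sh status 2>&1 | grep Mode')"
done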
Wait about 20 seconds. On the leader node, you should see our client create a container node; after some sleep, the query still returns the container node:
[zk: localhost:2181(CONNECTED) 0] ls /
[q1, zookeeper]
If /q1 still exists in the listing, the reproduction is successful.
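The same check can also be run non-interactively from the leader node, since zkCli.sh accepts a one-shot command on its command line:
bin/zkCli.sh -server localhost:2181 ls /
If q1 is still listed after the wait, the empty container node was never cleaned up and the bug is reproduced.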
Check zookeeper/logs/zookeeper-*.out for the detection report on failed invariants. If any invariant raises an alert, the detection is successful.
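A quick way to scan for those alerts is to grep the logs. The keyword below is a guess at the report wording, not confirmed against OathKeeper's output, so adjust it to match what you actually see:
# "violat" matches e.g. "violated"/"violation"; an assumed keyword
grep -i "violat" zookeeper/logs/zookeeper-*.out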
Finally, stop the server and remove the data on each node:
cd zookeeper
bin/zkServer.sh stop
rm -r /tmp/zookeeper/version-2/
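To stop and clean all five nodes in one pass, a similar sketch works, again assuming passwordless ssh and that zookeeper lives in each node's home directory:
# stop the server and wipe the data dir on every node
for host in NODE1 NODE2 NODE3 NODE4 NODE5; do
  ssh "$host" 'cd zookeeper && bin/zkServer.sh stop; rm -r /tmp/zookeeper/version-2/'
done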