Not sure how helpful it would be, but if we consider crmsh as the main cluster tool, then it might be useful.
An example:
$ dlm_tool ls
dlm lockspaces
name lvm_clvg0
id 0x45d1d4f1
flags 0x00000000
change member 1 joined 1 remove 0 failed 0 seq 1,1
members 1
name lvm_global
id 0x12aabd2d
flags 0x00000000
change member 2 joined 1 remove 0 failed 0 seq 2,2
members 1 2
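If crmsh were to pick this up, the listing above is already easy to consume programmatically. Here is a minimal Python sketch (crmsh itself is written in Python) that assumes only the key/value layout visible in this transcript; the helper name dlm_lockspaces is made up for illustration:

import subprocess

def dlm_lockspaces():
    # Parse 'dlm_tool ls' into one dict per lockspace. This assumes the
    # field layout seen above (name, id, flags, change, members); it is
    # a sketch, not a claim about a stable output format.
    out = subprocess.run(["dlm_tool", "ls"],
                         capture_output=True, text=True, check=True).stdout
    spaces, current = [], None
    for line in out.splitlines():
        parts = line.split(None, 1)
        if len(parts) != 2:
            continue
        key, value = parts
        if key == "name":           # each lockspace block starts with 'name'
            current = {"name": value}
            spaces.append(current)
        elif current is not None:   # attach the remaining fields to it
            current[key] = value
    return spaces

for ls in dlm_lockspaces():
    print(ls["name"], "members:", ls.get("members", ""))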
After adding a "shared" VG, that is, one which is activated on both nodes (e.g. for OCFS2), this happens:
$ crm configure show clvg0
primitive clvg0 LVM-activate \
params vgname=clvg0 vg_access_mode=lvmlockd activation_mode=shared \
op start timeout=90s interval=0 \
op stop timeout=90s interval=0 \
op monitor interval=90s timeout=90s
# see lvm_clvg0 has two members now!
$ dlm_tool ls
dlm lockspaces
name lvm_clvg0
id 0x45d1d4f1
flags 0x00000000
change member 2 joined 1 remove 0 failed 0 seq 1,1
members 1 2
name lvm_global
id 0x12aabd2d
flags 0x00000000
change member 2 joined 1 remove 0 failed 0 seq 1,1
members 1 2
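With a helper like the dlm_lockspaces() sketch above, a tool could check exactly this: after the shared VG is activated on both nodes, its lockspace should list both member ids.

spaces = {ls["name"]: ls for ls in dlm_lockspaces()}
# per the listing above, lvm_clvg0 should now have members '1 2'
assert spaces["lvm_clvg0"]["members"].split() == ["1", "2"]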
# note the 'rrp_mode = "passive"' and 'protocol' lines in the dump below
$ dlm_tool dump
5674 config file log_debug = 1 cli_set 0 use 1
5674 dlm_controld 4.1.0 started
5674 our_nodeid 1
5674 node_config 1
5674 node_config 2
5674 found /dev/misc/dlm-control minor 124
5674 found /dev/misc/dlm-monitor minor 123
5674 found /dev/misc/dlm_plock minor 122
5674 /sys/kernel/config/dlm/cluster/comms: opendir failed: 2
5674 /sys/kernel/config/dlm/cluster/spaces: opendir failed: 2
5674 set log_debug 1
5674 set mark 0
5674 cmap totem.rrp_mode = 'passive'
5674 set protocol 1
5674 set /proc/sys/net/core/rmem_default 4194304
5674 set /proc/sys/net/core/rmem_max 4194304
5674 set recover_callbacks 1
5674 cmap totem.cluster_name = 'jb155sapqe'
5674 set cluster_name jb155sapqe
5674 /dev/misc/dlm-monitor fd 13
5674 cluster quorum 1 seq 270 nodes 2
5674 cluster node 1 added seq 270
5674 set_configfs_node 1 192.168.253.100 local 1 mark 0
5674 cluster node 2 added seq 270
5674 set_configfs_node 2 192.168.253.101 local 0 mark 0
5674 cpg_join dlm:controld ...
5674 setup_cpg_daemon 15
5674 dlm:controld conf 1 1 0 memb 1 join 1 left 0
5674 daemon joined 1
5674 dlm:controld ring 1:270 2 memb 1 2
5674 receive_protocol 1 max 3.1.1.0 run 0.0.0.0
5674 daemon node 1 prot max 0.0.0.0 run 0.0.0.0
5674 daemon node 1 save max 3.1.1.0 run 0.0.0.0
5674 set_protocol member_count 1 propose daemon 3.1.1 kernel 1.1.1
5674 receive_protocol 1 max 3.1.1.0 run 3.1.1.0
5674 daemon node 1 prot max 3.1.1.0 run 0.0.0.0
5674 daemon node 1 save max 3.1.1.0 run 3.1.1.0
5674 run protocol from nodeid 1
5674 daemon run 3.1.1 max 3.1.1 kernel run 1.1.1 max 1.1.1
5674 plocks 16
5674 receive_protocol 1 max 3.1.1.0 run 3.1.1.0
5674 send_fence_clear 1 fipu
5674 receive_fence_clear from 1 for 1 result -61 flags 1
5674 fence_in_progress_unknown 0 all_fipu
5815 dlm:controld conf 2 1 0 memb 1 2 join 2 left 0
5815 daemon joined 2
5815 receive_protocol 2 max 3.1.1.0 run 0.0.0.0
5815 daemon node 2 prot max 0.0.0.0 run 0.0.0.0
5815 daemon node 2 save max 3.1.1.0 run 0.0.0.0
5815 receive_protocol 1 max 3.1.1.0 run 3.1.1.0
5815 receive_fence_clear from 1 for 2 result 0 flags 6
5815 receive_protocol 2 max 3.1.1.0 run 3.1.1.0
5815 daemon node 2 prot max 3.1.1.0 run 0.0.0.0
5815 daemon node 2 save max 3.1.1.0 run 3.1.1.0
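The totem settings that dlm_controld echoes here (rrp_mode, cluster_name) could be scraped back out of the dump in the same spirit. A hedged sketch, matching only the "cmap totem.* = '...'" lines in the log format shown above; the function name is again made up:

import re
import subprocess

def totem_settings_from_dump():
    # Pull the "cmap totem.<key> = '<value>'" lines out of 'dlm_tool dump',
    # e.g. "5674 cmap totem.rrp_mode = 'passive'" in the log above.
    out = subprocess.run(["dlm_tool", "dump"],
                         capture_output=True, text=True, check=True).stdout
    pattern = re.compile(r"cmap (totem\.\S+) = '([^']*)'")
    found = {}
    for line in out.splitlines():
        m = pattern.search(line)
        if m:
            found[m.group(1)] = m.group(2)
    return found

print(totem_settings_from_dump())
# e.g. {'totem.rrp_mode': 'passive', 'totem.cluster_name': 'jb155sapqe'}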
Hmm, I see what you mean. However, DLM isn't versatile enough for the pacemaker/corosync cluster stack. Off the top of my head, I can't think of a suitable user interface to integrate it.