Post by account_disabled on Feb 22, 2024 0:12:01 GMT -7
We have reached the end of this series of articles about DRBD, and all that remains for this last article is to review (very briefly) the final configuration used for the requested deployment. Let's pick up where we left off and explain that final configuration. Following the premises laid out in the previous articles, especially in Part 3, the cluster was configured with two nodes.
Each virtual machine is created as a cluster resource and assigned a preferred node. However, if that node fails and the machine fails over, the machine does not fail back once the primary node comes back; it remains on the secondary node. This avoids moving the machine twice because of a single failure: it migrates when the failure happens, and outside working hours it can be migrated back manually to minimize the impact. The configuration partition is replicated by DRBD and mounted on both servers. The virtual hard disk partition is also replicated by DRBD and mounted on both servers simultaneously (OCFS2 is required for this, otherwise simultaneous mounting is not possible). This partition is presented to the Linux hosts as an iSCSI device with a fixed IP.
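As a rough sketch of how this "preferred node, but no automatic failback" behaviour could be expressed in a Pacemaker cluster with the crm shell (the node names node1/node2, the resource p_vm01 and all scores are invented for illustration; the article does not show the actual configuration):

    # Hypothetical crm shell snippet (all names invented)
    primitive p_vm01 ocf:heartbeat:VirtualDomain \
        params config="/etc/libvirt/qemu/vm01.xml" \
        op monitor interval="30s"
    # Prefer node1 with a modest score...
    location l_vm01_prefers_node1 p_vm01 100: node1
    # ...but give resources a higher stickiness, so that after a failover
    # the machine stays on node2 instead of failing back when node1 returns.
    rsc_defaults resource-stickiness="200"

The usual trick is that the stickiness score (200) outweighs the location preference (100), which is the standard Pacemaker way of allowing failover without automatic failback.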
Additionally, certain rules are enforced in the cluster configuration (a sketch of these ordering rules follows below):
- To mount a DRBD partition, the Linux file system and the network services must already be up. If either of the two is not working, the partition is not mounted.
- To create an iSCSI device and assign it an IP address, the DRBD partition must already be mounted. If it cannot be mounted, the iSCSI resources are not created.
- To start a virtual machine, the DRBD partition, the iSCSI resources and the IP must already be up. If they cannot be started, the virtual machine does not start.
- If a virtual machine is not running on any node, it is started on its preferred node by default. If it has been migrated to the secondary node because of a failure, it stays there and does not fail back.
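These dependencies map naturally onto Pacemaker ordering constraints. The following crm shell lines are only a sketch; every name and parameter (the vmdisk resource, the iSCSI IQN, the IP address) is invented for illustration and is not taken from the article, and p_vm01 refers to the primitive from the previous sketch:

    # Hypothetical ordering constraints (crm shell, names invented)
    primitive p_drbd_vmdisk ocf:linbit:drbd \
        params drbd_resource="vmdisk" \
        op monitor interval="30s"
    ms ms_drbd_vmdisk p_drbd_vmdisk \
        meta master-max="2" clone-max="2" notify="true"   # dual-primary for the OCFS2 partition
    primitive p_iscsi_target ocf:heartbeat:iSCSITarget \
        params iqn="iqn.2024-02.example:vmdisk"
    primitive p_ip_iscsi ocf:heartbeat:IPaddr2 \
        params ip="192.168.1.50" cidr_netmask="24"
    # The iSCSI target and its service IP only start once DRBD is promoted.
    order o_drbd_before_iscsi inf: ms_drbd_vmdisk:promote p_iscsi_target:start
    order o_iscsi_before_ip   inf: p_iscsi_target p_ip_iscsi
    # The virtual machine only starts once the iSCSI target and IP are up.
    order o_ip_before_vm      inf: p_ip_iscsi p_vm01

With mandatory ("inf:") ordering, Pacemaker refuses to start a resource whose prerequisite could not be started, which matches the "if it cannot be mounted/started, the next resource is not created" rules above.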
Regarding the DRBD configuration (an illustrative resource definition follows below):
- Replication is synchronous, with confirmation: a write is not considered complete on the first node until it has been acknowledged by the second node.
- There is one DRBD device per resource (so they can be stopped or started independently of each other), each on a different port, to separate the traffic per resource.
- If a node goes offline, the other node allows it to reconnect and resynchronize automatically for 5 minutes. If the disks have not resynchronized within those 5 minutes, they stay disconnected to avoid data loss, and resynchronization is then done manually, to keep absolute control over the operation.
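Purely as an illustration, a DRBD 8-style resource definition along those lines might look roughly like this. The device, backing disk, node names, addresses and ports are invented, and the exact option names, in particular how the 5-minute reconnection window is implemented, depend on the DRBD version in use, so treat the startup section as an assumption rather than the article's real configuration:

    resource vmdisk {
      protocol C;                    # synchronous: a write completes only after
                                     # the peer has acknowledged it
      device    /dev/drbd1;          # one device per resource...
      disk      /dev/sdb1;
      meta-disk internal;

      net {
        allow-two-primaries yes;     # needed for the OCFS2 dual-mount partition
      }

      startup {
        wfc-timeout 300;             # one possible way to bound automatic
                                     # reconnection/sync to roughly 5 minutes
                                     # (assumption, not from the article)
      }

      on node1 {
        address 192.168.1.10:7789;   # ...and a dedicated port per resource
      }
      on node2 {
        address 192.168.1.11:7789;
      }
    }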