What's New in Failover Clustering in Windows Server

Applies To: Windows Server 2012 R2, Windows Server 2012

This topic describes the Failover Clustering functionality that is new or changed in Windows Server 2012 R2 and Windows Server 2012.

Failover clusters provide high availability and scalability to many server workloads. These include server applications such as Microsoft Exchange Server, Hyper-V, Microsoft SQL Server, and file servers. The server applications can run on physical servers or virtual machines. In a failover cluster, if one or more of the clustered servers (nodes) fails, other nodes begin to provide service. This process is known as failover.

In this topic:

  • What's new in Failover Clustering in Windows Server 2012 R2

  • What's new in Failover Clustering in Windows Server 2012

What's new in Failover Clustering in Windows Server 2012 R2

In Windows Server 2012 R2, Failover Clustering offers enhanced support in the following areas.

Feature/Functionality | New or Improved | Description
Shared virtual hard disk (for guest clusters) | New | Enables you to use .vhdx files as shared storage in a guest cluster.
Virtual machine drain on shutdown | New | Enables a Hyper-V host to automatically live migrate running virtual machines if the computer is shut down.
Virtual machine network health detection | New | Enables a Hyper-V host to automatically live migrate virtual machines if a network disconnection occurs on a protected virtual network.
Optimized CSV placement policies | Improved | Distributes CSV ownership evenly across the failover cluster nodes.
Increased CSV resiliency | Improved | Multiple Server service instances per cluster node and CSV monitoring of the Server service provide greater resiliency.
CSV cache allocation | Improved | Increases the amount of RAM that you can allocate as CSV cache.
CSV diagnosability | Improved | Enables you to view the state of a CSV on a per node basis, and the reason for I/O redirection.
CSV interoperability | Improved | Adds CSV support for other Windows Server 2012 R2 features.
Deploy an Active Directory-detached cluster | New | Enables you to deploy a failover cluster with less dependency on Active Directory Domain Services.
Dynamic witness | New | Dynamically adjusts the witness vote based on the number of voting nodes in current cluster membership.
Quorum user interface improvements | Improved | Enables you to easily view the assigned quorum vote and the current quorum vote for each node in Failover Cluster Manager.
Force quorum resiliency | New | Enables automatic recovery in the case of a partitioned failover cluster.
Tie breaker for 50% node split | New | Enables one side of a cluster to continue to run in the case of a cluster split where neither side would normally have quorum.
Configure the Global Update Manager mode | New | Helps the cluster to continue to function if there is a delay with one or more nodes.
Cluster node health detection | Improved | Increases the resiliency to temporary network failures for virtual machines that are running on a Hyper-V cluster.
Turn off IPsec encryption for inter-node cluster communication | New | Helps prevent a cluster from being affected by high latency Group Policy updates.
Cluster dashboard | New | Provides a convenient way to check the health of all managed failover clusters in Failover Cluster Manager.

High availability virtual machine improvements

The following section provides a summary of new high availability functionality for virtual machines in Windows Server 2012 R2.

Shared virtual hard disk (for guest clusters)

You can now share a virtual hard disk file (in the .vhdx file format) between multiple virtual machines. You can use these .vhdx files as shared storage for a virtual machine failover cluster, also known as a guest cluster. For example, you can create shared .vhdx files for data disks and for the disk witness. (You would not use a shared .vhdx file for the operating system virtual hard disk.)

What value does this change add?

In Windows Server 2012, you could deploy guest clusters using shared storage that was provided by virtual Fibre Channel or iSCSI to the guest operating system. In these configurations, the underlying storage was exposed to the user of a virtual machine. In private or public cloud deployments, there is frequently a need to hide the details of the underlying fabric from the user or tenant administrator. Shared .vhdx storage provides that layer of abstraction.

This change also enables easier deployment of guest cluster configurations. A shared .vhdx file configuration is easier to deploy than solutions like virtual Fibre Channel or iSCSI. When you configure a virtual machine to use a shared .vhdx file, you do not have to make storage configuration changes such as zoning and LUN masking.

What works differently?

In Windows Server 2012, shared virtual hard disks are not available. Windows Server 2012 R2 adds this functionality.

In Windows Server 2012 R2, virtual SCSI disks now appear as virtual SAS disks when you add a SCSI hard disk to a virtual machine. This includes both shared and non-shared virtual hard disk files. For example, if you view the disk in Server Manager, the bus type is listed as SAS.

For more information about shared virtual hard disks, see Virtual hard disk sharing and Deploy a Guest Cluster Using a Shared Virtual Hard Disk.
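Concretely, a shared data disk is a fixed-size .vhdx on shared storage (such as a CSV) that is attached to each guest node's virtual SCSI controller with persistent reservation support enabled. A minimal Windows PowerShell sketch, run on the Hyper-V host (the VM names and CSV path are illustrative):

```powershell
# Create a fixed-size data disk on a Cluster Shared Volume (path is illustrative).
New-VHD -Path "C:\ClusterStorage\Volume1\SharedData.vhdx" -Fixed -SizeBytes 30GB

# Attach the same .vhdx to each guest cluster node's virtual SCSI controller.
# -SupportPersistentReservations marks the disk as shared so the guest cluster
# can arbitrate ownership with SCSI persistent reservations.
foreach ($vm in "GuestNode1", "GuestNode2") {
    Add-VMHardDiskDrive -VMName $vm -ControllerType SCSI `
        -Path "C:\ClusterStorage\Volume1\SharedData.vhdx" `
        -SupportPersistentReservations
}
```

Inside the guest operating systems, the disk then appears as a shared SAS disk that you add to the guest cluster as ordinary cluster storage.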

Virtual machine drain on shutdown

In Windows Server 2012 R2, if you shut down a Hyper-V failover cluster node without first putting the node into maintenance mode to drain any running clustered roles, the cluster now automatically live migrates all running virtual machines to another host before the computer shuts down.

Note

Make sure that there is not more than one virtual machine in a virtual machine clustered role. Starting with Windows Server 2012, we do not support this configuration. An example of this scenario is where multiple virtual machines have files on a common physical disk that is not part of Cluster Shared Volumes. A single virtual machine per clustered role improves the management experience and the functionality of virtual machines in a clustered environment, such as virtual machine mobility.

What value does this change add?

This change provides a safety mechanism to help ensure that a server shutdown (or any action that shuts down the Cluster service) does not cause unplanned downtime for running virtual machines. This increases the availability of applications that run within the guest operating system.

Important

We still recommend that you put a node into maintenance mode or move all virtual machines to other nodes before you shut down a cluster node. This is the safest way to drain any running clustered roles.

What works differently?

In Windows Server 2012, if you shut down a cluster node without first draining the node, the virtual machines are put into a saved state, then moved to other nodes and resumed. This means that there is an interruption to the availability of the virtual machines. If it takes too long to save state the virtual machines, they may be turned off, and then restarted on another node. In Windows Server 2012 R2, the cluster automatically live migrates all running virtual machines before shutdown.

To enable or disable this functionality, configure the DrainOnShutdown cluster common property. By default, this property is enabled (set to a value of "1").

To view the property value, start Windows PowerShell as an administrator, and then enter the following command:

              (Get-Cluster).DrainOnShutdown                          
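For example, to turn the behavior off, or to drain a node explicitly before planned maintenance (the node name is illustrative):

```powershell
# Disable automatic drain on shutdown (1 = enabled, the default; 0 = disabled).
(Get-Cluster).DrainOnShutdown = 0

# The recommended, explicit alternative: drain the node before maintenance,
# then resume it when maintenance is complete.
Suspend-ClusterNode -Name "Node1" -Drain
Resume-ClusterNode -Name "Node1" -Failback Immediate
```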

Virtual machine network health detection

Network health detection and recovery is now available at the virtual machine level for a Hyper-V host cluster. If a network disconnection occurs on a protected virtual network, the cluster live migrates the affected virtual machines to a host where that external virtual network is available. For this to occur there must be multiple network paths between cluster nodes.

Note

If there are no available networks that connect to other nodes of the cluster, the cluster removes the node from cluster membership, transfers ownership of the virtual machine files, and then restarts the virtual machines on another node.

What value does this change add?

This change increases the availability of virtual machines when there is a network issue. If live migration occurs, there is no downtime because live migration maintains the session state of the virtual machine.

What works differently?

In Windows Server 2012, if there is a network disconnection at the virtual machine level, the virtual machine continues to run on that computer even though the virtual machine may not be available to users.

In Windows Server 2012 R2, there is now a Protected network check box in the virtual machine settings. This setting is available in the advanced features of the network adapter. By default, the setting is enabled. You can configure this setting on a per network basis for each virtual machine. Therefore, if there is a lower priority network such as one used for test or for backup, you can choose not to live migrate the virtual machine if those networks experience a network disconnection.

Figure 1. Protected network setting
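The same setting can be scripted. In the Hyper-V PowerShell module the check box surfaces on the virtual network adapter, expressed in the inverse sense; a sketch, assuming a virtual machine named VM1 with an adapter named "Backup Adapter":

```powershell
# Exclude a lower priority (for example, backup) network adapter from
# protection, so a disconnection on it does not trigger live migration.
Set-VMNetworkAdapter -VMName "VM1" -Name "Backup Adapter" -NotMonitoredInCluster $true

# Verify: ClusterMonitored is False for adapters that are not protected.
Get-VMNetworkAdapter -VMName "VM1" | Format-Table Name, ClusterMonitored
```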

Cluster Shared Volume (CSV) improvements

The following section provides a summary of new CSV functionality in Windows Server 2012 R2.

Optimized CSV placement policies

CSV ownership is now automatically distributed and rebalanced across the failover cluster nodes.

What value does this change add?

In a failover cluster, one node is considered the owner or "coordinator node" for a CSV. The coordinator node owns the physical disk resource that is associated with a logical unit (LUN). All I/O operations that are specific to the file system occur through the coordinator node. Distributed CSV ownership increases disk performance because it helps to load balance the disk I/O.

Because CSV ownership is now balanced across the cluster nodes, one node will not own a disproportionate number of CSVs. Therefore, if a node fails, the transition of CSV ownership to another node is potentially more efficient.

This functionality is useful for a Scale-Out File Server that uses storage spaces because it ensures that storage spaces ownership is distributed.

What works differently?

In Windows Server 2012, there is no automatic rebalancing of coordinator node assignment. For example, all LUNs could be owned by the same node. In Windows Server 2012 R2, CSV ownership is evenly distributed across the failover cluster nodes based on the number of CSVs that each node owns.

Additionally in Windows Server 2012 R2, ownership is automatically rebalanced when there are conditions such as a CSV failover, a node rejoins the cluster, you add a new node to the cluster, you restart a cluster node, or you start the failover cluster after it has been shut down.
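You can inspect how ownership is currently spread, and move coordination of a CSV by hand if needed (the disk and node names are illustrative):

```powershell
# Show which node currently coordinates each CSV.
Get-ClusterSharedVolume | Format-Table Name, OwnerNode

# Manually move coordination of one CSV to another node.
Move-ClusterSharedVolume -Name "Cluster Disk 1" -Node "Node2"
```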

Increased CSV resiliency

Windows Server 2012 R2 includes the following improvements to increase CSV resiliency:

  • Multiple Server service instances per failover cluster node. There is the default instance that handles incoming traffic from Server Message Block (SMB) clients that access regular file shares, and a second CSV instance that handles only inter-node CSV traffic. This inter-node traffic consists of metadata access and redirected I/O traffic.

  • CSV health monitoring of the Server service

What value does this change add?

A CSV uses SMB as a transport for I/O forwarding between the nodes in the cluster, and for the orchestration of metadata updates. If the Server service becomes unhealthy, this can impact I/O performance and the ability to access storage. Because a cluster node now has multiple Server service instances, this provides greater resiliency for a CSV if there is an issue with the default instance. Additionally, this change improves the scalability of inter-node SMB traffic between CSV nodes.

If the Server service becomes unhealthy, it can affect the ability of the CSV coordinator node to accept I/O requests from other nodes and to perform the orchestration of metadata updates. In Windows Server 2012 R2, if the Server service becomes unhealthy on a node, CSV ownership automatically transitions to another node to ensure greater resiliency.

What works differently?

In Windows Server 2012, there was only one instance of the Server service per node. Also, there was no monitoring of the Server service.

CSV cache allocation

In Windows Server 2012 R2, you can now allocate a higher percentage of the total physical memory to the CSV cache. CSV cache enables the server to use system memory as a write-through cache.

What value does this change add?

Increasing the CSV cache limit is especially useful for Scale-Out File Server scenarios. Because Scale-Out File Servers are not typically memory constrained, you can achieve large performance gains by using the extra memory for the CSV cache.

Tip

We recommend that you enable the CSV cache for all clustered Hyper-V and Scale-Out File Server deployments, with greater allocation for a Scale-Out File Server deployment.

What works differently?

In Windows Server 2012, you could allocate only 20% of the total physical RAM to the CSV cache. You can now allocate up to 80%.

In Windows Server 2012, the CSV cache was disabled by default. In Windows Server 2012 R2, it is enabled by default. Also, the name of the private property of the cluster Physical Disk resource has been changed from CsvEnableBlockCache to EnableBlockCache.

You must still allocate the size of the block cache to reserve. To do this, set the value of the BlockCacheSize cluster common property. (The name of this property was changed from SharedVolumeBlockCacheSizeInMB in Windows Server 2012.) For more information, see Enable the CSV cache for read-intensive workloads.
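For example, to reserve 512 MB per node for the CSV cache (the value is in megabytes; choose a size that suits your workload):

```powershell
# Reserve 512 MB of RAM on each node for the CSV cache.
(Get-Cluster).BlockCacheSize = 512

# Confirm the setting.
(Get-Cluster).BlockCacheSize
```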

CSV diagnosability

You can now view the state of a CSV on a per node basis. For example, you can see whether I/O is direct or redirected, or whether the CSV is unavailable. If a CSV is in I/O redirected mode, you can also view the reason.

What value does this change add?

This change enables you to optimize your cluster configuration because you can easily determine the state of a CSV.

What works differently?

You can use the new Windows PowerShell cmdlet Get-ClusterSharedVolumeState to view the state information (such as direct or redirected) and the redirection reason. For the state information, see the StateInfo property. For the I/O redirection reason, see the FileSystemRedirectedIOReason property and the BlockRedirectedIOReason property.
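A typical invocation, assuming a node named Node1:

```powershell
# View per node CSV state, including any I/O redirection and its reason.
Get-ClusterSharedVolumeState -Node "Node1" |
    Format-List Name, Node, StateInfo, FileSystemRedirectedIOReason, BlockRedirectedIOReason
```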

CSV interoperability

In Windows Server 2012 R2, CSV functionality has been enhanced to include support for the following features:

  • Resilient File System (ReFS)

  • Deduplication

  • Parity storage spaces

  • Tiered storage spaces

  • Storage Spaces write-back caching

What value does this change add?

This added support expands the scenarios in which you can use CSVs, and enables you to take advantage of the efficiencies that are introduced in these features.

What works differently?

ReFS, deduplication, and parity storage spaces were not supported by CSVs in Windows Server 2012. Tiered storage spaces and Storage Spaces write-back caching are new in Windows Server 2012 R2.

Deploy an Active Directory-detached cluster

In Windows Server 2012 R2, you can deploy a failover cluster without dependencies on Active Directory Domain Services (AD DS) for network names. This is referred to as an Active Directory-detached cluster. When you deploy a cluster by using this method, the cluster network name (also known as the administrative access point) and network names for any clustered roles with client access points are registered in Domain Name System (DNS). However, no computer objects are created for the cluster in AD DS. This includes both the computer object for the cluster itself (also known as the cluster name object or CNO), and computer objects for any clustered roles that would typically have client access points in AD DS (also known as virtual computer objects or VCOs).

Note

The cluster nodes must still be joined to an Active Directory domain.

What value does this change add?

With this deployment method, you can create a failover cluster without the previously required permissions to create computer objects in AD DS or the need to request that an Active Directory administrator pre-stage the computer objects in AD DS. Also, you do not have to manage and maintain the cluster computer objects for the cluster. For example, you can avoid the possible issue where an Active Directory administrator accidentally deletes the cluster computer object, which impacts the availability of cluster workloads.

What works differently?

The option to create an Active Directory-detached cluster is not available in Windows Server 2012. In Windows Server 2012, you can only deploy a failover cluster where the network names for the cluster are in both DNS and AD DS.

An Active Directory-detached cluster uses Kerberos authentication for intra-cluster communication. However, when authentication against the cluster network name is required, the cluster uses NTLM authentication.

Important

We do not recommend this deployment method for any scenario that requires Kerberos hallmark.

To deploy this type of cluster, you must use Windows PowerShell. For deployment information and details about what is supported and not supported with this deployment method, see Deploy an Active Directory-Detached Cluster.
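The key is the AdministrativeAccessPoint parameter of New-Cluster; a sketch with illustrative node names and address:

```powershell
# Create an Active Directory-detached cluster: the administrative access
# point is registered in DNS only, and no computer objects are created in AD DS.
New-Cluster -Name "MyCluster" -Node "Node1", "Node2" `
    -StaticAddress "192.168.1.50" -NoStorage `
    -AdministrativeAccessPoint Dns
```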

Quorum improvements

The following section provides a summary of improvements to cluster quorum functionality in Windows Server 2012 R2.

Dynamic witness

In Windows Server 2012 R2, if the cluster is configured to use dynamic quorum (the default), the witness vote is also dynamically adjusted based on the number of voting nodes in current cluster membership. If there is an odd number of votes, the quorum witness does not have a vote. If there is an even number of votes, the quorum witness has a vote.

The quorum witness vote is also dynamically adjusted based on the state of the witness resource. If the witness resource is offline or failed, the cluster sets the witness vote to "0."

What value does this change add?

Dynamic witness significantly reduces the risk that the cluster will go down because of witness failure. The cluster decides whether to use the witness vote based on the number of voting nodes that are available in the cluster.

This change also greatly simplifies quorum witness configuration. You no longer have to determine whether to configure a quorum witness because the recommendation in Windows Server 2012 R2 is to always configure a quorum witness. The cluster automatically determines when to use it.

Important

In Windows Server 2012 R2, we recommend that you always configure a quorum witness.

What works differently?

In Windows Server 2012, you had to determine when to configure a witness, and had to manually adjust the quorum configuration if node membership changed to keep the total number of votes at an odd number. This included every time you added or evicted cluster nodes.

Now, you no longer need to manually adjust the quorum configuration if node membership changes. By default, the cluster determines quorum management options, including the quorum witness.

Windows Server 2012 R2 also includes the new WitnessDynamicWeight cluster common property that you can use to view the quorum witness vote.

To view the property value, start Windows PowerShell as an administrator, and then enter the following command:

              (Get-Cluster).WitnessDynamicWeight                          

A value of "0" indicates that the witness does not have a vote. A value of "1" indicates that the witness has a vote.

Quorum user interface improvements

In Windows Server 2012 R2, you can now view the assigned quorum vote and the current quorum vote for each cluster node in the Failover Cluster Manager user interface (UI). Also, the quorum mode terminology has been simplified.

What value does this change add?

You can now easily determine in the UI which nodes have a vote, and whether that vote is active. When you click Nodes in Failover Cluster Manager, you can see the vote assignments.

Figure 2. Node vote assignment

What works differently?

In Windows Server 2012, you had to run the Validate Quorum Configuration validation report or use Windows PowerShell to view the vote status. You can still use these methods in Windows Server 2012 R2.

In Windows Server 2012 R2, the Validate Quorum Configuration report and the parameters for the Set-ClusterQuorum Windows PowerShell cmdlet are simplified to no longer use quorum mode terminology such as node majority (no witness), node majority with witness (disk or file share), or no majority (disk witness only).

Force quorum resiliency

In Windows Server 2012 R2, after there is an event where you manually force quorum to start the cluster (for example, you use the /fq switch when you start the Cluster service), the cluster automatically detects any partitions when connectivity is restored. The partition that you started with force quorum is deemed authoritative. When failover cluster communication is resumed, the partitioned nodes automatically restart the Cluster service, and rejoin the cluster. The cluster is brought back into a single view of membership.

What value does this change add?

This change enables automatic recovery in the case of a partitioned failover cluster where a subset of nodes was started by forcing quorum. A partitioned failover cluster is also known as a split cluster or a "split-brain" cluster.

Note

A partitioned cluster occurs when a cluster breaks into subsets that are not aware of each other. For example, you have a multi-site cluster with three nodes in one site, and two nodes in the other. A network issue disrupts cluster communication. The site with three nodes stays running because it has quorum. The two node site without quorum shuts down. You determine that the site with three nodes does not have external connectivity, while the two node site does. Therefore, to restore service to users, you use the /fq switch to start the two node site. When network connectivity is restored, you have a partitioned cluster.

What works differently?

If there is a partitioned cluster in Windows Server 2012, after connectivity is restored, you must manually restart any partitioned nodes that are not part of the forced quorum subset with the /pq switch to prevent quorum. Ideally, you should do this as quickly as possible.

In Windows Server 2012 R2, both sides have a view of cluster membership and they will automatically reconcile when connectivity is restored. The side that you started with force quorum is deemed authoritative and the partitioned nodes automatically restart with the /pq switch to prevent quorum.
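In Windows PowerShell, these switches correspond to parameters of Start-ClusterNode (node names are illustrative):

```powershell
# Force the cluster to start on this node even without quorum (equivalent to /fq).
Start-ClusterNode -Name "Node1" -FixQuorum

# Windows Server 2012 only: manually restart a node from the other partition
# with prevent quorum (equivalent to /pq). In Windows Server 2012 R2 the
# partitioned nodes restart this way automatically.
Start-ClusterNode -Name "Node4" -PreventQuorum
```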

Tie breaker for 50% node split

As an enhancement to dynamic quorum functionality, a cluster can now dynamically adjust a running node's vote to keep the total number of votes at an odd number. This functionality works seamlessly with dynamic witness. To maintain an odd number of votes, a cluster will first adjust the quorum witness vote through dynamic witness. However, if a quorum witness is not available, the cluster can adjust a node's vote. For example:

  1. You have a six node cluster with a file share witness. The cluster stretches across two sites with three nodes in each site. The cluster has a total of seven votes.

  2. The file share witness fails. Because the cluster uses dynamic witness, the cluster automatically removes the witness vote. The cluster now has a total of six votes.

  3. To maintain an odd number of votes, the cluster randomly picks a node to remove its quorum vote. One site now has two votes, and the other site has three.

  4. A network issue disrupts communication between the two sites. Therefore, the cluster is evenly split into two sets of three nodes each. The partition in the site with two votes goes down. The partition in the site with three votes continues to function.

In addition to this automatic functionality, there is a new cluster common property that you can use to determine which site survives if there is a 50% node split where neither site has quorum. Instead of the cluster randomly picking a node to remove its quorum vote, you can set the LowerQuorumPriorityNodeID property to predetermine which node will have its vote removed.

What value does this change add?

With this functionality, one side of the cluster continues to run in the case of a 50% node split where neither side would normally have quorum.

By optionally setting the LowerQuorumPriorityNodeID property, you can control which side stays up in this scenario. For example, you can specify that the primary site stays running and that a disaster recovery site shuts down.

What works differently?

In Windows Server 2012, if there is a 50% split where neither site has quorum, both sides will go down.

In Windows Server 2012 R2, you can assign the LowerQuorumPriorityNodeID cluster common property to a cluster node in the secondary site so that the primary site stays running. Set this property on only one node in the site.

To set the property, start Windows PowerShell as an administrator, and then enter the following command, where "1" is the example node ID for a node in the site that you consider less critical:

              (Get-Cluster).LowerQuorumPriorityNodeID = 1                          

Tip

To determine a node ID, start Windows PowerShell as an administrator, and then enter the following command, where "Node1" represents the name of a cluster node: (Get-ClusterNode –Name "Node1").Id. You can also use the following command to return all the cluster node names, node IDs, and the node state: Get-ClusterNode | ft

Configure the Global Update Manager mode

When a state change occurs, such as when a cluster resource is taken offline, the nodes in a failover cluster must be notified of the change and acknowledge it before the cluster commits the change to the database. The Global Update Manager is responsible for managing these cluster database updates. In Windows Server 2012 R2, you can configure how the cluster manages global updates. By default, the Global Update Manager uses the following modes for failover cluster workloads in Windows Server 2012 R2:

  • All (write) and Local (read). In this mode, all cluster nodes must receive and process the update before the cluster considers the change committed. When a database read request occurs, the cluster reads the data from the cluster database on the local node. In this case, the local read is expected to be consistent because all nodes receive and process the updates. This is the default setting for all workloads besides Hyper-V.

    Note

    This is how global updates work for all workloads in Windows Server 2012.

  • Majority (read and write). In this new mode, a majority of the running cluster nodes must receive and process the update before the cluster commits the change to the database. When a database read request occurs, the cluster compares the latest timestamp from a majority of the running nodes, and uses the data with the latest timestamp. This is the default setting for Hyper-V failover clusters.

Note

There is also a new "Majority (write) and Local (read)" mode. However, this mode is not used by default for any workloads. See the "What works differently" section for more information.

What value does this change add?

The new configuration modes for Global Update Manager significantly improve cluster database performance in scenarios where there is significant network latency between the cluster nodes, for example with a stretch multi-site cluster. By association, this increases the performance of cluster workloads such as SQL Server or Exchange Server in these scenarios. Without this feature, the cluster database performs at the pace of the slowest node.

The new configuration modes can also help if there are delays that are associated with software or hardware issues. For example, a local registry update may be delayed on a node that has a hardware issue. By using a Global Update Manager mode that performs updates that are based on a majority of nodes, the cluster does not have to wait for all nodes to be notified of and acknowledge the state change before it is ready to process the next transaction.

What works differently?

In Windows Server 2012, you cannot configure the Global Update Manager mode. For all cluster workloads in Windows Server 2012, all cluster nodes must receive and process the update before the cluster considers the change committed. In Windows Server 2012 R2, you can configure the Global Update Manager mode, with three possible values. In Windows Server 2012 R2, the majority (read and write) mode is now the default mode for Hyper-V failover clusters.

You can configure the Global Update Manager mode by using the new DatabaseReadWriteMode cluster common property. To view the Global Update Manager mode, start Windows PowerShell as an administrator, and then enter the following command:

              (Get-Cluster).DatabaseReadWriteMode

The following table shows the possible values.

Value | Description
0 = All (write) and Local (read) | Default setting in Windows Server 2012 R2 for all workloads besides Hyper-V. All cluster nodes must receive and process the update before the cluster commits a change to the database. Database reads occur on the local node. Because the database is consistent on all nodes, there is no risk of out of date or "stale" data.
1 = Majority (read and write) | Default setting in Windows Server 2012 R2 for Hyper-V failover clusters. A majority of the cluster nodes must receive and process the update before the cluster commits the change to the database. For a database read, the cluster compares the latest timestamp from a majority of the running nodes, and uses the data with the latest timestamp.
2 = Majority (write) and Local (read) | A majority of the cluster nodes must receive and process the update before the cluster commits the change to the database. Database reads occur on the local node. Because the cluster does not compare the latest timestamp on a majority of nodes, the data may be out of date or "stale."

Warning

Do not use either of the majority modes (1 or 2) for scenarios that require strong consistency guarantees from the cluster database. For example, do not use these modes for a Microsoft SQL Server failover cluster that uses AlwaysOn availability groups, or for Microsoft Exchange Server.
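For example, to keep a cluster that hosts such workloads on the strongly consistent mode:

```powershell
# Set mode 0 = All (write) and Local (read), the strongly consistent default
# for non-Hyper-V workloads.
(Get-Cluster).DatabaseReadWriteMode = 0
```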

Cluster node health detection

By default, cluster nodes exchange heartbeats every second. The number of heartbeats that can be missed before failover occurs is known as the heartbeat threshold. In Windows Server 2012 R2, the default heartbeat threshold has been increased for Hyper-V failover clusters.

What value does this change add?

This change provides increased resiliency to temporary network failures for virtual machines that are running on a Hyper-V cluster. For example, you may not want the cluster to perform recovery actions if your network often experiences short term network interruptions. For an application that is running on a virtual machine, a short network failure is often fairly seamless because of the TCP reconnect window.

What works differently?

By default, in Windows Server 2012, a node is considered down if it does not respond within five seconds. In Windows Server 2012 R2, the default threshold value for a Hyper-V failover cluster has been increased to 10 seconds for cluster nodes in the same subnet, and 20 seconds for cluster nodes in different subnets.

The following table lists the default values in Windows Server 2012 R2.

Cluster Common Property Default for All Clustered Roles Except Hyper-V Default for Hyper-V Clustered Role
SameSubnetThreshold 5 seconds 10 seconds
CrossSubnetThreshold 5 seconds 20 seconds
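These settings can be inspected and tuned through the cluster common properties. As a sketch (assumes an elevated Windows PowerShell session with the FailoverClusters module): note that the threshold properties count missed heartbeats, so with the default one-second delay between heartbeats the values correspond to the seconds shown in the table.

```powershell
# View the heartbeat interval (delay, in milliseconds) and the number
# of missed heartbeats tolerated (threshold) for same- and cross-subnet
# communication.
Get-Cluster | Format-List *SubnetDelay, *SubnetThreshold

# Example: raise the same-subnet threshold to the Hyper-V default of 10
# missed heartbeats (10 seconds with the default 1-second delay).
(Get-Cluster).SameSubnetThreshold = 10
```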

Important

We recommend the following:

Turn off IPsec encryption for inter-node cluster communication

In Windows Server 2012 R2, you can now turn off Internet Protocol security (IPsec) encryption for inter-node cluster communication such as the cluster heartbeat.

What value does this change add?

The processing of high-latency Group Policy updates can cause Active Directory Domain Services (AD DS) to become temporarily unavailable. In this situation, because IPsec encryption relies on access to AD DS, IPsec encryption is interrupted until the updates are complete. If cluster communication uses IPsec encryption, this interruption prevents inter-node cluster communication (including heartbeat messages) from being sent. If the delay exceeds the heartbeat threshold for a cluster node, the cluster removes the node from cluster membership. If this occurs on multiple nodes at the same time, it could cause the cluster to lose quorum.

By turning off IPsec encryption for inter-node cluster communication, this traffic remains uninterrupted. Therefore, the ability of the cluster to provide high availability for its clustered roles is not affected by high-latency Group Policy updates.

What works differently?

You can now use the NetFTIPSecEnabled cluster common property to turn off IPsec encryption on port 3343 for inter-node cluster communication. By default, the NetFTIPSecEnabled setting is enabled (set to "1"). A value of "1" means that IPsec encryption for inter-node communication is enabled if there is an existing Group Policy setting that enforces IPsec.

To change the value to "0", which overrides any Group Policy setting and turns off IPsec encryption for inter-node cluster communication, start Windows PowerShell as an administrator, then enter the following command:

    (Get-Cluster).NetFTIPSecEnabled = 0

Warning

We recommend that you turn off IPsec encryption for inter-node cluster communication only if you experience issues because of high-latency Group Policy updates. If you do turn off the setting, make sure that you thoroughly test the change because it may affect cluster performance.

Cluster dashboard

Failover Cluster Manager now includes a cluster dashboard that enables you to quickly view the health status of all managed failover clusters. You can view the name of the failover cluster together with an icon that indicates whether the cluster is running, the number and status of clustered roles, the node status, and the event status.

What value does this change add?

If you manage multiple failover clusters, this dashboard provides a convenient way for you to quickly check the health of the failover clusters.

What works differently?

In Windows Server 2012, you had to click each failover cluster name to view status information. Now, when you click Failover Cluster Manager in the navigation tree, there is a Clusters dashboard in the center pane that shows all managed clusters.

Figure 3. Cluster dashboard

What's new in Failover Clustering in Windows Server 2012

In Windows Server 2012, Failover Clustering offers enhanced support in the following areas.

Feature/functionality New or improved Description
Cluster scalability Improved Scales to 64 nodes and 8,000 virtual machines per cluster
Management of large-scale clusters by using Server Manager and Failover Cluster Manager New Provides GUI tools to streamline management and operation of large-scale clusters
Management and mobility of clustered virtual machines and other clustered roles New Helps allocate cluster resources to clustered virtual machines and other clustered roles
Cluster Shared Volumes Improved Improves CSV setup and enhances security, performance, and file system availability for additional cluster workloads
Support for Scale-Out File Servers New Provides CSV storage and integrates with File Services features to support scalable, continuously available application storage
Cluster-Aware Updating New Applies software updates across the cluster nodes while maintaining availability
Virtual machine application monitoring and management New Extends clustered virtual machine monitoring to the applications that run in the clustered virtual machines
Cluster validation tests Improved Validates Hyper-V and CSV functionality and performs faster
Active Directory Domain Services integration Improved Increases cluster resiliency and supports a wider range of deployments
Quorum configuration and dynamic quorum Improved Simplifies quorum setup and increases the availability of the cluster in failure scenarios
Cluster upgrade and migration Improved Allows migration of virtual machines from Windows Server 2008 R2, migration to CSVs, and reuse of existing storage
Task Scheduler integration New Integrates Failover Clustering with additional server functionality
Windows PowerShell support Improved Allows scripting of Failover Clustering functionality that was introduced in Windows Server 2012

See also Removed or deprecated functionality.

Cluster scalability

Failover clusters in Windows Server 2012 can scale to a greater number of nodes and virtual machines than clusters in Windows Server 2008 R2, as shown in the following table:

Cluster maximum Windows Server 2012 Windows Server 2008 R2
Nodes 64 16
Virtual machines or clustered roles 8,000 (up to 1,024 per node) 1,000

Management of large-scale clusters by using Server Manager and Failover Cluster Manager

Server Manager and Failover Cluster Manager provide new capabilities in Windows Server 2012 to manage large-scale clusters.

Server Manager can discover and manage the nodes of the cluster. It enables remote multi-server management, remote role and feature installation, and the ability to start Failover Cluster Manager from the Server Manager GUI. For more information, see Manage Multiple, Remote Servers with Server Manager.

New Failover Cluster Manager features that simplify large-scale management of clustered virtual machines and other clustered roles include:

  • Search, filtering, and custom views. Administrators can manage and navigate large numbers of clustered virtual machines or other clustered roles.

  • Multiselect. Administrators can select a specific collection of virtual machines and then perform any needed operation (such as live migration, save, shutdown, or start).

  • Simplified live migration and quick migration of virtual machines and virtual machine storage. Live migration and quick migration are easier to perform.

  • Simpler configuration of Cluster Shared Volumes (CSVs). Configuration is a right-click from the Storage pane. CSVs have additional enhancements, which are described in Cluster Shared Volumes later in this topic.

  • Support for Hyper-V Replica. Hyper-V Replica provides point-in-time replication of virtual machines between storage systems, clusters, and data centers for disaster recovery.

What value do these changes add?

These scalability features in Windows Server 2012 improve the configuration, management, and maintenance of large physical clusters and Hyper-V failover clusters.

Management and mobility of clustered virtual machines and other clustered roles

In Windows Server 2012, administrators can configure settings, such as prioritized starting or placement of virtual machines and clustered roles on cluster nodes, to efficiently allocate resources to clustered workloads. The following table describes these settings:

Setting Description Scope
Priority settings: High, Medium (the default), Low, or No Auto Start - Clustered roles with higher priority are started and are placed on nodes before those with lower priority.
- If a No Auto Start priority is assigned, the role does not come online automatically after it fails, which keeps resources available so other roles can start.
All clustered roles, including clustered virtual machines
Preemption of virtual machines based on priority - The Cluster service takes lower-priority virtual machines offline when high-priority virtual machines do not have the necessary memory and other resources to start after a node failure. The freed-up resources can be assigned to high-priority virtual machines.
- When necessary, preemption starts with the lowest-priority virtual machines and continues to higher-priority virtual machines.
- Virtual machines that are preempted are later restarted in priority order.
Clustered virtual machines
Memory-aware virtual machine placement - Virtual machines are placed based on the Non-Uniform Memory Access (NUMA) configuration, the workloads that are already running, and the available resources on each node.
- The number of failover attempts before a virtual machine is successfully started is reduced. This increases the uptime for virtual machines.
Clustered virtual machines
Virtual machine mobility features - Multiple live migrations can be started simultaneously. The cluster carries out as many as possible, and then queues the remaining migrations to complete later. Failed migrations are automatically retried.
- Virtual machines are migrated to nodes with sufficient memory and other resources.
- A running virtual machine can be added to or removed from a failover cluster.
- Virtual machine storage can be live migrated.
Clustered virtual machines
Automatic node draining - The cluster automatically drains a node (moves the clustered roles that are running on the node to another node) before putting the node into maintenance mode or making other changes on the node.
- Roles fail back to the original node after maintenance operations.
- Administrators can drain a node with a single action in Failover Cluster Manager or by using the Windows PowerShell cmdlet Suspend-ClusterNode. The target node for the moved clustered roles can be specified.
- Cluster-Aware Updating uses node draining in the automated process to apply software updates to cluster nodes. For more information, see Cluster-Aware Updating later in this topic.
All clustered roles, including clustered virtual machines
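The priority and node-draining settings in the table can also be managed from Windows PowerShell. A minimal sketch, in which the group name "VM1" and the node names "Node1" and "Node2" are placeholders for your own environment:

```powershell
# Set a clustered virtual machine's priority. Valid values are
# 3000 (High), 2000 (Medium, the default), 1000 (Low), and 0 (No Auto Start).
(Get-ClusterGroup -Name "VM1").Priority = 3000

# Drain a node before maintenance, moving its clustered roles to a
# specific target node.
Suspend-ClusterNode -Name "Node1" -Drain -TargetNode "Node2"

# Resume the node and fail the roles back when maintenance is complete.
Resume-ClusterNode -Name "Node1" -Failback Immediate
```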

What value do these changes add?

These features in Windows Server 2012 improve the allocation of cluster resources (particularly when starting or maintaining nodes) in large physical clusters and Hyper-V failover clusters.

Cluster Shared Volumes

Cluster Shared Volumes (CSVs) were introduced in Windows Server 2008 R2 to provide common storage for clustered virtual machines. In Windows Server 2012, CSVs can provide storage for additional clustered roles. CSVs allow multiple nodes in the cluster to simultaneously access the same NTFS file system without imposing hardware, file type, or directory structure restrictions. With CSVs, multiple clustered virtual machines can use the same LUN and still live migrate or quick migrate from node to node independently.

The following is a summary of new CSV functionality in Windows Server 2012.

  • Storage capabilities for a wider range of clustered roles. Includes Scale-Out File Servers for application data, which provide continuously available and scalable file-based (SMB 3.0) server storage for Hyper-V and applications such as Microsoft SQL Server. For more information, see Support for Scale-Out File Servers later in this topic.

  • CSV proxy file system (CSVFS). Provides cluster shared storage with a single, consistent file namespace while still using the underlying NTFS file system.

  • Support for BitLocker Drive Encryption. Allows decryption by using the common identity of the computer account for the cluster (also called the Cluster Name Object, or CNO). This enables physical security for deployments outside secure data centers and meets compliance requirements for volume-level encryption.

  • Ease of file backup. Supports backup requestors that are running Windows Server 2008 R2 or Windows Server 2012 Backup. Backups can use application-consistent and crash-consistent Volume Shadow Copy Service (VSS) snapshots.

  • Direct I/O for file data access, including sparse files. Enhances virtual machine creation and copy performance.

  • Removal of external authentication dependencies. Improves the performance and resiliency of CSVs.

  • Integration with SMB Multichannel and SMB Direct. Uses new SMB 3.0 features to allow CSV traffic to stream across multiple networks in the cluster and leverage network adapters that support Remote Direct Memory Access (RDMA). For more information, see Server Message Block.

  • Storage can be made visible to only a subset of nodes. Enables cluster deployments that contain application and data nodes.

  • Integration with Storage Spaces. Allows virtualization of cluster storage on groups of inexpensive disks. The Storage Spaces feature in Windows Server 2012 can integrate with CSVs to permit scale-out access to data. For more information, see Storage Spaces.

  • Ability to scan and repair volumes with zero offline time. Maintains CSV availability while the NTFS file system identifies, logs, and repairs anomalies.

What value do these changes add?

These new features provide easier CSV setup, broader workload support, enhanced security and performance in a wider variety of deployments, and greater file system availability.

What works differently?

CSVs now appear as CSV File System (CSVFS) instead of NTFS.
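Adding a disk to CSV storage is a single step in Windows PowerShell. A sketch, where "Cluster Disk 1" is a placeholder name for an available cluster disk in your environment:

```powershell
# Add an available cluster disk to Cluster Shared Volumes; the volume
# then appears under C:\ClusterStorage on every node as CSVFS.
Add-ClusterSharedVolume -Name "Cluster Disk 1"

# List the CSVs, their current owner nodes, and their states.
Get-ClusterSharedVolume | Format-Table Name, OwnerNode, State
```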

Support for Scale-Out File Servers

Scale-Out File Servers can host continuously available and scalable storage by using the SMB 3.0 protocol. Failover clusters in Windows Server 2012 provide the following foundational features that support this type of file server:

  • A Distributed Network Name (DNN), which provides an access point for client connections to the Scale-Out File Servers.

  • A Scale-Out File Server resource type that supports Scale-Out File Services.

  • Cluster Shared Volumes (CSVs) for storage. For more information, see Cluster Shared Volumes earlier in this topic.

  • Integration with File Services features to configure the clustered role for the Scale-Out File Server.

What value do these changes add?

These features support continuously available and readily scalable file services for applications and for end users. For more information, see Scale-Out File Server for Application Data.
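As a sketch of the setup, the clustered role and a continuously available share can be created from Windows PowerShell. The role name "SOFS1", the share name, the path, and the group "CONTOSO\Hyper-V-Hosts" are placeholders:

```powershell
# Create the Scale-Out File Server clustered role on the cluster.
Add-ClusterScaleOutFileServerRole -Name "SOFS1"

# Share a folder on a CSV as a continuously available SMB 3.0 share
# for application data such as Hyper-V virtual machine files.
New-SmbShare -Name "VMStore" -Path "C:\ClusterStorage\Volume1\VMs" `
    -FullAccess "CONTOSO\Hyper-V-Hosts" -ContinuouslyAvailable $true
```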

Cluster-Aware Updating

Cluster-Aware Updating (CAU) is an automated feature that allows updates to be applied automatically to the host operating system or other system components in clustered servers, while maintaining availability during the update process. This feature leverages automated draining and failback of each node during the update process. By default, it uses the Windows Update Agent infrastructure as its update source. For an overview of the CAU feature, see Cluster-Aware Updating.

What value does this change add?

CAU provides increased uptime of high availability services, easier maintenance of failover clusters, and reliable and consistent IT processes.
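An updating run can be started on demand from Windows PowerShell. A minimal sketch, where "Cluster1" is a placeholder cluster name and the default Windows Update plug-in is used:

```powershell
# Scan the cluster for applicable updates and apply them one node at a
# time, draining each node before updating it and failing back afterward.
Invoke-CauRun -ClusterName "Cluster1" `
    -CauPluginName Microsoft.WindowsUpdatePlugin `
    -MaxFailedNodes 1 -RequireAllNodesOnline -Force
```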

Virtual machine application monitoring and management

In clusters running Windows Server 2012, administrators can monitor services on clustered virtual machines that are also running Windows Server 2012. This functionality extends the high-level monitoring of virtual machines that is implemented in Windows Server 2008 R2 failover clusters. If a monitored service in a virtual machine fails, the service can be restarted, or the clustered virtual machine can be restarted or moved to another node (depending on service restart settings and cluster failover settings).

What value does this change add?

This feature increases the uptime of high availability services that are running on virtual machines inside a failover cluster.
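Monitoring is configured per service from the host cluster. A sketch, where "VM1" is a placeholder virtual machine name and the Print Spooler service is used as an example:

```powershell
# Monitor the Print Spooler service inside clustered virtual machine "VM1".
Add-ClusterVMMonitoredItem -VirtualMachine "VM1" -Service "Spooler"

# List the items currently monitored for that virtual machine.
Get-ClusterVMMonitoredItem -VirtualMachine "VM1"
```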

Cluster validation tests

The Validate a Configuration Wizard in Failover Cluster Manager simplifies the process of validating hardware and software across servers for use in a failover cluster. The performance for large failover clusters has been improved, and new tests have been added.

The following are improved features related to validation:

  • Improved performance. Runs significantly faster, especially for storage tests.

  • Targeted validation of new LUNs. Allows specifying a new LUN (disk), rather than testing all LUNs when validating storage.

  • Integration with WMI. Exposes cluster validation status to applications and scripts through Windows Management Instrumentation (WMI).

  • New validation tests. Provides validation test support for CSVs, and for Hyper-V and virtual machines (when the Hyper-V role is installed).

  • Validation test awareness of replicated hardware. Helps support multisite environments.

What value do these changes add?

The added validation tests help confirm that the servers in the cluster will support smooth failover, particularly of virtual machines from one host to another.
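Validation can also be run from Windows PowerShell with the Test-Cluster cmdlet. A sketch, where the node names and the disk name "Cluster Disk 5" are placeholders; the targeted -Disk run illustrates validating a newly added LUN without retesting all storage:

```powershell
# Run the full validation suite against the prospective cluster nodes.
Test-Cluster -Node "Node1", "Node2"

# Validate only a newly added LUN instead of retesting all storage.
Test-Cluster -Disk "Cluster Disk 5" -Include "Storage"
```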

Active Directory Domain Services integration

Integration of failover clusters with Active Directory Domain Services (AD DS) is made more robust in Windows Server 2012 by the following features:

  • Ability to create cluster computer objects in targeted organizational units (OUs) or in the same OUs as the cluster nodes. Aligns failover cluster dependencies on AD DS with the delegated domain administration model that is used in many IT organizations.

  • Automated repair of cluster virtual computer objects (VCOs) if they are deleted accidentally.

  • Cluster access only to read-only domain controllers. Supports cluster deployments in branch office or perimeter network scenarios.

  • Ability of the cluster to start with no AD DS dependencies. Enables certain virtualized data center scenarios.

Note

Failover clusters do not support group Managed Service Accounts.

What value do these changes add?

These features improve the configuration and resiliency of failover clusters.

Quorum configuration and dynamic quorum

The following features in Windows Server 2012 enhance the management and functionality of the cluster quorum:

  • Configure Cluster Quorum Wizard. Simplifies quorum configuration and integrates well with new features and existing quorum functionality.

  • Vote assignment. Allows specifying which nodes have votes in determining quorum (by default, all nodes have a vote).

  • Dynamic quorum. Gives the administrator the ability to automatically manage the quorum vote assignment for a node, based on the state of the node. When a node shuts down or crashes, the node loses its quorum vote. When a node successfully rejoins the cluster, it regains its quorum vote. By dynamically adjusting the assignment of quorum votes, the cluster can increase or decrease the number of quorum votes that are required to keep running. This enables the cluster to maintain availability during sequential node failures or shutdowns.
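Vote assignment is exposed through the NodeWeight property of each cluster node. A sketch, where "Node4" is a placeholder for a node (for example, at a backup site) that should not vote:

```powershell
# Remove a node's quorum vote.
(Get-ClusterNode -Name "Node4").NodeWeight = 0

# View the configured vote (NodeWeight) and the vote that dynamic
# quorum has currently assigned to each node (DynamicWeight).
Get-ClusterNode | Format-Table Name, State, NodeWeight, DynamicWeight
```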

What value do these changes add?

These enhancements simplify quorum setup and increase the availability of the cluster in failure scenarios.

Cluster upgrade and migration

By using the updated Migrate a Cluster Wizard in Windows Server 2012, administrators can migrate the configuration settings for clustered roles (formerly called clustered services and applications) from clusters that are running Windows Server 2012, Windows Server 2008 R2, or Windows Server 2008. Migration enhancements in Windows Server 2012 include:

  • Export and reimport Hyper-V virtual machines.

  • Migrate to CSV disks.

  • Map storage and virtual networks.

  • Reuse existing storage.

What value does this change add?

The Migrate a Cluster Wizard provides ease and flexibility to deploy, upgrade, and migrate failover clusters.

Task Scheduler integration

In Windows Server 2012, Task Scheduler is integrated with Failover Clustering to allow the administrator to configure clustered tasks. A clustered task is a Task Scheduler task that is registered on all cluster nodes. Depending on the task, it can be enabled on all or a subset of the nodes.

The administrator can configure clustered tasks in three ways:

  • Cluster-wide. The task is scheduled on all cluster nodes.

  • Any node. The task is scheduled on a single, random node.

  • Resource specific. The task is scheduled only on a node that owns a specified cluster resource.

The administrator can configure and manage clustered tasks by using the following Windows PowerShell cmdlets:

  • Register-ClusteredScheduledTask

  • Set-ClusteredScheduledTask

  • Get-ClusteredScheduledTask

  • Unregister-ClusteredScheduledTask
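As a sketch of registering a cluster-wide task with these cmdlets (the task name, script path, and schedule are placeholders; the action and trigger come from the ScheduledTasks module):

```powershell
# Define what the task runs and when it runs.
$action  = New-ScheduledTaskAction -Execute "PowerShell.exe" `
    -Argument "-File C:\Scripts\Cleanup.ps1"
$trigger = New-ScheduledTaskTrigger -Daily -At 2:00AM

# Register the task on every cluster node (ClusterWide).
Register-ClusteredScheduledTask -TaskName "NightlyCleanup" `
    -TaskType ClusterWide -Action $action -Trigger $trigger
```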

What value does this change add?

Clustered tasks provide a flexible mechanism to use cluster resources to run applications or processes at predefined times.

Windows PowerShell support

To use the Windows PowerShell cmdlets for Failover Clustering, you must install the Failover Cluster module for Windows PowerShell that is included with the Failover Clustering tools. For a complete listing of the cmdlets, see Failover Clustering Cmdlets in Windows PowerShell.

New Windows PowerShell cmdlets support capabilities in Failover Clustering in Windows Server 2012, including the following:

  • Manage cluster registry checkpoints, including cryptographic checkpoints.

  • Create Scale-Out File Servers.

  • Monitor virtual machine applications.

  • Update the properties of a Distributed Network Name resource.

  • Create and manage clustered tasks.

  • Create an iSCSI Target Server for high availability.

What value does this change add?

The new Windows PowerShell cmdlets provide management and scripting support for the Failover Clustering features in Windows Server 2012.

What works differently?

The Test-ClusterResourceFailure cmdlet replaces Fail-ClusterResource.

Removed or deprecated functionality

  • The Cluster.exe command-line tool is deprecated, but it can be optionally installed with the Failover Clustering tools. Windows PowerShell cmdlets for Failover Clustering provide functionality that is generally equivalent to Cluster.exe commands. For more information, see Mapping Cluster.exe Commands to Windows PowerShell Cmdlets for Failover Clusters.

  • The Cluster Automation Server (MSClus) COM interface is deprecated, but it can be optionally installed with the Failover Clustering tools.

  • Support for 32-bit cluster resource DLLs is deprecated, but 32-bit DLLs can be optionally installed. You should update cluster resource DLLs to 64-bit.

  • The Print Server role is removed from the High Availability Wizard, and it cannot be configured in Failover Cluster Manager. Instead, see High Availability Printing Overview.

  • The Add-ClusterPrintServerRole cmdlet is deprecated, and it is not supported in Windows Server 2012.

See also

  • Failover Clustering

  • What's New in Failover Clusters in Windows Server 2008 R2

  • What's New in Failover Clusters in Windows Server 2008