Link to VMware Compatibility Guide
Purity Release      | Certified vSphere Version(s)
1.0.0               | -
1.1.0               | 6.0-7.0 (all updates)
1.2.0               | -
2.0.3               | 6.0-7.0 (all updates)
2.4.0               | 6.0-7.0 (all updates)
3.0.0               | -
3.1.0               | -
3.2.0               | 6.7-7.0 (all updates)
FlashBlade Connectivity
FlashBlade™ client data is served via four 40Gb/s QSFP+ or 32 10Gb/s Ethernet ports. While it is beyond the scope of this document to describe and analyze available network technologies, at a minimum two network uplinks (one from each Fabric Module) are recommended. Each uplink should be connected to a different LAN switch. This network topology protects against switch failures as well as Fabric Module (FM) and individual network port failures.
BEST PRACTICE: Provide at least two network uplinks (one per Fabric Module).
An example of a high-performance, high-redundancy network configuration with four FlashBlade uplinks and Cisco UCS is shown in Figure 5. Please note that the Cisco Nexus switches are configured with a virtual Port Channel (vPC).
BEST PRACTICE: Separate the storage network from other networks.
Figure 5
ESXi Host Connectivity
ESXi host connectivity to NAS devices is provided by Virtual switches with VMkernel adapters and port groups. The Virtual switch must have at least one physical adapter (vmnic) assigned. While it is possible to connect ESXi hosts to an NFS datastore with a single vmnic, this configuration does not protect against potential NIC failures. Whenever possible, it is recommended to create a Virtual switch and assign two vmnics to each dedicated VMkernel adapter.
BEST PRACTICE: Assign two vmnics to a dedicated VMkernel adapter and Virtual switch.
Additionally, to reduce the Ethernet broadcast domain, connections should be configured using separate VLANs and IP subnets. By default, the ESXi host directs NFS data traffic through a single NIC. Therefore, a single NIC's bandwidth, even in multi-vmnic Virtual switch configurations, is the limiting factor for NFS datastore I/O operations.
PLEASE READ: An NFS datastore connection is limited by a single NIC's bandwidth.
Network traffic load balancing for Virtual switches with multiple vmnics may be configured by changing the load-balancing policy – see the VMware Load Balancing section.
ESXi Virtual Switch Configuration
A basic recommended ESXi Virtual switch configuration is shown in Figure 6.
Figure 6
For ESXi hosts with two high-bandwidth Network Interface Cards, adding a second VMkernel port group will increase I/O parallelism – see Figure 7 and the Datastores section for additional details. Please note that the two VMkernel port groups are on different IP subnets.
Figure 7
For ESXi hosts with four or more high-bandwidth Network Interface Cards, it is recommended to create a dedicated Virtual switch for each pair of NICs – see Figure 8.
Please note that each Virtual switch and each VMkernel port group exists on a different IP subnet, with corresponding datastores. This configuration provides optimal connectivity to the NFS datastores by increasing I/O parallelism on the ESXi host as well as on FlashBlade – see the Datastores section for additional details.
Figure 8
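For reference, the layouts shown in Figures 6 through 8 can also be built from the ESXi shell with esxcli. The following is a minimal sketch only; the switch, uplink, port group, and IP values (vSwitch2, vmnic2, vmnic3, VMkernelNFS-A, vmk2, 10.25.0.21) are placeholders, not values taken from this document:
esxcli network vswitch standard add --vswitch-name=vSwitch2
esxcli network vswitch standard uplink add --vswitch-name=vSwitch2 --uplink-name=vmnic2
esxcli network vswitch standard uplink add --vswitch-name=vSwitch2 --uplink-name=vmnic3
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch2 --portgroup-name=VMkernelNFS-A
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=VMkernelNFS-A
esxcli network ip interface ipv4 set --interface-name=vmk2 --type=static --ipv4=10.25.0.21 --netmask=255.255.255.0
Repeat the port group and VMkernel interface steps with a second IP subnet (or a second Virtual switch) to match the layouts in Figures 7 and 8.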
VMware Load Balancing
VMware supports several load balancing algorithms for virtual switches:
- Route based on originating virtual port – network uplinks are selected based on the virtual machine port ID; this is the default routing policy.
- Route based on source MAC hash – network uplinks are selected based on the virtual machine MAC address.
- Route based on IP hash – network uplinks are selected based on the source and destination IP address of each datagram.
- Route based on physical NIC load – uplinks are selected based on the load evaluation performed by the virtual switch; this algorithm is available only on the vSphere Distributed Switch.
- Explicit failover – uplinks are selected based on the order defined in the list of Active adapters; no load balancing.
The Route based on originating virtual port and Route based on source MAC hash teaming and failover policies require Virtual switch to virtual machine connections. Therefore, they are not appropriate for VMkernel Virtual switches and NFS datastores. The Route based on IP hash policy is the only applicable teaming option.
With Route based on IP hash load balancing, egress network traffic can be directed through one vmnic and ingress traffic through another.
The Route based on IP hash teaming policy also requires configuration changes on the network switches. The procedure to properly set up link aggregation is beyond the scope of this document. The following VMware Knowledge Base article provides additional details and examples for configuring EtherChannel / Link Aggregation Control Protocol (LACP) with ESXi/ESX and Cisco/HP switches:
https://kb.vmware.com/s/article/1004048
For the steps required to change VMware’s load balancing algorithm, see Appendix A.
Datastores
The performance of FlashBlade™ and its DirectFlash™ modules (blades) does not depend on the number of file systems created and exported to the hosts. However, for each host connection there is an internal 10Gb/s bandwidth threshold between the Fabric Module and the DirectFlash module (blade) – see Figure 2. While data is distributed among multiple blades, a single DirectFlash module provides the host-to-storage network connection. The blade selected to service a specific host connection is determined by a hashing function. This methodology minimizes the possibility of the same blade being used by multiple hosts. For instance, one connection from a host may be internally routed to blade 1, whereas another connection from the same host may be internally routed to blade 2 for storage access.
The number of datastores connected to the ESXi host will depend on the number of available network interfaces (NICs), bandwidth, and performance requirements. To take full advantage of FlashBlade parallelism, create or mount at least one datastore per host per network connection.
BEST PRACTICE: Create or mount at least one datastore per host per network connection.
The basic ESXi host single datastore connection is shown in Figure 9.
Figure 9
For servers with high-bandwidth NICs (40Gb/s or higher), create two or more VMkernel port groups per Virtual switch and assign an IP address to each port group. These IP addresses need to be on different subnets. In this configuration, the connection to each exported file system will be established using a dedicated VMkernel port group and the corresponding NICs. This configuration is shown in Figure 10.
Figure 10
For servers with four or more high-bandwidth network adapters, create a dedicated Virtual switch for each pair of vmnics. The VMkernel port groups need to have IP addresses on different subnets. This configuration parallelizes the ESXi host connectivity as well as the internal FlashBlade connectivity. See Figure 11.
Figure 11
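To illustrate the point, a two-datastore layout of this kind can be mounted from the ESXi shell as follows. This is a minimal sketch; the FlashBlade VIPs (10.25.0.10, 10.25.1.10), export paths (/DS10, /DS20), and datastore names are placeholders that assume one data VIP was created per subnet as described in the FlashBlade Configuration section:
esxcli storage nfs add --host=10.25.0.10 --share=/DS10 --volume-name=DS10
esxcli storage nfs add --host=10.25.1.10 --share=/DS20 --volume-name=DS20
esxcli storage nfs list
Because each datastore is reached through a different VIP and subnet, each mount uses a different VMkernel interface and a different FlashBlade connection, which is what provides the additional I/O parallelism.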
BEST PRACTICE: Mount datastores on all hosts.
FlashBlade Configuration
The configuration of the FlashBlade includes the creation of a subnet and network interfaces for host connectivity.
All of these tasks may be accomplished using FlashBlade's web-based HTML5 user interface (no client installation required), the command line, or the REST API.
Configuring Client Connectivity
Create subnet for client (NFS, SMB, HTTP/S3) connectivity
- Command Line Interface (CLI):
puresubnet create --prefix <subnet/mask> --vlan <vlan_id> <vlan_name>
Example:
puresubnet create --prefix 10.25.0.0/16 --vlan vlan2025
- Graphical User Interface (GUI) – see Figure 12.
  - Select Settings in the left pane.
  - Select Network.
  - Select the '+' sign at the top right of the Subnets header.
  - Provide values in the Create Subnet dialog window:
    - Name – subnet name
    - Prefix – network address for the subnet with the subnet mask length
    - Gateway – optional IP address of the gateway
    - MTU – optional Maximum Transmission Unit size; default is 1500, change to 9000 for jumbo frames – see also Appendix B
  - Click Create.
Figure 12
Create a Virtual Network Interface, Assign it to the Existing VLAN
- Command Line Interface (CLI):
purenetwork create vip --address <IP_address> --servicelist data <name>
Example:
purenetwork create vip --address 10.25.0.10 --servicelist data subnet25_NIC
- Graphical User Interface (GUI) – see Figure 13.
  - Select Settings in the left pane.
  - Select the Add interface '+' sign.
  - Provide values in the Create Network Interface dialog box:
    - Name – interface name
    - Address – IP address where file systems can be mounted
    - Services – not modifiable
    - Subnet – not modifiable
  - Click Create.
Figure 13
Creating and Exporting File System
Create and Export File System
- Command Line Interface (CLI):
purefs create --rules <rules> --size <size> File_System
Example:
purefs create --rules '*(rw,no_root_squash)' --size 78GB DS10
For existing file systems, modify the export rules (if necessary):
purefs setattr --rules <rules> File_System
Example:
purefs setattr --rules '*(rw,no_root_squash)' DS10
where --rules are standard NFS (FlashBlade supported) export rules in the format 'ip_address(options)':
* (asterisk) – export the file system to all hosts.
rw – the exported file system will be readable and writable.
ro – the exported file system will be read-only.
root_squash – access by the root user will be mapped to the anonymous user ID.
no_root_squash – access by the root user will not be mapped to the anonymous user ID.
fileid_32bit – the exported file system will support clients that require 32-bit inode support.
Add the desired protocol to the file system:
purefs add --protocol <protocol> File_System
Example:
purefs add --protocol nfs DS10
Optionally enable the fast-remove and/or snapshot options:
purefs enable --fast-remove-dir --snapshot-dir File_System
- Graphical User Interface (GUI) – see Figure 14.
  - Select Storage in the left pane.
  - Select File Systems and the '+' sign.
  - Provide values in the Create File System dialog:
    - File system Name
    - Provisioned Size
    - Select unit (K, M, G, T, P)
    - Optionally enable Fast Remove
    - Optionally enable Snapshot
    - Enable NFS
    - Provide Export Rules [*(rw,no_root_squash)]
  - Click Create.
For ESXi hosts, the rw,no_root_squash export rules are recommended. It is also recommended to export the file system to all hosts (include * in front of the parentheses). This allows the NFS datastores to be mounted on all ESXi hosts.
BEST PRACTICE: Use the *(rw,no_root_squash) rule for exporting file systems to ESXi hosts.
Figure 14
ESXi Host Configuration
The basic ESXi host configuration consists of creating a dedicated Virtual switch and datastore.
Creating Virtual Switch
To create a Virtual switch and NFS based datastores using the vSphere Web Client, follow the steps below:
1. Create a vSwitch – see Figure 15.
a. Select the hosts tab, Host ➤ Configure (tab) ➤ Virtual switches ➤ Add host networking icon.
Figure 15
b. Select connection type: select VMkernel Network Adapter – see Figure 16.
Figure 16
c. Select target device: New standard switch – see Figure 17.
Figure 17
d. Create a Standard Switch: assign free physical network adapters to the new switch (click the green '+' sign and select an available active adapter (vmnic)) – see Figure 18.
Figure 18
e. Select Next when finished assigning adapters.
f. Port properties – see Figure 19.
i. Network label (for example: VMkernelNFS)
ii. VLAN ID: leave at the default (0) if you are not planning to tag outgoing network frames.
iii. TCP/IP stack: Default
iv. Available services: all disabled (unchecked).
Figure 19
g. IPv4 settings – see Figure 20.
i. IPv4 settings: Use static IPv4 settings.
ii. Provide the IP address and the corresponding subnet mask.
iii. Review the settings and finish creating the Virtual switch.
Figure 20
2. Optionally verify connectivity from the ESXi host to the FlashBlade file system.
a. Log in as root to the ESXi host.
b. Issue the vmkping command:
vmkping <destination_ip>
Creating Datastore
1. Select the hosts tab, Host ➤ Datastores ➤ New Datastore – see Figure 21.
Figure 21
2. New Datastore - see Figure 22.
a. Type: NFS
Figure 22
3. Select NFS version: NFS 3 - see Figure 23.
Figure 23
a. Datastore name: a friendly name for the datastore (for example: DS10) – see Figure 24.
b. Folder: specify the folder where this datastore's file system was created on FlashBlade – see Creating and Exporting File System.
c. Server: the IP address or FQDN of the VIP on the FlashBlade.
Figure 24
When mounting an NFS datastore on multiple hosts, you must use the same FQDN or IP address and the same datastore name on every host. If using an FQDN, ensure that the DNS records have been updated and that the ESXi hosts are configured with the IP address of the DNS server.
BEST PRACTICE: Mount NFS datastores using IP addresses.
Mounting an NFS datastore using an IP address instead of an FQDN removes the dependency on the availability of DNS servers.
ESXi NFS Datastore Configuration Settings
Adjust the following ESXi parameters on each ESXi host (see Table 1):
- NFS.MaxVolumes – maximum number of NFS mounted volumes (per host)
- Net.TcpipHeapSize – initial TCP/IP heap size in MB
- Net.TcpipHeapMax – maximum TCP/IP heap size in MB
- SunRPC.MaxConnPerIp – maximum number of unique TCP connections per IP address
Parameter           | Default Value | Maximum Value | Recommended Value
NFS.MaxVolumes      | 8             | 256           | 256
Net.TcpipHeapSize   | 0 MB          | 32 MB         | 32 MB
Net.TcpipHeapMax    | 512 MB        | 1536 MB       | 512 MB
SunRPC.MaxConnPerIp | 4             | 128           | 128
Table 1
SunRPC.MaxConnPerIp should be increased to avoid sharing host-to-datastore TCP connections. With a maximum of 256 NFS datastores but only 128 unique TCP connections per IP, connection sharing is forced once the number of mounted datastores exceeds the connection limit.
The settings listed in Table 1 must be adjusted on each ESXi host using the vSphere Web Client (Advanced System Settings) or the command line, and may require a reboot.
Changing ESXi Advanced System Settings
To change ESXi advanced system settings using vSphere Web Client GUI – see Figure 25.
- Select Host (tab) ➤ Host ➤ Configure ➤ Advanced System Settings ➤ Edit.
- In the Edit Advanced System Settings window, use the search box to locate the required parameter, modify its value, and click OK.
- Reboot if required.
Figure 25
To change Advanced System Settings using esxcli:
esxcli system settings advanced set --option="/SectionName/OptionName" --int-value=<value>
Example:
esxcli system settings advanced set --option="/NFS/MaxVolumes" --int-value=16
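The recommended values from Table 1 can be applied in one pass from the ESXi shell. This is a minimal sketch using the same esxcli syntax shown above; a reboot may still be required for some settings to take effect:
esxcli system settings advanced set --option="/NFS/MaxVolumes" --int-value=256
esxcli system settings advanced set --option="/Net/TcpipHeapSize" --int-value=32
esxcli system settings advanced set --option="/Net/TcpipHeapMax" --int-value=512
esxcli system settings advanced set --option="/SunRPC/MaxConnPerIp" --int-value=128
To verify a value:
esxcli system settings advanced list --option="/NFS/MaxVolumes"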
Virtual Machine Configuration
For virtual machines residing on FlashBlade-backed NFS datastores, only thin provisioned virtual disks are available; FlashBlade does not support thick provisioned disks at this time. Support for thick provisioning will be added in the future.
Figure 26
Based on VMware recommendations, additional disks (other than the root disk on Linux or the C:\ drive on Windows) should be attached to a VMware Paravirtual SCSI controller.
Snapshots
Snapshots provide a convenient means of creating a recovery point and can be enabled on FlashBlade on a per-file-system basis. The snapshots themselves are located in the .snapshot directory of the exported file system. The contents of the .snapshot directory may be copied to a different location, providing a recovery point. To recover a virtual machine using a FlashBlade snapshot:
1. Mount the .snapshot directory with the 'Mount NFS as read-only' option on the host where you would like to recover the virtual machine – see Figure 27.
Figure 27
2. Select the newly mounted datastore and browse the files to locate the directory where the virtual machine files reside – see Figure 28.
3. Select and copy all virtual machine files to another directory on a different datastore.
Figure 28
4. Register the virtual machine by selecting Host ➤ Datastore ➤ Register VM and browsing to the new location of the virtual machine files – see Figure 29.
Figure 29
5. Unmount the datastore mounted in step 1.
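Steps 1 through 5 above can also be performed from the ESXi shell. The sketch below is an illustration only; the VIP (10.25.0.10), export (/DS10), snapshot name, virtual machine directory (vm01), and target datastore (DS20) are placeholders:
esxcli storage nfs add --host=10.25.0.10 --share=/DS10/.snapshot --volume-name=DS10_snap --readonly
cp -r /vmfs/volumes/DS10_snap/<snapshot_name>/vm01 /vmfs/volumes/DS20/vm01
vim-cmd solo/registervm /vmfs/volumes/DS20/vm01/vm01.vmx
esxcli storage nfs remove --volume-name=DS10_snap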
VMware managed snapshots are fully supported.
Conclusion
While the recommendations and suggestions outlined in this paper do not cover all possible ESXi and FlashBlade implementation details and configuration settings, they should serve as a guideline and provide a starting point for NFS datastore deployments. Continuous data collection and analysis of the network, the active ESXi hosts, and FlashBlade performance characteristics is the best method of determining whether, and which, changes may be required to deliver the most reliable, robust, and high-performing virtualized compute service.
BEST PRACTICE: Always monitor your network, FlashBlade, and ESXi hosts.
Appendix A
Changing the network load balancing policy – see Figure A1.
To change the network load balancing policy using the command line:
esxcli network vswitch standard policy failover set -l iphash -v <vswitch-name>
Example:
esxcli network vswitch standard policy failover set -l iphash -v vSwitch1
To change the network load balancing policy using the vSphere Web Client:
- Select Host (tab) ➤ Host ➤ Configure ➤ Virtual switches.
- Select the switch ➤ Edit (pencil icon).
- In the Virtual switch Edit Settings dialog ➤ Teaming and failover ➤ Load Balancing ➤ Route based on IP hash.
Figure A1
Appendix B
While the typical Ethernet (IEEE 802.3 Standard) Maximum Transmission Unit is 1500 bytes, larger MTU values are also possible. Both FlashBlade and ESXi provide support for jumbo frames with an MTU of 9000 bytes.
FlashBlade Configuration
Create subnet with MTU 9000
1. Command Line Interface (CLI):
puresubnet create --prefix <subnet/mask> --vlan <vlan_id> --mtu <mtu> <vlan_name>
Example:
puresubnet create --prefix 10.25.64.0/21 --vlan vlan2064 --mtu 9000
2. Graphical User Interface (GUI) – see Figure B1.
i. Select Settings in the left pane.
ii. Select Network and the '+' sign next to "Subnets".
iii. Provide values in the Create Subnet dialog window, changing MTU to 9000.
iv. Click Save.
Figure B1
Change an existing subnet's MTU to 9000:
- Select Settings in the left pane.
- Select the Edit Subnet icon.
- Provide the new value for MTU.
- Click Save.
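The MTU of an existing subnet can typically also be changed from the FlashBlade CLI. This is a sketch under the assumption that puresubnet setattr accepts the --mtu option in the same way puresubnet create does; <subnet_name> is the name of the existing subnet:
puresubnet setattr --mtu 9000 <subnet_name>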
ESXi Host Configuration
Jumbo frames need to be enabled on a per-host and per-VMkernel-switch basis. Only command line configuration examples are provided below.
1. Log in as root to the ESXi host.
2. Modify MTU for the NFS datastore vSwitch
esxcfg-vswitch -m <MTU> <vSwitch>
Example:
esxcfg-vswitch -m 9000 vSwitch2
3. Modify MTU for the corresponding port group
esxcfg-vmknic -m <MTU> <portgroup_name>
Example:
esxcfg-vmknic -m 9000 VMkernel2vs
4. Verify connectivity between the ESXi host and the NAS device using jumbo frames:
vmkping -s 8784 -d <destination_ip>
Example:
vmkping -s 8784 -d 192.168.1.10
The -d option disables datagram fragmentation and the -s option defines the size of the ICMP payload. ESXi does not support an MTU greater than 9000 bytes; allowing 216 bytes for headers, the payload size should be 8784 bytes.