
SRM User Guide: Operation Log Locations

Pure Storage Storage Replication Adapter Log File Cheat Sheet

This cheat sheet lists the common Site Recovery Manager (SRM) management and recovery operations and the SRA operations they initiate. The log location listed for each SRA operation is one of the following:

  • Protected SRM server—with respect to the Recovery Plan
  • Recovery SRM server—with respect to the Recovery Plan
  • Initiating SRM server—the SRM server where the operation was started (for example, an array manager created on the site A SRM server is logged on site A)
  • Both—the operation creates a log on both SRM servers
  • Former Recovery Site—in the case of reprotect, where the recovery and protected roles have switched for that recovery plan, this is the original recovery (now protected) SRM server

Photon SRM Appliance Logs

The SRM logs are located at:

/var/log/vmware/srm/

The SRA logs are located at:

/var/log/vmware/srm/SRAs/sha256{RandomCharacters}
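Because the sha256 directory name varies per SRA installation, a small helper can locate the newest one. This is a sketch only; the function name is illustrative and the default root is the Photon appliance path above:

```shell
# Sketch: print the most recently modified SRA log directory under an SRM
# log root (defaults to the Photon appliance path documented above).
latest_sra_log_dir() {
  root="${1:-/var/log/vmware/srm}"
  # ls -dt sorts directories by modification time, newest first
  ls -dt "$root"/SRAs/sha256* 2>/dev/null | head -n 1
}

# On the appliance you might then inspect it with:
#   ls -lt "$(latest_sra_log_dir)" | head
```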

Windows SRM Server Logs

SRM logs are located at:

C:\ProgramData\VMware\VMware vCenter Site Recovery Manager\Logs\vmware-dr*

SRA logs are located at:

C:\ProgramData\VMware\VMware vCenter Site Recovery Manager\Logs\SRAs\purestorage\

SRM Operations

SRA Discovery

SRM Operation   SRA Operation                    Log Location
-------------   -------------                    ------------
SRA Discover    QueryInfo                        Initiating SRM server
                QueryCapabilities                Initiating SRM server
                QueryConnectionParameters        Initiating SRM server
                QueryErrorDefinitions            Initiating SRM server
                DiscoverArrays [i]               Initiating SRM server
                DiscoverDevice [ii]              Initiating SRM server
                QueryReplicationSettings [iii]   Initiating SRM server

Create Array Manager

SRM Operation     SRA Operation    Log Location
-------------     -------------    ------------
Discover Arrays   DiscoverArrays   Initiating SRM server

Discover New Arrays

SRM Operation     SRA Operation    Log Location
-------------     -------------    ------------
Discover Arrays   DiscoverArrays   Initiating SRM server

Enable Array Pair

SRM Operation      SRA Operation     Log Location
-------------      -------------     ------------
Discover Devices   DiscoverDevices   Both

Discover New Devices

SRM Operation      SRA Operation     Log Location
-------------      -------------     ------------
Discover Devices   DiscoverDevices   Both

Test Recovery Start

SRM Recovery Plan Step              SRA Operation                   Log Location
----------------------              -------------                   ------------
Synchronize Storage                 QueryReplicationSettings [iv]   Protected
                                    SyncOnce [v]                    Protected
                                    QuerySyncStatus [vi]            Protected
Create Writeable Storage Snapshot   TestFailoverStart               Recovery
                                    DiscoverDevices                 Recovery

Test Recovery Cleanup

SRM Recovery Plan Step                SRA Operation      Log Location
----------------------                -------------      ------------
Discard test data and reset storage   TestFailoverStop   Recovery
                                      DiscoverDevices    Recovery

Recovery (Planned Migration; in DR some operations may not occur)

SRM Recovery Plan Step                      SRA Operation              Log Location
----------------------                      -------------              ------------
Pre-synchronize Storage                     QueryReplicationSettings   Protected
                                            SyncOnce                   Protected
                                            QuerySyncStatus            Protected
Prepare Protected VMs for Migration         PrepareFailover            Protected
                                            DiscoverDevices            Protected
Synchronize Storage                         SyncOnce                   Protected
                                            QuerySyncStatus            Protected
Change Recovery Site Storage to Writeable   Failover                   Recovery
                                            DiscoverDevices            Recovery

Reprotect

SRM Recovery Plan Step                   SRA Operation              Log Location
----------------------                   -------------              ------------
Configure Storage to Reverse Direction   ReverseReplication         Former Recovery Site
                                         DiscoverDevices            Both
Synchronize Storage                      QueryReplicationSettings   Former Recovery Site
                                         SyncOnce                   Former Recovery Site
                                         QuerySyncStatus            Former Recovery Site

Glossary of SRA Operations

This section lists the relevant SRM-to-SRA operations. Each operation is defined first according to what SRM expects to happen, and then by what the Pure Storage SRA actually does to fulfill SRM's expectations.

  • queryInfo
    • SRM: Queries the SRA for basic properties such as name and version
    • SRA: Returns SRA name, version number, company and website
  • queryCapabilities
    • SRM: Queries the SRA for supported models of storage arrays and supported SRM commands
    • SRA: Returns FA 400 series, Purity 4.0, supported protocols (FC and iSCSI) and supported SRM commands: failover, discoverArrays, discoverDevices, prepareFailover, prepareRestoreReplication, queryCapabilities, queryConnectionParameters, queryErrorDefinitions, queryReplicationSettings, querySyncStatus, restoreReplication, reverseReplication, syncOnce, testFailoverStart, testFailoverStop, queryInfo.
  • queryErrorDefinitions
    • SRM: Queries the SRA for pre-defined array specific errors
    • SRA: Returns error messages relating specifically to the FlashArray
  • queryConnectionParameters
    • SRM: Queries the SRA for parameters needed to connect to the array management system to perform array management operations
    • SRA: Returns questions for array manager configuration to request connection/credential information for the local and remote FlashArray.
  • discoverArrays
    • SRM: Discovers storage array pairs configured for replication
    • SRA: Returns FlashArray information, Purity level, controller information and serial number/name.
  • discoverDevices
    • SRM: Discovers replicated devices on a given storage array
    • SRA: Returns local and remotely replicated devices (name and state), hosts on FlashArray information, initiator and storage port identifiers. Also looks for demoted devices, devices used for test failover and recovered volumes that are not yet replicated.
  • queryReplicationSettings
    • SRM: Queries replication settings for a list of devices
    • SRA: Returns host grouping information for local, replicated devices
  • syncOnce
    • SRM: Requests immediate replication
    • SRA: Starts a FlashRecover “replicatenow” operation and creates new remote snapshots for the given devices on the remote array through their protection groups. If multiple protection groups are involved, they will all be replicated. The source device and the new snapshot name are returned to SRM along with the replication progress.
  • querySyncStatus
    • SRM: Queries the status of a replication initiated by syncOnce
    • SRA: Returns the source device and the new snapshot name to SRM along with the replication progress.
  • testFailoverStart
    • SRM: Creates writable temporary copies of the target devices
    • SRA: The SRA identifies the latest snapshot for each volume, identifies the ESXi connectivity information, and correlates it with the configured hosts on the FlashArray. It favors attaching to a host group over a host. It then creates a new volume for each source volume with the suffix –puresra-testfailover, associates the snapshots, and connects the volumes to the host or host group.
  • testFailoverStop
    • SRM: Deletes the temporary copies created by testFailoverStart
    • SRA: Disconnects and eradicates volumes created for a test recovery. Only volumes with the original prefix (the source name) and the puresra-testfailover suffix will be identified and eradicated. Replica names are returned to SRM.
  • prepareFailover
    • SRM: Makes source devices read-only and optionally takes a snapshot of the source devices in anticipation of a failover
    • SRA: SRA renames the source volumes with a suffix of –puresra-demoted and disconnects them from the hosts. Returns original volume name to SRM.
  • failover
    • SRM: Promotes target devices by stopping replication for those devices and making them writable
    • SRA: The SRA identifies the latest snapshot for each volume, identifies the ESXi connectivity information, and correlates it with the configured hosts on the FlashArray. It favors attaching to a host group over a host. It then either creates a new volume for each source volume or, if a volume from a previous recovery exists with the –puresra-demoted suffix, renames it with the suffix –puresra-failover, associates the snapshots, and connects the volumes to the host or host group.
  • reverseReplication
    • SRM: Reverses array replication so that the original target array becomes the source array and vice versa
    • SRA: Renames recovery volumes to remove –puresra-failover suffix, recreates the protection groups for the original source volumes on the target side and adds the volumes into it.

Footnotes

[i] Only created if there are already existing array managers created in SRM for the Pure Storage SRA.

[ii] Only created if one or more array pairs are enabled.

[iii] Only created if one or more array pairs are enabled.

[iv] Only created if “Replicate Recent Changes” is selected at the start of the test recovery

[v] Only created if “Replicate Recent Changes” is selected at the start of the test recovery

[vi] Only created if “Replicate Recent Changes” is selected at the start of the test recovery



Troubleshooting when Formatting iSCSI VMFS Datastore Generates Error

Symptoms

vSphere reports the following error while attempting to format a VMFS datastore using a Pure Storage iSCSI LUN:

"HostDatastoreSystem.CreateVmfsDatastore" for object "<...>" on vCenter Server "<...>" failed

The LUN will report as online and available under the "Storage Adapters" section in the vSphere Client.

Diagnosis

This error can be caused by improper configuration in the network path, which prevents jumbo frames from passing intact from the ESXi host to the FlashArray.

How to confirm Jumbo Frames can pass through the network

Run the following command from the ESXi Host in question via SSH:

vmkping -d -s 8972 <target portal ipaddress>

If no response is received, or the following message is returned, then jumbo frames are not successfully traversing the network:

sendto() failed (Message too long)

sendto() failed (Message too long)

sendto() failed (Message too long)

In that case, an L2 device between the ESXi host and the FlashArray is not allowing jumbo frames to pass. Have the customer check the virtual and physical switches on the subnet to ensure jumbo frames are configured end to end.
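When several portals are in play, the same vmkping check can be run in a loop from the ESXi shell. This is a sketch; the portal IPs below are placeholders for your FlashArray iSCSI portal addresses:

```shell
# Sketch: test jumbo-frame reachability to each iSCSI target portal from the
# ESXi shell. The IPs are placeholders; substitute your FlashArray portals.
# Add "-I vmkN" if you need to force a specific vmkernel interface.
for ip in 192.0.2.10 192.0.2.11; do
  echo "== testing $ip =="
  vmkping -d -s 8972 "$ip" || echo "jumbo frames are NOT passing to $ip"
done
```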

Solution

Make sure all network devices allow jumbo frames to pass from the ESXi host to the Pure Storage FlashArray.


Troubleshooting when ESXi Hosts Disconnect with CHAP Enabled

Problem

Enabling CHAP authentication causes ESXi hosts to disconnect, and they are unable to reconnect.

Scenario

The array has CHAP authentication enabled, and the host is unable to reconnect after CHAP is configured on the ESXi host.

Cause

Purity does not support Dynamic Discovery with CHAP.

Solution

Follow this blog post for a more detailed guide.

Configure the ESXi host to use static CHAP, confirm that dynamic CHAP is not set up, and make sure "Inherit from parent" is not checked.



There are two methods of configuring CHAP with the Pure array:

Procedure 1 - Manually enter Static Discovery targets

  1. Configure the array to use CHAP by entering Host User and Host Password in GUI > Storage Tab > Host > Gear > Configure CHAP.
  2. Confirm iSCSI Adapter > Properties > Dynamic Discovery does NOT contain the array target.
  3. Configure the iSCSI initiator to use CHAP by selecting Use CHAP and entering a Name and Secret that match the array CHAP settings in vSphere Client > iSCSI Adapter > Properties > General tab > CHAP.
  4. Add the Static Discovery array targets in vSphere Client > iSCSI Adapter > Properties > Static Discovery. Confirm the CHAP settings for each target are set to Inherit from parent.
  5. Rescan the adapter.

Procedure 2 - Enter CHAP settings for each discovered Static Discovery target

  1. Configure the array to use CHAP by entering Host User and Host Password in GUI > Storage Tab > Host > Gear > Configure CHAP.
  2. Enter a single array iSCSI port IP address in iSCSI Adapter > Properties > Dynamic Discovery.
  3. Confirm the Static Discovery list is populated with the array iSCSI targets in iSCSI Adapter > Properties > Static Discovery.
  4. For each array Static Discovery target, configure the CHAP settings to NOT inherit from parent and to Use CHAP in iSCSI Adapter > Properties > Static Discovery > <target> > Settings > CHAP.
  5. Rescan the adapter.
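The same Procedure 2 settings can also be applied from the ESXi command line. This is a hedged sketch: the adapter name vmhba37, the portal IP, the target IQN, and the CHAP credentials are all placeholders for your environment.

```shell
# Sketch of Procedure 2 via esxcli (adapter, IP, IQN, and credentials are
# placeholders; substitute values from your host and FlashArray).

# 1. Add a single array portal for dynamic discovery
esxcli iscsi adapter discovery sendtarget add \
  --adapter=vmhba37 --address=192.0.2.10:3260

# 2. Rescan so the Static Discovery list is populated
esxcli storage core adapter rescan --adapter=vmhba37

# 3. For each discovered static target, set CHAP explicitly (do not inherit)
esxcli iscsi adapter target portal auth chap set \
  --adapter=vmhba37 --address=192.0.2.10:3260 \
  --name="iqn.2010-06.com.purestorage:flasharray.example" \
  --direction=uni --level=required \
  --authname=pureuser --secret='examplesecret'

# 4. Rescan the adapter again to log in with CHAP
esxcli storage core adapter rescan --adapter=vmhba37
```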

Read article

Verifying that ATS is Configured on a Datastore in a VMware Support Bundle

Confirming the SCSI-2 Reservations are Happening

If VMware is not configured per best-practice expectations (ATS enabled), we may see SCSI-2 reservations in our logs. Here is how to check whether that is happening:

  1. Run tgrep -c 'vol.pr_cache inserting registration' core* on Penguin Fuse in the date directory in question for the array. This will give you the number of SCSI-2 reservations created every hour:
    quelyn@i-9000a448:/logs/del-valle.k12.tx.us/dvisd-pure01-ct1/2015_10_27$ tgrep -c 'vol.pr_cache inserting registration' core*
    core.log-2015102700.gz:1875
    core.log-2015102701.gz:1798
    core.log-2015102702.gz:1827
    core.log-2015102703.gz:1817
    core.log-2015102704.gz:1860
    core.log-2015102705.gz:1812
    core.log-2015102706.gz:1818
    core.log-2015102707.gz:2577
    core.log-2015102708.gz:8181
    core.log-2015102709.gz:15131
    core.log-2015102710.gz:21826
    core.log-2015102711.gz:19140
    core.log-2015102712.gz:12044
    core.log-2015102713.gz:13451
    core.log-2015102714.gz:22995
    core.log-2015102715.gz:33136
    core.log-2015102716.gz:18587
    core.log-2015102717.gz:5900
    core.log-2015102718.gz:7324
    core.log-2015102719.gz:2541
    core.log-2015102720.gz:2213
    core.log-2015102721.gz:1850
    core.log-2015102722.gz:1807
    core.log-2015102723.gz:1851
  2. Running the same command without '-c' shows which LUNs are being locked (see the vol numbers in the output below). This can yield many lines of output, so you may want to do this per log file:
    quelyn@i-9000a448:/logs/del-valle.k12.tx.us/dvisd-pure01-ct1/2015_10_27$ tgrep 'vol.pr_cache inserting registration' core*
    core.log-2015102700.gz:Oct 26 23:18:51.394 7FB1ACCF3700 I     vol.pr_cache inserting registration, seq 3097137672 vol 69674 i_t 20000025B5AA000E-spc2-0 res_type 15
    core.log-2015102700.gz:Oct 26 23:18:51.431 7FB1AD5F5700 I     vol.pr_cache inserting registration, seq 3097137673 vol 69673 i_t 20000025B5BB000E-spc2-0 res_type 15
    core.log-2015102700.gz:Oct 26 23:18:51.457 7FB1AE378700 I     vol.pr_cache inserting registration, seq 3097137674 vol 69662 i_t 20000025B5AA000E-spc2-0 res_type 15
    core.log-2015102700.gz:Oct 26 23:18:51.483 7FB1AE378700 I     vol.pr_cache inserting registration, seq 3097137675 vol 69661 i_t 20000025B5AA000E-spc2-0 res_type 15
    core.log-2015102700.gz:Oct 26 23:18:51.503 7FB1A73FC700 I     vol.pr_cache inserting registration, seq 3097137676 vol 69676 i_t 20000025B5AA000E-spc2-0 res_type 15
    core.log-2015102700.gz:Oct 26 23:18:51.522 7FB1AF7FD700 I     vol.pr_cache inserting registration, seq 3097137677 vol 69667 i_t 20000025B5BB000E-spc2-0 res_type 15
  3. You can then run the pslun command to determine the volume name:
    quelyn@i-9000a448:/logs/del-valle.k12.tx.us/dvisd-pure01-ct1/2015_10_27$ pslun
    
    Volume Name                              pslun Name
    --------------------------------         ----------
    PURE-STR-LUN01                           pslun69648
    PURE-STR-LUN02                           pslun69660
    PURE-STR-LUN03                           pslun69661
    PURE-STR-LUN04                           pslun69662
    PURE-STR-LUN05                           pslun69664
    PURE-STR-LUN06                           pslun69665
    PURE-STR-LUN07                           pslun69666
    PURE-STR-LUN08                           pslun69667
    PURE-STR-LUN09                           pslun69668
    PUR-STR-LUN10                            pslun69669
    PUR-STR-LUN11                            pslun69670
    PURE-STR-LUN12                           pslun69671
    PURE-STR-LUN13                           pslun69672
    PURE-STR-LUN14                           pslun69673
    PURE-STR-LUN15                           pslun69674
    PURE-STR-LUN16                           pslun69675
    PURE-STR-LUN17                           pslun69676
    PURE-STR-LUN18                           pslun69677
    PURE-STR-LUN19                           pslun69678
  4. To determine which hosts are creating the SCSI-2 reservations, we will need a VMware bundle. The customer can send this to us via FTP.
  5. Once the bundle is uploaded, please prepare it for analysis as per KB: Retrieving Customer Logs from the FTP.
  6. Run the VM script that jhop created against the bundles to check the global configuration of VAAI ATS (see Wiki: VMware vSphere Overview and Troubleshooting):
    /home/jhop/python/Mr_VMware.py
  7. If we are still seeing SCSI-2 reservations after confirming that VAAI ATS is enabled globally, we will want to check each volume individually. Proceed to the next section.
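tgrep and pslun are internal support tools; on a plain Linux box the same counts can be pulled from the gzipped core logs with standard zgrep and awk. The function names here are illustrative:

```shell
# Count 'inserting registration' events per core log (one gzipped log per hour)
count_pr_registrations() {
  zgrep -c 'vol.pr_cache inserting registration' "$@"
}

# Break the events down by volume ID ("vol NNNNN" in each log line) so the
# noisiest LUNs stand out; map the IDs back to names with pslun as in step 3
vols_with_pr_registrations() {
  zgrep -h 'vol.pr_cache inserting registration' "$@" |
    awk '{for (i = 1; i < NF; i++) if ($i == "vol") print $(i + 1)}' |
    sort | uniq -c | sort -rn
}

# Example:
#   count_pr_registrations core.log-2015102700.gz core.log-2015102701.gz
#   vols_with_pr_registrations core.log-*.gz
```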

Identifying datastore ATS Configuration on VMware ESXi

Step 1:

Since we only care about datastores (not Raw Device Mappings (RDMs)) on Pure Storage, we will find the applicable LUNs in the "esxcfg-scsidevs -m.txt" file under the "commands" folder in a VMware Support Bundle. Below is an example of what a line from that file looks like:

(Screenshot: an example line of esxcfg-scsidevs -m output.)

There are several things we want to identify from this output. The first is the "NAA Identifier": anything starting with "naa.624a937" is a Pure Storage LUN. Once we have a Pure Storage LUN, we then want to take note of its "VMFS UUID" (e.g., 53c80075-7ddcc5ba-7d03-0025b5000080). We focus on this instead of the "User-Friendly Name" because customers can choose any name they want for that option. With the VMFS UUID we are guaranteed to be referring to the same Pure Storage LUN, since it is a uniquely generated ID assigned to each individual LUN.

Step 2:

Once we have this information, the next step is to find the "vmkfstools" text file that contains the file system information for this device; it is also found in the "commands" folder you are already in. An example of the text file name is as follows:

vmkfstools_-P--v-10-vmfsvolumes 53c80075-7ddcc5ba-7d03-0025b5000080 .txt

Notice that our "VMFS UUID" is contained in the file name. We can now search this file for the "Mode" it is running in. If the datastore is configured properly, the line will look as follows:

Mode: public ATS-only

If the datastore is not configured properly it will look as follows:

Mode: public

If the datastore is showing a "public" mode, then we know that this datastore is misconfigured and we will be receiving an excessive number of SCSI-2 reservations from the ESXi hosts. This means that locking tasks are not being offloaded to the FlashArray.

If the customer has a lot of LUNs, this process can take a while, so it is best to script it. Below is a simple one-liner that will do this for you, instead of going through each LUN one by one:

grep "naa.624a937" esxcfg-scsidevs_-m.txt | awk '{print $3}' > Pure-LUNs.txt;while read f;do cat vmkfs*$f.txt |grep -e "Mode:" -e "naa.624a937";echo;done < Pure-LUNs.txt

NOTE: This command can be copied and pasted and used against any ESXi host running 5.0 or higher, as long as you are in the "commands" folder of the ESXi host you want to verify.
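The same one-liner, written out as a function for readability (run from the bundle's "commands" directory; the function name is illustrative):

```shell
# Print the VMFS mode for every Pure Storage LUN found in the support
# bundle's command output files (run from the bundle's "commands" folder)
ats_modes() {
  grep "naa.624a937" esxcfg-scsidevs_-m.txt | awk '{print $3}' |
  while read -r uuid; do
    # each datastore has a vmkfstools output file containing its VMFS UUID
    grep -e "Mode:" -e "naa.624a937" vmkfs*"$uuid"*.txt
    echo
  done
}
```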


Resolution

Once we have the misconfigured LUNs identified the customer can use the VMware KB listed below to resolve the issue:

Link to VMware KB: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1033665

Follow the steps outlined in the section "Changing an ATS-only volume to public", but change the "0" in the listed command to a "1" to turn ATS-only back on. It is important that the customer read all of the steps, notes, and caveats before continuing.

Alternatively, and much less of a headache, the customer can simply create a new LUN on the FlashArray and mount it to the applicable ESXi host(s). Once the new VMFS datastore is created, they can verify that ATS is properly configured. After confirming the new datastore has ATS enabled, they can migrate the virtual machines from the misconfigured datastore to the new one. Once everything has been moved and confirmed working, they can destroy the old LUN. This is much easier and is typically what should be recommended as the first step.

If there are any questions, please reach out to a fellow colleague or a Support Escalations team member for assistance.
