Should nodes in a MetroCluster DR Group be of the same controller type?

Applies to

  • MetroCluster
  • ONTAP 9

Question(s) and Answer(s)

Should nodes in a MetroCluster DR Group be of the same controller type?

In Data ONTAP, the controllers forming a High Availability (HA) pair or a MetroCluster DR group must be of the same type.
If the system consists of different controller types, contact your NetApp Sales Representative to resolve the issue.
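To confirm whether the nodes are of the same type, the controller model of each node can be listed from the cluster shell. A minimal sketch (the node names and models below are placeholders, and the output is abbreviated):

```
cluster1::> node show -fields model
node        model
----------- --------
cluster1-01 FAS8060
cluster1-02 FAS8060
2 entries were displayed.
```

All nodes within one DR group should report the same model.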

Controller continually placed offline by alternate

Applies to

Controller: 2660
Firmware Version: 07.75.36.30

How to change the NTP server settings in ATTO FibreBridge 6500N

Applies to

  • MetroCluster
  • ATTO 6500N FibreBridge

Description

This article describes how to change the SNTP (time server) settings on an ATTO 6500N FibreBridge.

Apply failed for Object: igroup Method: baseline. Reason: Cannot create initiator group X because the initiator is a member of the initiator group Y with conflicting ostype

Applies to

  • MetroCluster
  • ONTAP 9

Issue

The configuration state of a Vserver in a MetroCluster is marked as degraded due to an ostype conflict for an initiator in an igroup belonging to the Vserver.

Cluster : clusterA
Vserver : vserver1
Partner Vserver : vserver1-mc
Configuration State : degraded
Stream Status : operation-failed
Corrective Action : Resynchronize the configuration using "metrocluster vserver resync" command. Contact technical support for assistance if configuration-state does not change to healthy.
...
Failed Reason : Apply failed for Object: igroup Method: baseline. Reason: Cannot create initiator group "igroup_windows" because the initiator "initiator1" is a member of the initiator group "igroup_vmware" with conflicting ostype "vmware".
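One way to clear such a conflict, sketched below with the object names from the example output, is to make the initiator's igroup membership consistent (for example, remove the initiator from the conflicting igroup, or recreate the igroup with a matching ostype) and then resynchronize the Vserver configuration as the corrective action suggests:

```
clusterA::> lun igroup remove -vserver vserver1 -igroup igroup_vmware -initiator initiator1
clusterA::> metrocluster vserver resync -cluster clusterA -vserver vserver1
```

Whether the initiator belongs in the vmware or the windows igroup depends on the actual host operating system, so verify the host before removing it from an igroup.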

How do quotas display in the "df" output of NFS clients?

Applies to

Data ONTAP Quotas

Answer

There are three types of quotas: tree, user, and group. Data ONTAP does not apply group quotas to Windows IDs.

  • These quotas are governed using the quota rules.
  • A quota rule is always specific to a volume.
  • Quota rules have no effect until quotas are activated on the volume defined in the quota rule.
  • A quota policy is a collection of quota rules for all the volumes of an SVM.
  • Quota policies are not shared among SVMs. An SVM can have up to five quota policies, which enable you to have backup copies of quota policies.
  • One quota policy is assigned to an SVM at any given time.
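As a sketch of the rules above, a tree quota rule is first created in the policy assigned to the SVM, and it only takes effect once quotas are activated on the volume (command form per ONTAP; the names match the example that follows):

```
cluster-usa::> volume quota policy rule create -vserver nfssvm -policy-name default -volume nfs -type tree -target "" -disk-limit 1GB
cluster-usa::> volume quota on -vserver nfssvm -volume nfs
```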

Tree quotas, when applied, are reflected in the df output of both NFS and CIFS clients.

Example:


cluster-usa::> version
NetApp Release 8.3.2P7: Mon Oct 03 10:59:56 UTC 2016

cluster-usa::>
cluster-usa::> vol size nfs
vol size: Volume "nfssvm:nfs" has size 50g.

cluster-usa::>
cluster-usa::> qtree show nfs
Vserver    Volume        Qtree        Style        Oplocks   Status
---------- ------------- ------------ ------------ --------- --------
nfssvm     nfs           ""           unix         enable    normal
nfssvm     nfs           unix         unix         enable    normal
3 entries were displayed.

cluster-usa::>
cluster-usa::> quota policy rule show -volume nfs

Vserver: nfssvm            Policy: default           Volume: nfs

                                              Soft             Soft
                         User      Disk      Disk    Files    Files
Type   Target    Qtree   Mapping   Limit     Limit   Limit    Limit   Threshold
-----  --------  ------- -------  --------  -------  ------  -------  ---------
tree   unix      ""      -             1GB        -       -        -          -

cluster-usa::>


So a tree quota of 1G is set on a 50G volume. Now mount the qtree from the Linux client and check the size:

[root@sj ~]# mount nfssvm:/nfs/unix /mnt
[root@sj ~]#
[root@sj ~]# df -h /mnt
Filesystem            Size  Used Avail Use% Mounted on
nfssvm:/nfs/unix      1.0G     0  1.0G   0% /mnt
[root@sj ~]#


User quotas, on the other hand, are reflected only in CIFS mappings (the mapped drive size), not in the df output of NFS clients.

Example:


cluster-usa::> vol size vol_user
vol size: Volume "nfssvm:vol_user" has size 20g.

cluster-usa::>
cluster-usa::> quota policy rule show -volume vol_user

Vserver: nfssvm            Policy: default           Volume: vol_user

                                              Soft             Soft
                         User      Disk      Disk    Files    Files
Type   Target    Qtree   Mapping   Limit     Limit   Limit    Limit   Threshold
-----  --------  ------- -------  --------  -------  ------  -------  ---------
user   5839      home    off           1GB        -       -        -          -
user   RTP2K8DOM2\jsiva
                 home    off           1GB        -       -        -          -
2 entries were displayed.

cluster-usa::>


In the above example, on a 20G volume, a disk quota limit of 1G is set for both the Windows user RTP2K8DOM2\jsiva and the UNIX user ID 5839 on the qtree called home.
If the qtree has NTFS security style and the CIFS user maps the qtree, the mapped drive will be 1G in size under Windows.

However on UNIX, the NFS mount of the qtree will still show the full size of the volume when a user quota is set.


[root@sj ~]# mount nfssvm:/vol_user/home /home
[root@sj ~]#
[root@sj ~]# df -h /home
Filesystem            Size  Used Avail Use% Mounted on
nfssvm:/vol_user/home
19G  4.5M   19G   1% /home
[root@sj ~]#


When the user jsiva (uid 5839) attempts to write more data than the 1G quota allows, the write fails. On an NFS client, the quota-exceeded error (EDQUOT) typically surfaces when cached writes are flushed, which is why dd reports it while closing the file:

[root@sj ~]# su - jsiva
bash-4.1$
bash-4.1$ df -h .
Filesystem            Size  Used Avail Use% Mounted on
nfssvm:/vol_user/home
19G  4.5M   19G   1% /home
bash-4.1$
bash-4.1$ dd if=/dev/zero of=/home/jsiva/myfile bs=10485760 count=120
dd: closing output file `/home/jsiva/myfile': Input/output error
bash-4.1$


Note that rquota must be enabled on the controller side for the NFS client to be able to query the user quota:

cluster-usa::> vserver nfs modify -vserver nfssvm -rquota enabled

cluster-usa::>

bash-4.1$ quota jsiva
Disk quotas for user jsiva (uid 5839):
Filesystem  blocks   quota   limit   grace   files   quota   limit   grace
nfssvm:/vol_user/home
1048532  1048576 1048576               2  4294967295 4294967295
bash-4.1$
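The quota(1) output above reports usage and limits in 1 KB blocks. A quick shell sanity check confirms that the 1048576-block limit corresponds to the 1GB disk limit configured on the controller:

```shell
# quota(1) reports in 1 KB blocks; 1048576 KB = 1 GiB
blocks=1048576
echo "$((blocks / 1024 / 1024)) GiB"
# prints "1 GiB"
```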

How to handle single bit error counters on ATTO FibreBridge bridges

Applies to

  • MetroCluster
  • ATTO FibreBridge

Description

This article describes how to handle ATTO FibreBridge bridges that report an increasing correctable (single-bit) memory error counter.
