Data Extraction - MQTT

Procedure

- Click Add New Field.
- Click Add New Field to add another topic, or click Next to go to the Category Assignment panel.
Product Release Date: 2022-09-28
Last updated: 2022-09-28
Last updated: 2022-11-24
Karbon Platform Services is a Kubernetes-based multicloud platform as a service that enables rapid development and deployment of microservice-based applications. These applications can range from simple stateful containerized applications to complex web-scale applications across any cloud.
In its simplest form, Karbon Platform Services consists of a Service Domain encapsulating a project, application, and services infrastructure, and other supporting resources. It also incorporates project and administrator user access and cloud and data pipelines to help converge edge and cloud data.
With Karbon Platform Services, you can:
This data can be stored at the Service Domain or published to the cloud. You can then create intelligent applications using data connectors and machine learning modules to consume the collected data. These applications can run on the Service Domain or the cloud where you have pushed collected data.
Nutanix provides the Service Domain initially as a VM appliance hosted in an AOS AHV or ESXi cluster. You manage infrastructure, resources, and Karbon Platform Services capabilities in a console accessible through a web browser.
As part of initial and ongoing configuration, you can define two user types: an infrastructure administrator and a project user. The cloud management console and user experience help create a more intuitive experience for infrastructure administrators and project users.
Karbon Platform Services includes these ready-to-use built-in services, which provide an advantage over self-managed services:
These services are enabled by default on each Service Domain. All services have monitoring and status capabilities.
Ingress controller configuration and management is now available from the cloud management console (as well as from the Karbon Platform Services kps command line). Options to enable and disable the Ingress controller are available in the user interface.
Traefik or Nginx-Ingress. Content-based routing, load balancing, SSL/TLS termination. If your project requires Ingress controller routing, you can choose the open source Traefik router or the NGINX Ingress controller to enable on your Service Domain. You can only enable one Ingress controller per Service Domain.
Istio. Provides traffic management, secure connection, and telemetry collection for your applications.
Karbon Platform Services allows and defines two user types: an infrastructure administrator and a project user. An infrastructure administrator can create both user types.
Infrastructure administrator creates and administers most aspects of Karbon Platform Services. The infrastructure administrator provisions physical resources, Service Domains, services, data sources, and public cloud profiles. This administrator allocates these resources by creating projects and assigning project users to them. This user has create/read/update/delete (CRUD) permissions for:
When an infrastructure administrator creates a project and then adds themselves as a user for that project, they are assigned the roles of project user and infrastructure administrator for that project.
When an infrastructure administrator adds another infrastructure administrator as a user to a project, that administrator is assigned the project user role for that project.
A project user can view and use projects created by the infrastructure administrator. This user can view everything that an infrastructure administrator assigns to a project. Project users can utilize physical resources, as well as deploy and monitor data pipelines and Kubernetes applications.
The project user has project-specific create/read/update/delete (CRUD) permissions: the project user can create, read, update, and delete the following and associate it with an existing project:
The web browser-based Karbon Platform Services cloud management console enables you to manage infrastructure and related projects, with specific management capability dependent on your role (infrastructure administrator or project user).
You can log on with your My Nutanix or local user credentials.
The default view for an infrastructure administrator is the Dashboard. Click the menu button in the view to expand and display all available pages in this view.
The default view for a project user is the Dashboard .
After you log on to the cloud management console, you are presented with the main Dashboard page as a role-specific landing page. You can also show this information at any time by clicking Dashboard under the main menu.
Each Karbon Platform Services element (Service Domain, function, data pipeline, and so on) includes a dashboard page that includes information about that element. It might also include information about elements associated with that element.
The element dashboard view also enables you to manage that element. For example, click Projects and select a project. Click Kubernetes Apps and click an application in the list. The application dashboard is displayed along with an Edit button.
To delete a Service Domain that does not have any associated data sources, click Infrastructure > Service Domains, select a Service Domain from the list, then click Remove. Deleting a multinode Service Domain deletes all nodes in that Service Domain.
The Karbon Platform Services management console includes a Quick Start menu next to your user name. Depending on your role (infrastructure administrator or project user), you can quickly create infrastructure or apps and data. Scroll down to see items you can add for use with projects.
These tasks assume you have already completed the following prerequisites. Ensure that any network-connected devices are assigned static IP addresses.
The Quick Start Menu lists the common onboarding tasks for the infrastructure administrator. It includes links to infrastructure-related resource pages. You can also go directly to any infrastructure resource from the Infrastructure menu item. As the infrastructure administrator, you need to create the following minimum infrastructure.
Create and deploy a Service Domain cluster that consists of a single node.
To add (that is, create and deploy) a multinode Service Domain consisting of three or more nodes, see Manage a Multinode Service Domain.
If you deploy the Service Domain node as a VM, the procedure described in Creating and Starting the Service Domain VM can provide the VM ID (serial number) and IP address.
For example, you could set a secret variable key named SD_PASSWORD with a value of passwd1234. For an example of how to use existing environment variables for a Service Domain in application YAML, see Using Service Domain Environment Variables - Example. See also Configure Service Domain Environment Variables.
Create categories of grouped attributes you can specify when you create a data source or pipeline.
You can add one or more data sources (a collection of sensors, gateways, or other input devices providing data) to associate with a Service Domain.
Each defined data source consists of the following:
Certificates downloaded from the cloud management console have an expiration date 30 years from the certificate creation date. Download the certificate ZIP file each time you create an MQTT data source. Nutanix recommends that you use each unique set of these security certificates and keys for any MQTT data sources you add.
When naming entities, up to 200 alphanumeric characters are allowed.
rtsp://username:password@ip-address/. For example: rtsp://userproject2:
In the next step, you will specify one or more streams.
https://index.docker.io/v1/ or registry-1.docker.io/distribution/registry:2.1

https://aws_account_id.dkr.ecr.region.amazonaws.com
As an infrastructure administrator, you can create infrastructure users or project users. Users without My Nutanix credentials log on as a local user.
Each Service Domain image is preconfigured with security certificates and public/private keys.
When you create an MQTT data source, you generate and download a ZIP file that contains X.509 sensor certificate (public key) and its private key and Root CA certificates. Install these components on the MQTT enabled sensor device to securely authenticate the connection between an MQTT enabled sensor device and Service Domain. See your vendor document for your MQTT enabled sensor device for certificate installation details.
Certificates downloaded from the Karbon Platform Services management console have an expiration date 30 years from the certificate creation date. Download the certificate ZIP file each time you create an MQTT data source. Nutanix recommends that you use each unique set of these security certificates and keys for any MQTT data sources you add.
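If you want to confirm the validity window of a certificate, openssl can print its expiration date. The following is a self-contained sketch: the self-signed certificate generated here is only a stand-in for the real downloaded sensor certificate, and the file names are placeholders.

```shell
# Generate a throwaway self-signed certificate as a stand-in for a
# downloaded sensor certificate (file names are illustrative only).
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem \
  -days 365 -nodes -subj "/CN=mqtt-sensor" 2>/dev/null

# Print the certificate's expiration date (the notAfter field) to
# confirm its validity window before installing it on a device.
openssl x509 -enddate -noout -in cert.pem
```

Run the same `openssl x509 -enddate` command against a certificate extracted from the downloaded ZIP file to verify its 30-year expiration date.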
The Karbon Platform Services cloud management console provides a rich administrative control plane to manage your Service Domain and its infrastructure. The topics in this section describe how to create, add, and upgrade a Service Domain.
In the cloud management console, go to Infrastructure > Service Domains to add a VM-based Service Domain. You can also view health status, CPU/Memory/Storage usage, version details, and more information for every service domain.
In the cloud management console, go to Administration > Upgrades to upgrade your existing Service Domains. This page provides you with various levels of control and granularity over your maintenance process. At your convenience, download new versions for all or specific Service Domains and upgrade them with "1-click".
You can now onboard a multinode Service Domain by using Nutanix Karbon as your infrastructure provider to create a Service Domain Kubernetes cluster. To do this, use Karbon on Prism Central with the kps command line and cloud management console Create a Service Domain workflow. See Onboarding a Multinode Service Domain By Using Nutanix Karbon. (You can also continue to use other methods to onboard and create a Service Domain, as described in Onboarding and Managing Your Service Domain.)
For advanced Service Domain settings, the Nutanix public GitHub repository includes a README file describing how to use the kps command line and the required YAML configuration file for the cluster.
This public Github repository at https://github.com/nutanix/karbon-platform-services/tree/master/cli also describes how to install the command line and includes sample YAML files for use with applications, data pipelines, data sources, and functions.
The Karbon Platform Services Release Notes include information about any new and updated features for the Service Domain. You can create one or more Service Domains depending on your requirements and manage them from the Karbon Platform Services management console.
The Service Domain is available as a qcow disk image provided by Nutanix for hosting the VM in an AOS cluster running AHV.
The Service Domain is also available as an OVA disk image provided by Nutanix for hosting the VM in a non-Nutanix VMware vSphere ESXi cluster. To deploy a VM from an OVA file on vSphere, see the documentation at the VMware web site describing how to deploy a virtual machine from an OVA file for your ESXi version.
Each Service Domain you create by using these images is configured with X.509 security certificates.
If your network requires that traffic flow through an HTTP/HTTPS proxy, see HTTP/HTTPS Proxy Support for a Service Domain VM.
Download the Service Domain VM image file from the Nutanix Support portal Downloads page. This table describes the available image file types.
Service Domain Image Type | Use
---|---
QCOW2 | Image file for hosting the Service Domain VM on an AHV cluster
OVA | Image file for hosting the Service Domain VM on vSphere
EFI RAW compressed file | RAW file in gzipped TAR format for bare-metal installation, where the hosting machine uses an Extensible Firmware Interface (EFI) BIOS
RAW compressed file | RAW file in gzipped TAR format for bare-metal installation, where the hosting machine uses a legacy (non-EFI) BIOS
AWS RAW uncompressed file | Uncompressed RAW file for hosting the Service Domain on Amazon Web Services (AWS)
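Before a bare-metal installation, it is worth confirming that the compressed RAW archive downloaded from the portal is intact. This self-contained demo builds a dummy gzipped TAR archive in place of the real download (the file names mirror the real image but are placeholders):

```shell
# Create a dummy RAW file and archive it, standing in for the
# downloaded service-domain-image.raw.tgz from the portal.
dd if=/dev/zero of=service-domain-image.raw bs=1M count=1 status=none
tar -czf service-domain-image.raw.tgz service-domain-image.raw

# gzip -t verifies the compressed stream without extracting it.
gzip -t service-domain-image.raw.tgz && echo "archive OK"

# tar -tzf lists the contained RAW file so you can confirm its name
# before writing it to disk.
tar -tzf service-domain-image.raw.tgz
```

Running the same two verification commands against the real download catches a truncated or corrupted transfer before you overwrite a disk.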
By default, in a single VM deployment, the Service Domain requires these resources to support Karbon Platform Services features. You can download the Service Domain VM image file from the Nutanix Support portal Downloads page.

VM Resource | Requirement
---|---
Environment | AOS cluster running AHV (an AOS-version-compatible version), where the Service Domain Infrastructure VM runs as a guest VM; or a VMware vSphere ESXi 6.0 or later cluster, where the Service Domain Infrastructure VM runs as a guest VM created from an OVA image file provided by Nutanix. The OVA image as provided by Nutanix runs virtual hardware version 11.
vCPUs | 8 single-core vCPUs
Memory | 16 GiB memory. You might require more memory as determined by your applications.
Disk storage | Minimum 200 GB storage. The Service Domain Infrastructure VM image file provides an initial disk size of 100 GiB (gibibytes). You might require more storage as determined by your applications. Before the first power-on of the VM, you can increase (but not decrease) the VM disk size.
GPUs | (Optional) GPUs as required by any application using them
Item | Requirement / Recommendation
---|---
Outbound ports | Allow connection for applications requiring outbound connectivity. Starting with Service Domain 2.2.0, Karbon Platform Services retrieves Service Domain package images from these locations; ensure that your firewall or proxy allows outbound Internet access to them. Allow outbound port 443 for the websocket connection to the management console and cloud providers.
NTP | Allow an outbound NTP connection for the network time protocol server.
HTTPS proxy | The Service Domain Infrastructure VM supports a network configuration that includes an HTTPS proxy. You can configure such a proxy as part of a cloud-init based method when deploying Service Domain Infrastructure VMs.
Service Domain Infrastructure VM static IP address | The Service Domain Infrastructure VM requires a static IP address as provided through a managed network when hosted on an AOS/AHV cluster. This requires: a configured network with one or more configured domain name servers (DNS) and optionally a DHCP server; integrated IP address management (IPAM), which you can enable when creating virtual networks for VMs in the Prism web console; and, optionally, a cloud-init script that specifies network details including a DNS server.
Miscellaneous | The cloud-init package is included in the Service Domain VM image to enable support for Nutanix Calm and its associated deployment automation features.
Real Time Streaming Protocol (RTSP) | Port 554 (default)
Onboarding the Service Domain VM is a three-step process:
If your network requires that traffic flow through an HTTP/HTTPS proxy, see HTTP/HTTPS Proxy Support for a Service Domain VM.
See also:
How to upload the Service Domain VM disk image file on AHV running in an AOS cluster.
This topic describes how to initially install the Service Domain VM on an AOS cluster by uploading the image file. For details about your cluster's AOS version and the procedures, see the Prism Web Console Guide.
To deploy a VM from an OVA file on vSphere, see the documentation at the VMware web site describing how to deploy a virtual machine from an OVF or OVA file for your ESXi version.
After uploading the Service Domain VM disk image file, create the Service Domain VM and power it on. After creating the Service Domain VM, note the VM IP address and ID in the VM Details panel. You will need this information to add your Service Domain in the Karbon Platform Services management console Service Domains page.
This topic describes how to create the Service Domain VM on an AOS cluster and power it on. For details about your cluster's AOS version and VM management, see the Prism Web Console Guide.
To deploy a VM from an OVA file on vSphere, see the VMware documentation for your ESXi version.
The most recent requirements for the Service Domain VM are listed in the Karbon Platform Services Release Notes.
If your network requires that traffic flow through an HTTP/HTTPS proxy, you can use a cloud-init script. See HTTP/HTTPS Proxy Support for a Service Domain VM.
$ sudo lshw -c disk
$ cd /media/ubuntu/drive_label
$ sudo tar -xOzvf service-domain-image.raw.tgz | sudo dd of=destination_disk bs=1M status=progress
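The tar-to-dd pipeline above decompresses the archive and streams the RAW image directly onto the destination disk. The following round-trip demo shows the same pattern with a dummy 2 MiB file in place of the real image, so it can be run safely (all file names are placeholders, and no real disk is written):

```shell
# Create a dummy RAW image and archive it the way the portal image
# is packaged (gzipped TAR).
dd if=/dev/zero of=raw.img bs=1M count=2 status=none
tar -czf service-domain-image.raw.tgz raw.img

# Extract the archive contents to stdout and stream them to a file,
# mirroring the tar -xOz | dd command used for the real disk.
tar -xOzf service-domain-image.raw.tgz | dd of=out.img bs=1M status=none

# Confirm the written image is byte-identical to the original.
cmp raw.img out.img && echo "images match"
```

In the real procedure, `of=` points at the destination disk device reported by `lshw -c disk`, so double-check that device name before running dd.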
$ aws s3 mb s3://raw-image-bkt
$ aws s3 cp service-domain-image.raw s3://raw-image-bkt
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": { "Service": "vmie.amazonaws.com" },
"Action": "sts:AssumeRole",
"Condition": {
"StringEquals":{
"sts:Externalid": "vmimport"
}
}
}
]
}
$ aws iam create-role --role-name vmimport --assume-role-policy-document file://trust-policy.json
{
"Version":"2012-10-17",
"Statement":[
{
"Effect": "Allow",
"Action": [
"s3:GetBucketLocation",
"s3:GetObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::raw-image-bkt",
"arn:aws:s3:::raw-image-bkt/*"
]
},
{
"Effect": "Allow",
"Action": [
"s3:GetBucketLocation",
"s3:GetObject",
"s3:ListBucket",
"s3:PutObject",
"s3:GetBucketAcl"
],
"Resource": [
"arn:aws:s3:::raw-image-bkt",
"arn:aws:s3:::raw-image-bkt/*"
]
},
{
"Effect": "Allow",
"Action": [
"ec2:ModifySnapshotAttribute",
"ec2:CopySnapshot",
"ec2:RegisterImage",
"ec2:Describe*"
],
"Resource": "*"
}
]
}
$ aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file://role-policy.json
{
"Description": "Karbon Platform Services Raw Image",
"Format": "RAW",
"UserBucket": {
"S3Bucket": "raw-image-bkt",
"S3Key": "service-domain-image.raw"
}
}
$ aws ec2 import-snapshot --description "exampletext" --disk-container "file://container.json"
$ aws ec2 describe-import-snapshot-tasks --import-task-ids task_id
$ aws ec2 describe-import-snapshot-tasks --import-task-ids task_id
{
"ImportSnapshotTasks": [
{
"Description": "Karbon Platform Services Raw Image",
"ImportTaskId": "import-task_id",
"SnapshotTaskDetail": {
"Description": "Karbon Platform Services Raw Image",
"DiskImageSize": "disk_size",
"Format": "RAW",
            "SnapshotId": "snapshot_ID",
"Status": "completed",
"UserBucket": {
"S3Bucket": "raw-image-bkt",
"S3Key": "service-domain-image.raw"
}
}
}
]
}
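The import task can take a while, so it is common to poll `describe-import-snapshot-tasks` until the status reaches `completed` before registering the image. The sketch below shows that loop; `get_status` is a stub so the example is runnable without AWS credentials, and in a real run it would wrap the aws call shown in the comment (task_id remains a placeholder):

```shell
# Stub standing in for the real status query:
#   aws ec2 describe-import-snapshot-tasks --import-task-ids task_id \
#     --query 'ImportSnapshotTasks[0].SnapshotTaskDetail.Status' --output text
get_status() { echo "completed"; }

# Poll until the snapshot import task reports "completed".
until [ "$(get_status)" = "completed" ]; do
  sleep 30   # the import typically takes several minutes
done
echo "snapshot import finished"
```

Once the loop exits, the `SnapshotId` from the task detail is the value to pass to `aws ec2 register-image`.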
$ aws ec2 register-image --virtualization-type hvm \
--name "Karbon Platform Services Service Domain Image" --architecture x86_64 \
--root-device-name "/dev/sda1" --block-device-mappings \
"[{\"DeviceName\": \"/dev/sda1\", \"Ebs\": {\"SnapshotId\": \"snapshot_ID\"}}]"
$ aws ec2 describe-instances --instance-id instance_id --query 'Reservations[].Instances[].[PublicIpAddress]' --output text | sed '$!N;s/\n/ /'
$ cat /config/serial_number.txt
$ route -n
Attach a cloud-init script to configure HTTP/HTTPS proxy server support.
If your network policies require that all HTTP network traffic flow through a proxy server, you can configure a Service Domain to use an HTTP proxy. When you create the Service Domain VM, attach a cloud-init script with the proxy server details. When you then power on the VM and it fully starts, it will include your proxy configuration.
If you require a secure proxy (HTTPS), use the cloud-init script to upload SSL certificates to the Service Domain VM.
This script creates an HTTP/HTTPS proxy server configuration on the Service Domain VM after you create and start the VM. Note that CACERT_PATH= in the first content spec is optional in this case, as it is already specified in the second path spec.
#cloud-config
#vim: syntax=yaml
write_files:
  - path: /etc/http-proxy-environment
    content: |
      HTTPS_PROXY="http://ip_address:port"
      HTTP_PROXY="http://ip_address:port"
      NO_PROXY="127.0.0.1,localhost"
      CACERT_PATH="/etc/pki/ca-trust/source/anchors/proxy.crt"
  - path: /etc/systemd/system/docker.service.d/http-proxy.conf
    content: |
      [Service]
      Environment="HTTP_PROXY=http://ip_address:port"
      Environment="HTTPS_PROXY=http://ip_address:port"
      Environment="NO_PROXY=127.0.0.1,localhost"
  - path: /etc/pki/ca-trust/source/anchors/proxy.crt
    content: |
      -----BEGIN CERTIFICATE-----
      PASTE CERTIFICATE DATA HERE
      -----END CERTIFICATE-----
runcmd:
  - update-ca-trust force-enable
  - update-ca-trust extract
  - yum-config-manager --setopt=proxy=http://ip_address:port --save
  - systemctl daemon-reload
  - systemctl restart docker
  - systemctl restart sherlock_configserver
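The environment file written by the cloud-init script above uses plain KEY="value" lines, so it can be sourced by a shell to verify its contents. This self-contained sketch builds a local copy of that file (the proxy address is a placeholder) and confirms it is sourceable; on the real VM you would inspect /etc/http-proxy-environment instead:

```shell
# Local stand-in for /etc/http-proxy-environment as written by cloud-init
# (the proxy IP address and port here are placeholders).
cat > http-proxy-environment <<'EOF'
HTTPS_PROXY="http://10.0.0.1:3128"
HTTP_PROXY="http://10.0.0.1:3128"
NO_PROXY="127.0.0.1,localhost"
EOF

# Source the file and confirm the proxy variables are set.
. ./http-proxy-environment
echo "$HTTP_PROXY"
```

A quick `docker info` on the booted VM is another way to confirm the Docker daemon picked up the proxy settings from the systemd drop-in file.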
Create and deploy a Service Domain cluster that consists of three or more nodes.
A multinode Service Domain is a cluster initially consisting of a minimum of three leader nodes. Each node is a single Service Domain VM hosted in an AHV cluster.
Creating and deploying a multinode Service Domain is a three-step process:
The Service Domain image version where Karbon Platform Services introduces this feature is described in the Karbon Platform Services Release Notes.
Starting with new Service Domain version 2.3.0 deployments, high availability support for the Service Domain is now implemented through the Kubernetes API server (kube-apiserver). This support is specific to new multinode Service Domain 2.3.0 deployments. When you create a multinode Service Domain to be hosted in a Nutanix AHV cluster, you must specify a Virtual IP Address (VIP), which is typically the IP address of the first node you add.
Each node requires access to shared storage from an AOS cluster. Ensure that you meet the following requirements to create a storage profile; adding a multinode Service Domain requires these details.
On your AOS cluster:
For example, you have upgraded three older single-node Service Domains to a multinode image version. You cannot create a multinode Service Domain from these nodes.
For example, you have upgraded two older single-node Service Domains to a multinode image version. You have a newly created multinode compatible single node. You cannot add these together to form a new multinode Service Domain.
Create and deploy a Service Domain cluster that consists of three or more nodes.
Starting with new Service Domain version 2.3.0 deployments, high availability support for the Service Domain is now implemented through the Kubernetes API server (kube-apiserver). This support is specific to new multinode Service Domain 2.3.0 deployments.
To enable the HA kube-apiserver support, ensure that the VIP address is part of the same subnet as the Service Domain VMs and the VIP address is unique (that is, has not already been allocated to any VM). Otherwise, the Service Domain will not enable this feature.
Also ensure that the VIP address in this case is not part of any cluster IP address pool range that you have specified when you created a virtual network for guest VMs in the AHV cluster. That is, the VIP address must be outside this IP pool address range. Otherwise, creation of the Service Domain in this case will fail.
For example, you could set a secret variable key named SD_PASSWORD with a value of passwd1234. For an example of how to use existing environment variables for a Service Domain in application YAML, see Using Service Domain Environment Variables - Example. See also Configure Service Domain Environment Variables.
Add nodes to an existing multinode Service Domain.
Remove worker nodes from a multinode Service Domain. Any node added to an existing three-node Service Domain is considered a worker node.
You can now onboard a multinode Service Domain by using Nutanix Karbon as your infrastructure provider to create a Service Domain Kubernetes cluster.
For advanced Service Domain settings, the Nutanix public Github repository includes a README file describing how to use the kps command line and the required YAML configuration file for the cluster. This public Github repository at https://github.com/nutanix/karbon-platform-services/tree/master/cli also describes how to install the kps command line and includes sample YAML files for use with applications, data pipelines, data sources, and functions.
This information populates the kps command line options and parameters in the next step.
Your network bandwidth might affect how long it takes to completely download the latest Service Domain version. Nutanix recommends that you perform any upgrades during your scheduled maintenance window.
Upgrade your existing Service Domain VM by using the Upgrades page in the cloud management console. From Upgrades, you can see available updates that you can download and install on one or more Service Domains of your choosing.
Upgrading the Service Domain is a two-step process where you:
Link | Use Case
---|---
Service Domains | "1-click" download or upgrade for all upgrade-eligible Service Domains.
Download and upgrade on all eligible | Use this workflow to download an available version to all Service Domains eligible to be upgraded. You can then decide when you want to upgrade each Service Domain to the downloaded version. See Upgrading All Service Domains.
Download and upgrade on selected | Use this workflow to download an available version to one or more Service Domains that you select and that are eligible to be upgraded. This option appears after you select one or more Service Domains. After downloading an available Service Domain version, upgrade one or more Service Domains when convenient. See Upgrading Selected Service Domains.
Task History > View Recent History | See Checking Upgrade Task History. View Recent History appears in the Service Domains page list for each Service Domain and shows a status summary.
A project is an infrastructure, apps, and data collection created by the infrastructure administrator for use by project users.
When an infrastructure administrator creates a project and then adds themselves as a user for that project, they are assigned the roles of project user and infrastructure administrator for that project. When an infrastructure administrator adds another infrastructure administrator as a user to a project, that administrator is assigned the project user role for that project.
A project can consist of:
When you add a Service Domain to a project, all resources such as data sources associated with the Service Domain are available to the users added to the project.
The Projects page lets an infrastructure administrator create a new project and lists projects that administrators can update and view.
For project users, the Projects page lists projects created and assigned by the infrastructure administrator, to which project users can add apps and data. Project users can view and update any projects assigned by the administrator with applications, data pipelines, and so on. Project users cannot remove a project.
When you click a project name, the project Summary dashboard is displayed and shows resources in the project.
You can click any of the project resource menu links to edit or update existing resources, or create and add resources to a project. For example, you can edit an existing data pipeline in the project or create a new one and assign it to the project. As another example, click Kafka to show details for the Kafka data service associated with the project (see Viewing Kafka Status).
As an infrastructure administrator, create a project. To complete this task, log on to the cloud management console.
Update an existing project. To complete this task, log on to the cloud management console.
As an infrastructure administrator, delete a project. To complete this task, log on to the cloud management console.
The Karbon Platform Services cloud infrastructure provides services that are enabled by default. It also provides access to services that you can enable for your project.
The platform includes these ready-to-use services, which provide an advantage over self-managed services:
Copyright 2022 Nutanix, Inc. Nutanix, the Nutanix logo and all Nutanix product and service names mentioned herein are registered trademarks or trademarks of Nutanix, Inc. in the United States and other countries. All other brand names used herein are for identification purposes only and may be the trademarks of their respective holder(s) and no claim of rights is made therein.
Enable or disable services associated with your project.
Kafka is available as a data service through your Service Domain.
The Kafka data service is available for use within a project's applications and data pipelines, running on a Service Domain hosted in your environment. The Kafka service offering from Karbon Platform Services provides the following advantages over a self-managed Kafka service:
Information about application requirements and a sample YAML application file.
Create an application for your project as described in Creating an Application and specify the application attributes and configuration in a YAML file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment
  labels:
    app: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: some.container.registry.com/myapp:1.7.9
          ports:
            - containerPort: 80
          env:
            - name: KAFKA_ENDPOINT
              value: {{.Services.Kafka.Endpoint}}
Field Name | Value or Sub-field | Description
---|---|---
kind | Deployment | Specify the resource type. Here, use Deployment.
metadata | name | Provide a name for your deployment.
metadata | labels | Provide at least one label. Here, specify the application name as app: my-app.
spec | | Define the Kafka service specification.
spec | replicas | Here, 1 indicates a single Kafka cluster (single Service Domain instance or VM) to keep data synchronized.
spec | selector | Use matchLabels and specify the app name as in labels above.
spec | template | Specify the application name here (my-app), the same as in the metadata specifications above.
template | spec | Here, define the specifications for the application using Kafka.
spec | containers | Define the container name, image, and ports, as in the sample YAML.
containers | env | Leave these values as shown.
Information about data pipeline function requirements.
See Functions and Data Pipelines.
You can specify a Kafka endpoint type in a data pipeline. A data pipeline consists of:
For a data pipeline with a Kafka topic endpoint:
In the cloud management console, you can view Kafka data service status when you use Kafka in an application or as a Kafka endpoint in a data pipeline as part of a project. This task assumes you are logged in to the cloud management console.
The Service Domain supports the Traefik open source router as the default Kubernetes Ingress controller. You can also choose the NGINX ingress controller instead.
An infrastructure admin can enable an Ingress controller for your project through Manage Services as described in Managing Project Services. You can only enable one Ingress controller per Service Domain.
When you include Ingress controller annotations as part of your application YAML file, Karbon Platform Services uses Traefik as the default on-demand controller.
If your deployment requires it, you can alternately use NGINX (ingress-nginx) as a Kubernetes Ingress controller instead of Traefik.
In your application YAML, specify two snippets:
To securely route application traffic with a Service Domain ingress controller, create YAML snippets to define and specify the ingress controller for each Service Domain.
You can only enable and use one Ingress controller per Service Domain.
Create an application for your project as described in Creating an Application to specify the application attributes and configuration in a YAML file. You can include these Service and Secret snippets, Service Domain ingress controller annotations, and certificate information in this app deployment YAML file.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: whoami
  annotations:
    sherlock.nutanix.com/http-ingress-path: /notls
    sherlock.nutanix.com/https-ingress-path: /tls
    sherlock.nutanix.com/https-ingress-host: DNS_name
    sherlock.nutanix.com/http-ingress-host: DNS_name
    sherlock.nutanix.com/https-ingress-secret: whoami
spec:
  ports:
  - protocol: TCP
    name: web
    port: 80
  selector:
    app: whoami
```
| Field Name | Value or Subfield | Description |
|---|---|---|
| `kind` | `Service` | Specify the resource type. Here, use `Service` to indicate that this snippet defines the ingress controller details. |
| `apiVersion` | `v1` | Here, the Kubernetes API version. |
| `metadata` | `name` | Provide an app name to which this controller applies. |
| | `annotations` | These annotations define the ingress controller encryption type and paths for Karbon Platform Services. |
| | `sherlock.nutanix.com/http-ingress-path: /notls` | `/notls` specifies no Transport Layer Security encryption. |
| | `sherlock.nutanix.com/https-ingress-path: /tls` | `/tls` specifies Transport Layer Security encryption. |
| | `sherlock.nutanix.com/http-ingress-host: DNS_name` | Ingress service host path, where the service is bound to port 80. `DNS_name` is a DNS name you can give to your application. |
| | `sherlock.nutanix.com/https-ingress-host: DNS_name` | Ingress service host path, where the service is bound to port 443. `DNS_name` is a DNS name you can give to your application. |
| | `sherlock.nutanix.com/https-ingress-secret: whoami` | Links the authentication `Secret` information defined in the Secret snippet to this controller. |
| `spec` | | Define the transfer protocol, port type, and port for the application. |
| | `selector` | A selector to specify the application. |
Use a Secret snippet to specify the certificates used to secure app traffic.
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: whoami
type: kubernetes.io/tls
data:
  ca.crt: cert_auth_cert
  tls.crt: tls_cert
  tls.key: tls_key
```
| Field Name | Value or Subfield | Description |
|---|---|---|
| `apiVersion` | `v1` | Here, the TLS API version. |
| `kind` | `Secret` | Specify the resource type. Here, use `Secret` to indicate that this snippet defines the authentication details. |
| `metadata` | `name` | Provide an app name to which this certification applies. |
| `type` | `kubernetes.io/tls` | Define the authentication type used to secure the app. |
| `data` | `ca.crt`, `tls.crt`, `tls.key` | Add the keys for each certificate type: certificate authority certificate (`ca.crt`), TLS certificate (`tls.crt`), and TLS key (`tls.key`). |
In the cloud management console, you can view Ingress controller status for any controller used as part of a project. This task assumes you are logged in to the cloud management console.
Istio provides secure connection, traffic management, and telemetry.
In the application YAML snippet or file, define the `VirtualService` and `DestinationRule` objects. These objects specify traffic routing rules for the `recommendation-service` app host. If the traffic rules match, traffic flows to the named destination (or a subset/version of it) as defined here.
In this example, traffic is routed to the `recommendation-service` app host if it is sent from the Firefox browser. The specific policy version (`subset`) for each host helps you identify and manage routed data.
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recomm-svc
spec:
  hosts:
  - recommendation-service
  http:
  - match:
    - headers:
        user-agent:
          regex: .*Firefox.*
    route:
    - destination:
        host: recommendation-service
        subset: v2
  - route:
    - destination:
        host: recommendation-service
        subset: v1
```
This `DestinationRule` YAML snippet defines a load-balancing traffic policy for the policy versions (`subsets`), where any healthy host can service the request.
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: recomm-svc
spec:
  host: recommendation-service
  trafficPolicy:
    loadBalancer:
      simple: RANDOM
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
```
In this YAML snippet, you can split traffic between the subsets by specifying a `weight` of 30 in one case and 70 in the other. You can also weight them evenly by giving each a `weight` value of 50.
```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: recomm-svc
spec:
  hosts:
  - recommendation-service
  http:
  - route:
    - destination:
        host: recommendation-service
        subset: v2
      weight: 30
    - destination:
        host: recommendation-service
        subset: v1
      weight: 70
```
In the cloud management console, you can view Istio service mesh status associated with applications in a project. This task assumes you are logged in to the cloud management console.
The Prometheus service included with Karbon Platform Services enables you to monitor endpoints you define in your project's Kubernetes apps. Karbon Platform Services allows one instance of Prometheus per project.
Prometheus collects metrics from your app endpoints. The Prometheus Deployments dashboard displays Service Domains where the service is deployed, service status, endpoints, and alerts. See Viewing Prometheus Service Status.
You can then decide how to view the collected metrics, through graphs or other Prometheus-supported means. See Create Prometheus Graphs with Grafana - Example.
| Setting | Default Value or Description |
|---|---|
| Frequency interval to collect and store metrics (also known as scrape and store) | Every 60 seconds |
| Collection endpoint | `/metrics` |
| Default collection app | `collect-metrics` |
| Data storage retention time | 10 days |
This sample app YAML specifies an app named `metricsmatter-sample-app` and creates one instance of this containerized app (`replicas: 1`) from the managed Amazon Elastic Container Registry.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metricsmatter-sample-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: metricsmatter-sample-app
  template:
    metadata:
      name: metricsmatter-sample-app
      labels:
        app: metricsmatter-sample-app
    spec:
      containers:
      - name: metricsmatter-sample-app
        imagePullPolicy: Always
        image: 1234567890.dkr.ecr.us-west-2.amazonaws.com/app-folder/metricmatter_sample_app:latest
```
Next, in the same application YAML file, create a Service snippet. Add the default `collect-metrics` app label to the `Service` object. When you add `app: collect-metrics`, Prometheus scrapes the default `/metrics` endpoint every 60 seconds, with metrics exposed on port 8010.
```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: metricsmatter-sample-service
  labels:
    app: collect-metrics
spec:
  selector:
    app: metricsmatter-sample-app
  ports:
  - name: web
    protocol: TCP
    port: 8010
```
Add a ServiceMonitor snippet to the app YAML above to customize the endpoint to scrape and change the interval to collect and store metrics. Make sure you include the Deployment and Service snippets.
Here, change the endpoint to `/othermetrics` and the collection interval to 15 seconds (`15s`).
Prometheus discovers all ServiceMonitors in a given namespace (that is, each project app) where it is installed.
```yaml
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: metricsmatter-sample-app
  labels:
    app: collect-metrics
spec:
  selector:
    matchLabels:
      app: collect-metrics
  endpoints:
  - path: /othermetrics
    interval: 15s
    port: 8010
```
You can also use endpoint environment variables in an application template for the service and Alert Manager. `{{.Services.Prometheus.Endpoint}}` defines the service endpoint. `{{.Services.AlertManager.Endpoint}}` defines a custom Alert Manager endpoint.
Configure Service Domain Environment Variables describes how to use these environment variables.
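As an illustration, these template variables can be referenced from a container spec in the app YAML. This is a minimal sketch, assuming a deployment similar to the earlier examples; the container name, image, and the environment variable names `PROMETHEUS_ENDPOINT` and `ALERTMANAGER_ENDPOINT` are illustrative, not platform requirements.

```yaml
# Sketch only: the env var names and image are hypothetical; the
# {{.Services...}} template variables are the ones described above.
containers:
- name: my-metrics-app
  image: some.container.registry.com/my-metrics-app:latest
  env:
  - name: PROMETHEUS_ENDPOINT
    value: "{{.Services.Prometheus.Endpoint}}"
  - name: ALERTMANAGER_ENDPOINT
    value: "{{.Services.AlertManager.Endpoint}}"
```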
The Prometheus Deployments dashboard displays Service Domains where the service is deployed, service status, Prometheus endpoints, and alerts.
This example shows how you can set up a Prometheus metrics dashboard with Grafana.
This topic provides examples to help you expose Prometheus endpoints to Grafana, an open-source analytics and monitoring visualization application. You can then view scraped Prometheus metrics graphically.
The first ConfigMap YAML snippet example uses the environment variable `{{.Services.Prometheus.Endpoint}}` to define the service endpoint. If this YAML snippet is part of an application template created by an infra admin, a project user can then specify these per-Service Domain variables in their application.
The second snippet provides configuration information for the Grafana server web page. The host name in this example is `woodkraft2.ntnxdomain.com`.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-datasources
data:
  prometheus.yaml: |-
    {
      "apiVersion": 1,
      "datasources": [
        {
          "access": "proxy",
          "editable": true,
          "name": "prometheus",
          "orgId": 1,
          "type": "prometheus",
          "url": "{{.Services.Prometheus.Endpoint}}",
          "version": 1
        }
      ]
    }
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-ini
data:
  grafana.ini: |
    [server]
    domain = woodkraft2.ntnxdomain.com
    root_url = %(protocol)s://%(domain)s:%(http_port)s/grafana
    serve_from_sub_path = true
---
```
This YAML snippet provides a standard deployment specification for Grafana.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      name: grafana
      labels:
        app: grafana
    spec:
      containers:
      - name: grafana
        image: grafana/grafana:latest
        ports:
        - name: grafana
          containerPort: 3000
        resources:
          limits:
            memory: "2Gi"
            cpu: "1000m"
          requests:
            memory: "1Gi"
            cpu: "500m"
        volumeMounts:
        - mountPath: /var/lib/grafana
          name: grafana-storage
        - mountPath: /etc/grafana/provisioning/datasources
          name: grafana-datasources
          readOnly: false
        - name: grafana-ini
          mountPath: "/etc/grafana/grafana.ini"
          subPath: grafana.ini
      volumes:
      - name: grafana-storage
        emptyDir: {}
      - name: grafana-datasources
        configMap:
          defaultMode: 420
          name: grafana-datasources
      - name: grafana-ini
        configMap:
          defaultMode: 420
          name: grafana-ini
---
```
Define the Grafana Service object to use port 3000 and an Ingress controller to manage access to the service (through woodkraft2.ntnxdomain.com).
```yaml
apiVersion: v1
kind: Service
metadata:
  name: grafana
spec:
  selector:
    app: grafana
  ports:
  - port: 3000
    targetPort: 3000
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: grafana
  labels:
    app: grafana
spec:
  rules:
  - host: woodkraft2.ntnxdomain.com
    http:
      paths:
      - path: /grafana
        backend:
          serviceName: grafana
          servicePort: 3000
```
You can create intelligent applications to run on the Service Domain infrastructure or the cloud where you have pushed collected data. You can implement application YAML files to use as a template, where you can customize the template by passing existing Categories associated with a Service Domain to it.
You need to create a project with at least one user to create an app.
You can undeploy and deploy any applications that are running on Service Domains or in the cloud. See Deploying and Undeploying a Kubernetes Application.
For Kubernetes apps running as privileged, you might have to specify the Kubernetes namespace where the application is deployed. You can do this by using the `{{ .Namespace }}` variable, which you can define in the app YAML template file.
In this example, the ClusterRoleBinding resource specifies the `{{ .Namespace }}` variable as the namespace where the subject ServiceAccount is deployed. As all app resources are deployed in the project namespace, specify the project name as well (here, `name: my-sa`).
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-sa
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: my-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: my-cluster-role
subjects:
- kind: ServiceAccount
  name: my-sa
  namespace: {{ .Namespace }}
```
Create a Kubernetes application that you can associate with a project.
Update an existing Kubernetes application.
Delete an existing Kubernetes application.
You can undeploy and deploy any applications that are running on Service Domains or in the cloud. You can choose the Service Domains where you want the app to deploy or undeploy. Select the table or tile view on this page by clicking one of the view icons.
Undeploying a Kubernetes app deletes all config objects directly created for the app, including PersistentVolumeClaim data. Your app can create a PersistentVolumeClaim indirectly through StatefulSets. The following points describe scenarios when data is deleted and when data is preserved after you undeploy an app.
The Data Pipelines page enables you to create and view data pipelines, and also see any alerts associated with existing pipelines.
A data pipeline is a path for data that includes:
It also enables you to process and transform captured data for further consumption or processing.
To create a data pipeline, you must have already created or defined at least one of the following:
Create a data pipeline, with a data source or real-time data source as input and infrastructure or external cloud as the output.
Update a data pipeline, including data source, function, and output destination. See also Naming Guidelines.
A function is code used to perform one or more tasks. Supported script languages include Python, Golang, and Node.js. A script can be as simple as text processing code, or it can be advanced code implementing artificial intelligence with popular machine learning frameworks such as TensorFlow.
An infrastructure administrator or project user can create a function, and later can edit or clone it. You cannot edit a function that is used by an existing data pipeline. In this case, you can clone it to make an editable copy.
Edit an existing function. To complete this task, log on to the cloud management console.
Other than the name and description, you cannot edit a function that is in use by an existing data pipeline. In this case, you can clone a function to duplicate it. See Cloning a Function.
Clone an existing function. To complete this task, log on to the cloud management console.
You can create machine learning (ML) models to enable AI inferencing for your projects. The ML Model feature provides a common interface for functions (that is, scripts) or applications to use the ML model Tensorflow runtime environment on the Service Domain.
The Karbon Platform Services Release Notes list currently supported ML model types.
An infrastructure admin can enable the AI Inferencing service for your project through Manage Services as described in Managing Project Services.
You can add multiple models and model versions to a single ML Model instance that you create. In this scenario, multiple client projects can access any model configured in the single ML model instance.
How to delete an ML model.
The ML Models page in the Karbon Platform Services management console shows version and activity status for all models.
A runtime environment is a command execution environment to run applications written in a particular language or associated with a Docker registry or file.
Karbon Platform Services includes standard runtime environments including but not limited to the following. These runtimes are read-only and cannot be edited, updated, or deleted by users. They are available to all projects, functions, and associated container registries.
How to create a user-added runtime environment for use with your project.
- `https://index.docker.io/v1/` or `registry-1.docker.io/distribution/registry:2.1`
- `https://aws_account_id.dkr.ecr.region.amazonaws.com`
How to edit a user-added runtime environment from the cloud management console. You cannot edit the included read-only runtime environments.
How to remove a user-added runtime environment from the cloud management console. To complete this task, log on to the cloud management console.
Logging provides a consolidated landing page enabling you to collect, forward, and manage logs from selected Service Domains.
From Logging or System Logs > Logging or the summary page for a specific project, you can:
You can also collect logs and stream them to a cloud profile by using the kps command line available from the Nutanix public Github channel https://github.com/nutanix/karbon-platform-services/tree/master/cli. Readme documentation available there describes how to install the command line and includes sample YAML files for use with applications, data pipelines, data sources, and functions.
Access the Audit Log dashboard to view the most recent operations performed by users.
Karbon Platform Services maintains an audit trail of any operations performed by users in the last 30 days and displays them in an Audit Trail dashboard in the cloud management console. To display the dashboard, log on to the console, open the navigation menu, and click Audit Trail .
Click Filters to display operations by Operation Type, User Name, Resource Name, Resource Type, Time Range, and other filter types so you can narrow your search. Filter Operation Type by CREATE, UPDATE, and DELETE actions.
Log Collector examines the selected Service Domains and collects logs and configuration information useful for troubleshooting issues and finding out details about any Service Domain.
Log Collector examines the selected project application and collects logs and configuration information useful for troubleshooting issues and finding out details about an app.
Create, edit, and delete log forwarding policies to help make collection more granular and then forward those Service Domain logs to the cloud.
monitoring.us-west-2.amazonaws.com
Create a log collector for log forwarding by using the kps command line.
Nutanix has released the kps command line on its public Github channel. https://github.com/nutanix/karbon-platform-services/tree/master/cli describes how to install the command line and includes sample YAML files for use with applications, data pipelines, data sources, and functions.
Each sample YAML file defines a log collector. Log collectors can be:
See the most up-to-date sample YAML files and descriptions at https://github.com/nutanix/karbon-platform-services/tree/master/cli.
Create a log collector defined in a YAML file:
```shell
user@host$ kps create -f infra-logcollector-cloudwatch.yaml
```
This sample infrastructure log collector collects logs for a specific tenant, then forwards them to a specific cloud profile (for example, AWS CloudWatch).
To enable AWS CloudWatch log streaming, you must specify `awsRegion`, `cloudwatchStream`, and `cloudwatchGroup`.
```yaml
kind: logcollector
name: infra-log-name
type: infrastructure
destination: cloudwatch
cloudProfile: cloud-profile-name
awsRegion: us-west-2
cloudwatchGroup: cloudwatch-group-name
cloudwatchStream: cloudwatch-stream-name
filterSourceCode: ""
```
| Field Name | Value or Subfield | Description |
|---|---|---|
| `kind` | `logcollector` | Specify the resource type |
| `name` | `infra-log-name` | Specify the unique log collector name |
| `type` | `infrastructure` | Log collector for infrastructure |
| `destination` | `cloudwatch` | Cloud destination type |
| `cloudProfile` | `cloud-profile-name` | Specify an existing Karbon Platform Services cloud profile |
| `awsRegion` | For example, `us-west-2` or `monitoring.us-west-2.amazonaws.com` | Valid AWS region name or CloudWatch endpoint fully qualified domain name |
| `cloudwatchGroup` | `cloudwatch-group-name` | Log group name |
| `cloudwatchStream` | `cloudwatch-stream-name` | Log stream name |
| `filterSourceCode` | | Specify the log conversion code |
This sample project log collector collects logs for a specific tenant, then forwards them to a specific cloud profile (for example, AWS CloudWatch).
```yaml
kind: logcollector
name: project-log-name
type: project
project: project-name
destination: cloud-destination type
cloudProfile: cloud-profile-name
awsRegion: us-west-2
cloudwatchGroup: cloudwatch-group-name
cloudwatchStream: cloudwatch-stream-name
filterSourceCode: ""
```
| Field Name | Value or Subfield | Description |
|---|---|---|
| `kind` | `logcollector` | Specify the resource type |
| `name` | `project-log-name` | Specify the unique log collector name |
| `type` | `project` | Log collector for a specific project |
| `project` | `project-name` | Specify the project name |
| `destination` | `cloud-destination type` | Cloud destination type such as CloudWatch |
| `cloudProfile` | `cloud-profile-name` | Specify an existing Karbon Platform Services cloud profile |
| `awsRegion` | For example, `us-west-2` or `monitoring.us-west-2.amazonaws.com` | Valid AWS region name or CloudWatch endpoint fully qualified domain name |
| `cloudwatchGroup` | `cloudwatch-group-name` | Log group name |
| `cloudwatchStream` | `cloudwatch-stream-name` | Log stream name |
| `filterSourceCode` | | Specify the log conversion code |
Real-Time Log Monitoring built into Karbon Platform Services provides real-time log monitoring and lets you view application and data pipeline log messages securely in real time.
Viewing the most recent log messages as they occur helps you see and troubleshoot application or data pipeline operations. Messages stream securely over an encrypted channel and are viewable only by authenticated clients (such as an existing user logged on to the Karbon Platform Services cloud platform).
The cloud management console shows the most recent log messages, up to 2 MB. To get the full logs, collect and then download the log bundles by Running Log Collector - Service Domains.
View the most recent real-time logs for applications and data pipelines.
The Karbon Platform Services real-time log monitoring console is a terminal-style display that streams log entries as they occur.
After you do the steps in Displaying Real-Time Logs, a terminal-style window is displayed in one or more tabs. Each tab is streaming log messages and shows the most recent 2 MB of log messages from a container associated with your data pipeline function or application.
Your application or data pipeline function generates the log messages. That is, the console shows log messages that you have written into your application or function.
If your Karbon Platform Services Service Domain is connected and your application or function is not logging anything, the console might show a No Logs message. In this case, the message means that the application or function is idle and not generating any log messages.
You might see one or more error messages in the following cases. As a result, Real-Time Log Monitoring cannot retrieve any logs.
API keys simplify authentication when you use the Karbon Platform Services API by enabling you to manage your keys from the Karbon Platform Services management console. This topic also describes API key guidelines.
As a user (infrastructure or project), you can manage up to two API keys through the Karbon Platform Services management console. After logging on to the management console, click your user name in the management console, then click Manage API Keys to create, disable, or delete these keys.
Read more about the Karbon Platform Services API at nutanix.dev. For Karbon Platform Services Developers describes related information and links to resources for Karbon Platform Services developers.
Example API request using an API key.
After you create an API key, use it with your Karbon Platform Services API HTTPS Authorization requests. In the request, specify an Authorization header including Bearer and the key you generated and copied from the Karbon Platform Services management console.
For example, here is a Node.js code snippet that requests the list of applications, passing the API key in the `Authorization` header:

```javascript
var https = require("https");

var options = {
  "method": "GET",
  "hostname": "karbon.nutanix.com",
  "port": null,
  "path": "/v1.0/applications",
  "headers": {
    "authorization": "Bearer API_key"
  }
};

// Send the request and print the response body
var req = https.request(options, function (res) {
  var chunks = [];
  res.on("data", function (chunk) { chunks.push(chunk); });
  res.on("end", function () { console.log(Buffer.concat(chunks).toString()); });
});
req.end();
```
Create one or more API keys through the Karbon Platform Services management console.
Karbon Platform Services provides limited secure shell (SSH) access to your cloud-connected service domain to manage Kubernetes pods.
`effectiveProfile` setting.
The Karbon Platform Services cloud management console provides limited secure shell (SSH) access to your cloud-connected Service Domain to manage Kubernetes pods. SSH Service Domain access enables you to run Kubernetes kubectl commands to help you with application development, debugging, and pod troubleshooting.
As Karbon Platform Services is secure by design, dynamically generated public/private key pairs with a default expiration of 30 minutes secure your SSH connection. When you start an SSH session from the cloud management console, you automatically log on as user `kubeuser`.
Infrastructure administrators have SSH access to Service Domains. Project users do not have access.
Access a Service Domain through SSH to manage Kubernetes pods with kubectl CLI commands. This feature is disabled by default. To enable this feature, contact Nutanix Support.
Use kubectl commands to manage Kubernetes pods on the Service Domain.
```shell
kubeuser@host$ kubectl get pods
kubeuser@host$ kubectl get services
kubeuser@host$ kubectl logs pod_name
kubeuser@host$ kubectl exec pod_name command_name
kubeuser@host$ kubectl exec -it pod_name --container container_name -- /bin/sh
```
The Alerts page and the Alerts Dashboard panel show any alerts triggered by Karbon Platform Services depending on your role.
To see alert details:
Click Filters to sort the alerts by:
An Alert link is available on each Apps & Data and Infrastructure page.
Information and links to resources for Karbon Platform Services developers.
This section contains information about Karbon Platform Services development.
The Karbon Platform Services public Github repository https://github.com/nutanix/karbon-platform-services includes sample application YAML files, instructions describing external client access to services, Karbon Platform Services kps CLI samples, and so on.
Nutanix has released the kps command line on its public Github repository. https://github.com/nutanix/karbon-platform-services/tree/master/cli describes how to install the command line and includes sample YAML files for use with applications, data pipelines, data sources, and functions.
Karbon Platform Services supports the Traefik open source router as the default Kubernetes Ingress controller and NGINX (ingress-nginx) as a Kubernetes Ingress controller. For full details, see https://github.com/nutanix/karbon-platform-services/tree/master/services/ingress.
Kafka is available as a data service through your Service Domain. Clients can manage, publish, and subscribe to topics using the native Kafka protocol. Data pipelines can use Kafka as a destination. Applications can also use a Kafka client of their choice to access the Kafka data service.
For full details, see https://github.com/nutanix/karbon-platform-services/tree/master/services/kafka. See also Kafka as a Service.
Enable a container application to run with elevated privileges.
For information about installing the kps command line, see For Karbon Platform Services Developers.
Karbon Platform Services enables you to develop an application which requires elevated privileges to run successfully. By using the kps command line, you can set your Service Domain to enable an application running in a container to run in privileged mode.
Configure your Service Domain to enable a container application to run with elevated privileges.
```shell
user@host$ kps config create-context context_name --email user_email_address --password password
user@host$ kps config get-contexts
user@host$ kps get svcdomain -o yaml
user@host$ kps update svcdomain svc_domain_name --set-privileged
Successfully updated Service Domain: svc_domain_name
```

This command sets `privileged` to `true` for the Service Domain.
```shell
user@host$ kps get svcdomain svc_domain_name -o yaml
kind: edge
name: svc_domain_name
connected: true
.
.
.
profile:
  privileged: true
  enableSSH: true
effectiveProfile:
  privileged: true
  enableSSH: true
```
In the output, `effectiveProfile` with `privileged` set to `true` indicates that Nutanix Support has enabled this feature. If the setting is `false`, contact Nutanix Support to enable this feature. In this example, Nutanix has also enabled SSH access to this Service Domain (see Secure Shell (SSH) Access to Service Domains in the Karbon Platform Services Administration Guide).
After elevating privilege as described in Setting Privileged Mode, elevate the application privilege. This sample enables USB device access for an application running in a container on an elevated Service Domain.
Add a tag similar to the following in the Deployment section in your application YAML file:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: usb
  annotations:
    sherlock.nutanix.com/privileged: "true"
```
apiVersion: v1
kind: ConfigMap
metadata:
  name: usb-scripts
data:
  entrypoint.sh: |-
    apk add python3
    apk add libusb
    pip3 install pyusb
    echo Read from USB keyboard
    python3 read-usb-keyboard.py
  read-usb-keyboard.py: |-
    import usb.core
    import usb.util
    import time

    USB_IF = 0           # Interface
    USB_TIMEOUT = 10000  # Timeout in ms
    USB_VENDOR = 0x627
    USB_PRODUCT = 0x1

    # Find keyboard
    dev = usb.core.find(idVendor=USB_VENDOR, idProduct=USB_PRODUCT)
    endpoint = dev[0][(0,0)][0]

    try:
        dev.detach_kernel_driver(USB_IF)
    except Exception as err:
        print(err)

    usb.util.claim_interface(dev, USB_IF)

    while True:
        try:
            control = dev.read(endpoint.bEndpointAddress, endpoint.wMaxPacketSize, USB_TIMEOUT)
            print(control)
        except Exception as err:
            print(err)
        time.sleep(0.01)
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: usb
  annotations:
    sherlock.nutanix.com/privileged: "true"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: usb
  template:
    metadata:
      labels:
        app: usb
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: alpine
        image: alpine
        volumeMounts:
        - name: scripts
          mountPath: /scripts
        command:
        - sh
        - -c
        - cd /scripts && ./entrypoint.sh
      volumes:
      - name: scripts
        configMap:
          name: usb-scripts
          defaultMode: 0766
Define environment variables for an individual Service Domain. After defining them, any Kubernetes app that specifies that Service Domain can access them as part of a container spec in the app YAML.
As an infrastructure administrator, you can set environment variables and associated values for each Service Domain, which are available for use in Kubernetes apps. For example:
As a project user, you can then specify these per-Service Domain variables set by the infra admin in your app. If you do not include the variable name in your app YAML file but you pass it as a variable to run in your app, Karbon Platform Services can inject this variable value.
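For illustration, a container process can read an injected per-Service Domain variable with a plain environment lookup; SD_PASSWORD here is only an example name, not a variable the platform defines:

```python
import os

def get_sd_setting(name, default=None):
    """Read a per-Service Domain variable injected into the container
    environment; fall back to a default when the variable is unset."""
    return os.environ.get(name, default)

# Example: read the (hypothetical) SD_PASSWORD variable set by the
# infrastructure administrator; fall back to a placeholder locally.
password = get_sd_setting("SD_PASSWORD", "not-set")
```

Reading through a small helper with a default keeps the app runnable outside the Service Domain, where the variable is not injected.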
How to set environment variables for a Service Domain.
user@host$ kps config create-context context_name --email user_email_address --password password
user@host$ kps config get-contexts
user@host$ kps get svcdomain -o yaml
For a Service Domain named my-svc-domain, for example, set the Service Domain environment variable. In this example, set a secret variable named SD_PASSWORD with a value of passwd1234.
user@host$ kps update svcdomain my-svc-domain --set-env '{"SD_PASSWORD":"passwd1234"}'
user@host$ kps get svcdomain my-svc-domain -o yaml
kind: edge
name: my-svc-domain
connected: true
...
env: '{"SD_PASSWORD": "passwd1234"}'
To update a variable, run the kps update svcdomain my-svc-domain --set-env '{"variable_name": "variable_value"}' command again. To remove variables, run the command with the --unset-env option:
user@host$ kps update svcdomain svc_domain_name --unset-env
user@host$ kps update svcdomain svc_domain_name --unset-env '{"variable_name":"variable_value"}'
Example: how to use existing environment variables for a Service Domain in application YAML.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment
  labels:
    app: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: some.container.registry.com/myapp:1.7.9
        ports:
        - containerPort: 80
        env:
        - name: KAFKA_ENDPOINT
          value: some.kafka.endpoint
        - name: KAFKA_KEY
          value: placeholder
        command:
        - sh
        - -c
        - "exec node index.js $(KAFKA_KEY)"
Logically grouped Service Domain, data sources, and other items. Applying a category to an entity applies any values and attributes associated with the category to the entity.
Built-in Karbon Platform Services platform feature to publish data to a public cloud like Amazon Web Services or Google Cloud Platform. Requires a customer-owned secured public cloud account and configuration in the Karbon Platform Services management console.
Cloud provider service account (Amazon Web Services, Google Cloud Platform, and so on) where acquired data is transmitted for further processing.
Credentials and location of the Docker container registry hosted on a cloud provider service account. Can also be an existing cloud profile.
Path for data that includes input, processing, and output blocks. Enables you to process and transform captured data for further consumption or processing.
Data service such as Kafka as a Service or Real-Time Stream Processing as a Service.
A collection of sensors, gateways, or other input devices to associate with a node or Service Domain (previously known as an edge). Enables you to manage and monitor sensor integration and connectivity.
A deployment minimally consists of a node (also known as an edge device) and a data source. A node can be at any location (hospital, parking lot, retail store, oil rig, factory floor, and so on) where sensors or other input devices are installed and collecting data. Typical sensors measure conditions (temperature, pressure, audio, and so on) or stream data (for example, an IP-connected video camera).
Code used to perform one or more tasks. A script can be as simple as text processing code or it could be advanced code implementing artificial intelligence, using popular machine learning frameworks like TensorFlow.
User who creates and administers most aspects of Karbon Platform Services. The infrastructure administrator provisions physical resources, Service Domains, data sources and public cloud profiles. This administrator allocates these resources by creating projects and assigning project users to them.
A collection of infrastructure (Service Domain, data source, project users) plus code and data (Kubernetes apps, data pipelines, functions, run-time environments), created by the infrastructure administrator for use by project users.
User who views and uses projects created by the infrastructure administrator. This user can view everything that an infrastructure administrator assigns to a project. Project users can utilize physical resources, as well as deploy and monitor data pipelines and applications. This user has project-specific CRUD permissions: the project user can create, read, update, and delete assigned applications, scripts, data pipelines, and other project users.
A run-time environment is a command execution environment to run applications written in a particular language or associated with a Docker registry or file.
Intelligent Platform as a Service (PaaS) providing the Karbon Platform Services Service Domain infrastructure (consisting of a full software stack combined with a hardware device). It enables customers to deploy intelligent applications (powered by AI/artificial intelligence) to process and transform data ingested by sensors. This data can be published selectively to public clouds.
Browser-based console where you can manage the Karbon Platform Services platform and related infrastructure, depending on your role (infrastructure administrator or project user).
Software as a Service (SaaS)/Platform as a Service (PaaS) based management platform and cloud IoT services. Includes the Karbon Platform Services management console.
Copyright 2022 Nutanix, Inc. Nutanix, the Nutanix logo and all Nutanix product and service names mentioned herein are registered trademarks or trademarks of Nutanix, Inc. in the United States and other countries. All other brand names used herein are for identification purposes only and may be the trademarks of their respective holder(s) and no claim of rights is made therein.
A project user can view and use projects created by the infrastructure administrator. This user can view everything that an infrastructure administrator assigns to a project. Project users can utilize physical resources, as well as deploy and monitor data pipelines and Kubernetes applications.
Karbon Platform Services allows and defines two user types: an infrastructure administrator and a project user. An infrastructure administrator can create both user types.
The project user has project-specific create/read/update/delete (CRUD) permissions: the project user can create, read, update, and delete the following and associate it with an existing project:
When an infrastructure administrator creates a project and then adds themselves as a user for that project, they are assigned the roles of project user and infrastructure administrator for that project.
When an infrastructure administrator adds another infrastructure administrator as a user to a project, that administrator is assigned the project user role for that project.
The web browser-based Karbon Platform Services cloud management console enables you to manage infrastructure and related projects, with specific management capability dependent on your role (infrastructure administrator or project user).
You can log on with your My Nutanix or local user credentials.
The default view for a project user is the Dashboard .
After you log on to the cloud management console, you are presented with the main Dashboard page as a role-specific landing page. You can also show this information at any time by clicking Dashboard under the main menu.
Each Karbon Platform Services element (Service Domain, function, data pipeline, and so on) includes a dashboard page that includes information about that element. It might also include information about elements associated with that element.
The element dashboard view also enables you to manage that element. For example, click Projects and select a project. Click Kubernetes Apps and click an application in the list. The application dashboard is displayed along with an Edit button.
The Karbon Platform Services management console includes a Quick Start menu next to your user name. Depending on your role (infrastructure administrator or project user), you can quickly create infrastructure or apps and data. Scroll down to see items you can add for use with projects.
The Quick Start Menu lists the common onboarding tasks for the project user. It includes links to project resource pages. You can also go directly to any project resource from the Apps & Data menu item.
As the project user, you can update a project by creating the following items.
If any Getting Started item shows Pending , the infrastructure administrator has not added you to that entity (like a project or application) or you need to create an entity (like an application).
To get started after logging on to the cloud management console, see Projects.
A project is an infrastructure, apps, and data collection created by the infrastructure administrator for use by project users.
When an infrastructure administrator creates a project and then adds themselves as a user for that project, they are assigned the roles of project user and infrastructure administrator for that project. When an infrastructure administrator adds another infrastructure administrator as a user to a project, that administrator is assigned the project user role for that project.
A project can consist of:
When you add a Service Domain to a project, all resources such as data sources associated with the Service Domain are available to the users added to the project.
The Projects page lets an infrastructure administrator create a new project and lists projects that administrators can update and view.
For project users, the Projects page lists projects created and assigned by the infrastructure administrator, which project users can view and to which they can add apps and data. Project users can view and update any projects assigned by the administrator with applications, data pipelines, and so on. Project users cannot remove a project.
When you click a project name, the project Summary dashboard is displayed and shows resources in the project.
You can click any of the project resource menu links to edit or update existing resources, or create and add resources to a project. For example, you can edit an existing data pipeline in the project or create a new one and assign it to the project. Click Kafka to show details for the Kafka data service associated with the project (see Viewing Kafka Status).
The Karbon Platform Services cloud infrastructure provides services that are enabled by default. It also provides access to services that you can enable for your project.
The platform includes these ready-to-use services, which provide an advantage over self-managed services:
Kafka is available as a data service through your Service Domain.
The Kafka data service is available for use within a project's applications and data pipelines, running on a Service Domain hosted in your environment. The Kafka service offering from Karbon Platform Services provides the following advantages over a self-managed Kafka service:
Information about application requirements and sample YAML application file
Create an application for your project as described in Creating an Application and specify the application attributes and configuration in a YAML file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment
  labels:
    app: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: some.container.registry.com/myapp:1.7.9
        ports:
        - containerPort: 80
        env:
        - name: KAFKA_ENDPOINT
          value: {{.Services.Kafka.Endpoint}}
Field Name | Value or Field Name | Description |
---|---|---|
kind | Deployment | Specify the resource type. Here, use Deployment. |
metadata | name | Provide a name for your deployment. |
 | labels | Provide at least one label. Here, specify the application name as app: my-app. |
spec | | Define the Kafka service specification. |
 | replicas | Here, 1 to indicate a single Kafka cluster (single Service Domain instance or VM) to keep data synchronized. |
 | selector | Use matchLabels and specify the app name as in labels above. |
 | template | Specify the application name here (my-app), same as the metadata specifications above. |
 | spec | Here, define the specifications for the application using Kafka. |
 | containers | |
 | env | Leave these values as shown. |
Information about data pipeline function requirements.
See Functions and Data Pipelines.
You can specify a Kafka endpoint type in a data pipeline. A data pipeline consists of:
For a data pipeline with a Kafka topic endpoint:
In the cloud management console, you can view Kafka data service status when you use Kafka in an application or as a Kafka endpoint in a data pipeline as part of a project. This task assumes you are logged in to the cloud management console.
The Service Domain supports the Traefik open source router as the default Kubernetes Ingress controller. You can also choose the NGINX ingress controller instead.
An infrastructure admin can enable an Ingress controller for your project through Manage Services as described in Managing Project Services. You can only enable one Ingress controller per Service Domain.
When you include Ingress controller annotations as part of your application YAML file, Karbon Platform Services uses Traefik as the default on-demand controller.
If your deployment requires it, you can alternately use NGINX (ingress-nginx) as a Kubernetes Ingress controller instead of Traefik.
In your application YAML, specify two snippets:
To securely route application traffic with a Service Domain ingress controller, create YAML snippets to define and specify the ingress controller for each Service Domain.
You can only enable and use one Ingress controller per Service Domain.
Create an application for your project as described in Creating an Application and specify the application attributes and configuration in a YAML file. You can include these Service and Secret snippets, Service Domain ingress controller annotations, and certificate information in this app deployment YAML file.
apiVersion: v1
kind: Service
metadata:
  name: whoami
  annotations:
    sherlock.nutanix.com/http-ingress-path: /notls
    sherlock.nutanix.com/https-ingress-path: /tls
    sherlock.nutanix.com/https-ingress-host: DNS_name
    sherlock.nutanix.com/http-ingress-host: DNS_name
    sherlock.nutanix.com/https-ingress-secret: whoami
spec:
  ports:
  - protocol: TCP
    name: web
    port: 80
  selector:
    app: whoami
Field Name | Value or Field Name | Description |
---|---|---|
kind | Service | Specify the resource type. Here, use Service to indicate that this snippet defines the ingress controller details. |
apiVersion | v1 | Here, the Kubernetes API version. |
metadata | name | Provide an app name to which this controller applies. |
 | annotations | These annotations define the ingress controller encryption type and paths for Karbon Platform Services. |
 | sherlock.nutanix.com/http-ingress-path: /notls | /notls specifies no Transport Layer Security encryption. |
 | sherlock.nutanix.com/https-ingress-path: /tls | |
 | sherlock.nutanix.com/http-ingress-host: DNS_name | Ingress service host path, where the service is bound to port 80. DNS_name is a DNS name you can give to your application. |
 | sherlock.nutanix.com/https-ingress-host: DNS_name | Ingress service host path, where the service is bound to port 443. DNS_name is a DNS name you can give to your application. |
 | sherlock.nutanix.com/https-ingress-secret: whoami | Links the authentication Secret information defined above to this controller. |
spec | | Define the transfer protocol, port type, and port for the application, and a selector to specify the application. |
Use a Secret snippet to specify the certificates used to secure app traffic.
apiVersion: v1
kind: Secret
metadata:
  name: whoami
type: kubernetes.io/tls
data:
  ca.crt: cert_auth_cert
  tls.crt: tls_cert
  tls.key: tls_key
Field Name | Value or Field Name | Description |
---|---|---|
apiVersion | v1 | Here, the TLS API version. |
kind | Secret | Specify the resource type. Here, use Secret to indicate that this snippet defines the authentication details. |
metadata | name | Provide an app name to which this certification applies. |
type | kubernetes.io/tls | Define the authentication type used to secure the app. |
data | ca.crt, tls.crt, tls.key | Add the keys for each certification type: certificate authority certificate (ca.crt), TLS certificate (tls.crt), and TLS key (tls.key). |
In the cloud management console, you can view Ingress controller status for any controller used as part of a project. This task assumes you are logged in to the cloud management console.
Istio provides secure connection, traffic management, and telemetry.
In the application YAML snippet or file, define the VirtualService and DestinationRule objects. These objects specify traffic routing rules for the recommendation-service app host. If the traffic rules match, traffic flows to the named destination (or a subset/version of it) as defined here. In this example, traffic is routed to the recommendation-service app host if it is sent from the Firefox browser. The specific policy version (subset) for each host helps you identify and manage routed data.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recomm-svc
spec:
  hosts:
  - recommendation-service
  http:
  - match:
    - headers:
        user-agent:
          regex: .*Firefox.*
    route:
    - destination:
        host: recommendation-service
        subset: v2
  - route:
    - destination:
        host: recommendation-service
        subset: v1
This DestinationRule YAML snippet defines a load-balancing traffic policy for the policy versions (subsets), where any healthy host can service the request.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: recomm-svc
spec:
  host: recommendation-service
  trafficPolicy:
    loadBalancer:
      simple: RANDOM
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
In this YAML snippet, you can split traffic between the subsets by specifying a weight of 30 in one case and 70 in the other. You can also weight them evenly by giving each a weight value of 50.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: recomm-svc
spec:
  hosts:
  - recommendation-service
  http:
  - route:
    - destination:
        host: recommendation-service
        subset: v2
      weight: 30
    - destination:
        host: recommendation-service
        subset: v1
      weight: 70
In the cloud management console, you can view Istio service mesh status associated with applications in a project. This task assumes you are logged in to the cloud management console.
The Prometheus service included with Karbon Platform Services enables you to monitor endpoints you define in your project's Kubernetes apps. Karbon Platform Services allows one instance of Prometheus per project.
Prometheus collects metrics from your app endpoints. The Prometheus Deployments dashboard displays Service Domains where the service is deployed, service status, endpoints, and alerts. See Viewing Prometheus Service Status.
You can then decide how to view the collected metrics, through graphs or other Prometheus-supported means. See Create Prometheus Graphs with Grafana - Example.
Setting | Default Value or Description |
---|---|
Frequency interval to collect and store metrics (also known as scrape and store) | Every 60 seconds |
Collection endpoint | /metrics |
Default collection app | collect-metrics |
Data storage retention time | 10 days |
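Prometheus scrapes a plain-text exposition format from each /metrics endpoint. As a minimal, library-free sketch of what such an endpoint returns (not the official Prometheus client library; the metric name here is illustrative):

```python
def render_metrics(metrics):
    """Render a dict of {metric_name: (help_text, type, value)} in the
    Prometheus plain-text exposition format served by a /metrics endpoint."""
    lines = []
    for name, (help_text, mtype, value) in metrics.items():
        lines.append(f"# HELP {name} {help_text}")
        lines.append(f"# TYPE {name} {mtype}")
        lines.append(f"{name} {value}")
    return "\n".join(lines) + "\n"

# Hypothetical counter an app might expose for Prometheus to scrape.
payload = render_metrics({
    "app_requests_total": ("Total requests served.", "counter", 42),
})
print(payload)
```

In a real app you would serve this payload over HTTP at /metrics; the prometheus_client library handles the format and serving for you.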
The Prometheus Deployments dashboard displays Service Domains where the service is deployed, service status, Prometheus endpoints, and alerts.
This example shows how you can set up a Prometheus metrics dashboard with Grafana.
This topic provides examples to help you expose Prometheus endpoints to Grafana, an open-source analytics and monitoring visualization application. You can then view scraped Prometheus metrics graphically.
The first ConfigMap YAML snippet example uses the environment variable {{.Services.Prometheus.Endpoint}} to define the service endpoint. If this YAML snippet is part of an application template created by an infra admin, a project user can then specify these per-Service Domain variables in their application.
The second snippet provides configuration information for the Grafana server web page. The host name in this example is woodkraft2.ntnxdomain.com.
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-datasources
data:
  prometheus.yaml: |-
    {
      "apiVersion": 1,
      "datasources": [
        {
          "access": "proxy",
          "editable": true,
          "name": "prometheus",
          "orgId": 1,
          "type": "prometheus",
          "url": "{{.Services.Prometheus.Endpoint}}",
          "version": 1
        }
      ]
    }
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-ini
data:
  grafana.ini: |
    [server]
    domain = woodkraft2.ntnxdomain.com
    root_url = %(protocol)s://%(domain)s:%(http_port)s/grafana
    serve_from_sub_path = true
---
This YAML snippet provides a standard deployment specification for Grafana.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      name: grafana
      labels:
        app: grafana
    spec:
      containers:
      - name: grafana
        image: grafana/grafana:latest
        ports:
        - name: grafana
          containerPort: 3000
        resources:
          limits:
            memory: "2Gi"
            cpu: "1000m"
          requests:
            memory: "1Gi"
            cpu: "500m"
        volumeMounts:
        - mountPath: /var/lib/grafana
          name: grafana-storage
        - mountPath: /etc/grafana/provisioning/datasources
          name: grafana-datasources
          readOnly: false
        - name: grafana-ini
          mountPath: "/etc/grafana/grafana.ini"
          subPath: grafana.ini
      volumes:
      - name: grafana-storage
        emptyDir: {}
      - name: grafana-datasources
        configMap:
          defaultMode: 420
          name: grafana-datasources
      - name: grafana-ini
        configMap:
          defaultMode: 420
          name: grafana-ini
---
Define the Grafana Service object to use port 3000 and an Ingress controller to manage access to the service (through woodkraft2.ntnxdomain.com).
apiVersion: v1
kind: Service
metadata:
  name: grafana
spec:
  selector:
    app: grafana
  ports:
  - port: 3000
    targetPort: 3000
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: grafana
  labels:
    app: grafana
spec:
  rules:
  - host: woodkraft2.ntnxdomain.com
    http:
      paths:
      - path: /grafana
        backend:
          serviceName: grafana
          servicePort: 3000
You can create intelligent applications to run on the Service Domain infrastructure or in the cloud where you have pushed collected data. You can implement application YAML files as templates and customize a template by passing existing Categories associated with a Service Domain to it.
You need to create a project with at least one user to create an app.
You can undeploy and deploy any applications that are running on Service Domains or in the cloud. See Deploying and Undeploying a Kubernetes Application.
For Kubernetes apps running as privileged, you might have to specify the Kubernetes namespace where the application is deployed. You can do this by using the {{ .Namespace }} variable you can define in the app YAML template file.
In this example, the resource kind of ClusterRoleBinding specifies the {{ .Namespace }} variable as the namespace where the subject ServiceAccount is deployed. As all app resources are deployed in the project namespace, specify the project name as well (here, name: my-sa).
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-sa
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: my-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: my-cluster-role
subjects:
- kind: ServiceAccount
  name: my-sa
  namespace: {{ .Namespace }}
Create a Kubernetes application that you can associate with a project.
Update an existing Kubernetes application.
Delete an existing Kubernetes application.
You can undeploy and deploy any applications that are running on Service Domains or in the cloud. You can choose the Service Domains where you want the app to deploy or undeploy. Select the table or tile view on this page by clicking one of the view icons.
Undeploying a Kubernetes app deletes all config objects directly created for the app, including PersistentVolumeClaim data. Your app can create a PersistentVolumeClaim indirectly through StatefulSets. The following points describe scenarios when data is deleted and when data is preserved after you undeploy an app.
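For reference, the indirect case looks like the following StatefulSet sketch, whose volumeClaimTemplates cause Kubernetes to create a PersistentVolumeClaim on the app's behalf (names, image, and sizes here are illustrative, not taken from this guide):

```yaml
# Sketch: PVCs created indirectly through a StatefulSet's
# volumeClaimTemplates; values are illustrative only.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-stateful-app
spec:
  serviceName: my-stateful-app
  replicas: 1
  selector:
    matchLabels:
      app: my-stateful-app
  template:
    metadata:
      labels:
        app: my-stateful-app
    spec:
      containers:
      - name: app
        image: alpine
        volumeMounts:
        - name: data
          mountPath: /data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```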
The Data Pipelines page enables you to create and view data pipelines, and also see any alerts associated with existing pipelines.
A data pipeline is a path for data that includes:
It also enables you to process and transform captured data for further consumption or processing.
To create a data pipeline, you must have already created or defined at least one of the following:
After you create one or more data pipelines, the Data Pipelines > Visualization page shows data pipelines and the relationship among data pipeline components.
You can view data pipelines associated with a Service Domain by clicking the filter icon under each title (Data Sources, Data Pipelines to Service Domain, Data Pipelines on Cloud) and selecting one or more Service Domains in the drop-down list.
Create a data pipeline, with a data source or real-time data source as input and infrastructure or external cloud as the output.
Update a data pipeline, including data source, function, and output destination. See also Naming Guidelines.
A function is code used to perform one or more tasks. Script languages include Python, Golang, and Node.js. A script can be as simple as text processing code or it could be advanced code implementing artificial intelligence, using popular machine learning frameworks like TensorFlow.
An infrastructure administrator or project user can create a function, and later can edit or clone it. You cannot edit a function that is used by an existing data pipeline. In this case, you can clone it to make an editable copy.
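As a sketch of a simple pipeline function, the following assumes a main(ctx, msg) entry point where ctx.send() forwards data downstream, a convention used in KPS Python function examples; verify the exact signature against the runtime documentation before relying on it:

```python
import json

def main(ctx, msg):
    """Hypothetical pipeline function: drop readings below a threshold and
    forward the rest. 'ctx.send' is assumed to pass bytes to the next
    stage of the data pipeline."""
    reading = json.loads(msg)
    if reading.get("temperature", 0) >= 25:
        ctx.send(json.dumps(reading).encode("utf-8"))

# Local harness for illustration; in a pipeline the runtime supplies ctx.
class FakeCtx:
    def __init__(self):
        self.sent = []
    def send(self, payload):
        self.sent.append(payload)

ctx = FakeCtx()
main(ctx, b'{"temperature": 30}')
main(ctx, b'{"temperature": 10}')
```

The FakeCtx stub lets you exercise the transform logic outside the Service Domain before attaching the function to a pipeline.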
Edit an existing function. To complete this task, log on to the cloud management console.
Other than the name and description, you cannot edit a function that is in use by an existing data pipeline. In this case, you can clone a function to duplicate it. See Cloning a Function.
Clone an existing function. To complete this task, log on to the cloud management console.
You can create machine learning (ML) models to enable AI inferencing for your projects. The ML Model feature provides a common interface for functions (that is, scripts) or applications to use the ML model TensorFlow runtime environment on the Service Domain.
The Karbon Platform Services Release Notes list currently supported ML model types.
An infrastructure admin can enable the AI Inferencing service for your project through Manage Services as described in Managing Project Services.
You can add multiple models and model versions to a single ML Model instance that you create. In this scenario, multiple client projects can access any model configured in the single ML model instance.
How to delete an ML model.
The ML Models page in the Karbon Platform Services management console shows version and activity status for all models.
A runtime environment is a command execution environment to run applications written in a particular language or associated with a Docker registry or file.
Karbon Platform Services includes standard runtime environments including but not limited to the following. These runtimes are read-only and cannot be edited, updated, or deleted by users. They are available to all projects, functions, and associated container registries.
How to create a user-added runtime environment for use with your project.
Example Docker registry endpoints: https://index.docker.io/v1/ or registry-1.docker.io/distribution/registry:2.1
Example Amazon ECR endpoint (aws_account_id and region are placeholders): https://aws_account_id.dkr.ecr.region.amazonaws.com
How to edit a user-added runtime environment from the cloud management console. You cannot edit the included read-only runtime environments.
How to remove a user-added runtime environment from the cloud management console. To complete this task, log on to the cloud management console.
Logging provides a consolidated landing page enabling you to collect, forward, and manage logs from selected Service Domains.
From Logging or System Logs > Logging or the summary page for a specific project, you can:
You can also collect logs and stream them to a cloud profile by using the kps command line available from the Nutanix public Github channel https://github.com/nutanix/karbon-platform-services/tree/master/cli. Readme documentation available there describes how to install the command line and includes sample YAML files for use with applications, data pipelines, data sources, and functions.
Access the Audit Trail dashboard to view the most recent operations performed by users.
Karbon Platform Services maintains an audit trail of any operations performed by users in the last 30 days and displays them in an Audit Trail dashboard in the cloud management console. To display the dashboard, log on to the console, open the navigation menu, and click Audit Trail .
Click Filters to display operations by Operation Type, User Name, Resource Name, Resource Type, Time Range, and other filter types so you can narrow your search. Filter Operation Type by CREATE, UPDATE, and DELETE actions.
Log Collector examines the selected project application and collects logs and configuration information useful for troubleshooting issues and finding out details about an app.
Create, edit, and delete log forwarding policies to help make collection more granular and then forward those Service Domain logs to the cloud.
Example CloudWatch endpoint: monitoring.us-west-2.amazonaws.com
Create a log collector for log forwarding by using the kps command line.
Nutanix has released the kps command line on its public Github channel. https://github.com/nutanix/karbon-platform-services/tree/master/cli describes how to install the command line and includes sample YAML files for use with applications, data pipelines, data sources, and functions.
Each sample YAML file defines a log collector. Log collectors can be:
See the most up-to-date sample YAML files and descriptions at https://github.com/nutanix/karbon-platform-services/tree/master/cli.
Create a log collector defined in a YAML file:
user@host$ kps create -f infra-logcollector-cloudwatch.yaml
This sample infrastructure log collector collects logs for a specific tenant, then forwards them to a specific cloud profile (for example: AWS CloudWatch ).
To enable AWS CloudWatch log streaming, you must specify awsRegion, cloudwatchStream, and cloudwatchGroup.
kind: logcollector
name: infra-log-name
type: infrastructure
destination: cloudwatch
cloudProfile: cloud-profile-name
awsRegion: us-west-2
cloudwatchGroup: cloudwatch-group-name
cloudwatchStream: cloudwatch-stream-name
filterSourceCode: ""
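The requirement above (awsRegion, cloudwatchGroup, and cloudwatchStream must all be set for CloudWatch streaming) can be checked before running kps create. A small sketch, assuming the log collector YAML has already been parsed into a dict; the helper name is illustrative:

```python
REQUIRED_CLOUDWATCH_FIELDS = ("awsRegion", "cloudwatchGroup", "cloudwatchStream")

def missing_cloudwatch_fields(spec: dict) -> list:
    """Return the CloudWatch fields that are absent or empty when the
    destination is cloudwatch; an empty list means the spec is valid."""
    if spec.get("destination") != "cloudwatch":
        return []
    return [f for f in REQUIRED_CLOUDWATCH_FIELDS if not spec.get(f)]
```

Running such a check locally avoids a round trip to the Service Domain for an obviously incomplete spec.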
Field Name | Example Value | Description
---|---|---
kind | logcollector | The resource type
name | infra-log-name | Unique log collector name
type | infrastructure | Log collector for infrastructure
destination | cloudwatch | Cloud destination type
cloudProfile | cloud-profile-name | An existing Karbon Platform Services cloud profile
awsRegion | us-west-2 or monitoring.us-west-2.amazonaws.com | Valid AWS region name or CloudWatch endpoint fully qualified domain name
cloudwatchGroup | cloudwatch-group-name | Log group name
cloudwatchStream | cloudwatch-stream-name | Log stream name
filterSourceCode | "" | Log conversion code
This sample project log collector collects logs for a specific tenant, then forwards them to a specific cloud profile (for example: AWS CloudWatch ).
kind: logcollector
name: project-log-name
type: project
project: project-name
destination: cloud-destination-type
cloudProfile: cloud-profile-name
awsRegion: us-west-2
cloudwatchGroup: cloudwatch-group-name
cloudwatchStream: cloudwatch-stream-name
filterSourceCode: ""
Field Name | Example Value | Description
---|---|---
kind | logcollector | The resource type
name | project-log-name | Unique log collector name
type | project | Log collector for a specific project
project | project-name | The project name
destination | cloud-destination-type | Cloud destination type, such as cloudwatch
cloudProfile | cloud-profile-name | An existing Karbon Platform Services cloud profile
awsRegion | us-west-2 or monitoring.us-west-2.amazonaws.com | Valid AWS region name or CloudWatch endpoint fully qualified domain name
cloudwatchGroup | cloudwatch-group-name | Log group name
cloudwatchStream | cloudwatch-stream-name | Log stream name
filterSourceCode | "" | Log conversion code
Real-Time Log Monitoring, built into Karbon Platform Services, lets you view application and data pipeline log messages securely in real time.
Viewing the most recent log messages as they occur helps you see and troubleshoot application or data pipeline operations. Messages stream securely over an encrypted channel and are viewable only by authenticated clients (such as an existing user logged on to the Karbon Platform Services cloud platform).
The cloud management console shows the most recent log messages, up to 2 MB.
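The 2 MB limit behaves like a rolling window: the oldest messages are dropped as new ones arrive. The class below is an illustrative sketch of that behavior, not the console's actual implementation; only the 2 MB budget comes from the documentation.

```python
from collections import deque

class RollingLogBuffer:
    """Keep only the most recent messages within a byte budget,
    mimicking the console's 2 MB rolling log window."""

    def __init__(self, max_bytes: int = 2 * 1024 * 1024):
        self.max_bytes = max_bytes
        self._messages = deque()
        self._size = 0

    def append(self, message: str) -> None:
        # Add the new message, then evict from the oldest end
        # until the total size fits the budget again.
        self._messages.append(message)
        self._size += len(message)
        while self._size > self.max_bytes:
            self._size -= len(self._messages.popleft())

    def contents(self) -> list:
        return list(self._messages)
```

The consequence for troubleshooting: very chatty containers cycle through the window quickly, so older messages may already be gone when you open the console.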
View the most recent real-time logs for applications and data pipelines.
The Karbon Platform Services real-time log monitoring console is a terminal-style display that streams log entries as they occur.
After you do the steps in Displaying Real-Time Logs, a terminal-style window is displayed in one or more tabs. Each tab is streaming log messages and shows the most recent 2 MB of log messages from a container associated with your data pipeline function or application.
Your application or data pipeline function generates the log messages. That is, the console shows log messages that you have written into your application or function.
If your Karbon Platform Services Service Domain is connected and your application or function is not logging anything, the console might show a No Logs message. In this case, the message means that the application or function is idle and not generating any log messages.
You might see one or more error messages in the following cases. As a result, Real-Time Log Monitoring cannot retrieve any logs.
API key management simplifies authentication with the Karbon Platform Services API by letting you manage your keys from the Karbon Platform Services management console. This topic also describes API key guidelines.
As a user (infrastructure or project), you can manage up to two API keys through the Karbon Platform Services management console. After logging on to the management console, click your user name in the management console, then click Manage API Keys to create, disable, or delete these keys.
Read more about the Karbon Platform Services API at nutanix.dev. The For Karbon Platform Services Developers topic describes related information and links to resources for Karbon Platform Services developers.
Example API request using an API key.
After you create an API key, use it with your Karbon Platform Services API HTTPS Authorization requests. In the request, specify an Authorization header including Bearer and the key you generated and copied from the Karbon Platform Services management console.
For example, here is an example Node JS code snippet:
var https = require("https");

var options = {
  "method": "GET",
  "hostname": "karbon.nutanix.com",
  "port": null,
  "path": "/v1.0/applications",
  "headers": {
    // Replace API_key with the key generated in the management console.
    "authorization": "Bearer API_key"
  }
};

https.request(options, function (res) {
  var chunks = [];
  res.on("data", function (chunk) { chunks.push(chunk); });
  res.on("end", function () { console.log(Buffer.concat(chunks).toString()); });
}).end();
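For comparison, the same request could be made from Python's standard library. The hostname and path are taken from the snippet above; the api_key argument is a placeholder for a key you generate in the management console.

```python
import http.client

def auth_headers(api_key: str) -> dict:
    """Authorization header for Karbon Platform Services API requests."""
    return {"Authorization": "Bearer " + api_key}

def list_applications(api_key: str) -> bytes:
    """GET /v1.0/applications, mirroring the Node.js snippet above."""
    conn = http.client.HTTPSConnection("karbon.nutanix.com")
    conn.request("GET", "/v1.0/applications", headers=auth_headers(api_key))
    response = conn.getresponse()
    body = response.read()
    conn.close()
    return body
```

Any HTTP client works the same way: the only requirement is the Bearer token in the Authorization header.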
Create one or more API keys through the Karbon Platform Services management console.
The Alerts page and the Alerts Dashboard panel show any alerts triggered by Karbon Platform Services depending on your role.
To see alert details:
Click Filters to sort the alerts by:
An Alert link is available on each Apps & Data and Infrastructure page.
Information and links to resources for Karbon Platform Services developers.
This section contains information about Karbon Platform Services development.
The Karbon Platform Services public Github repository https://github.com/nutanix/karbon-platform-services includes sample application YAML files, instructions describing external client access to services, Karbon Platform Services kps CLI samples, and so on.
Nutanix has released the kps command line on its public Github repository. https://github.com/nutanix/karbon-platform-services/tree/master/cli describes how to install the command line and includes sample YAML files for use with applications, data pipelines, data sources, and functions.
Karbon Platform Services supports the Traefik open source router as the default Kubernetes Ingress controller and NGINX (ingress-nginx) as a Kubernetes Ingress controller. For full details, see https://github.com/nutanix/karbon-platform-services/tree/master/services/ingress.
Kafka is available as a data service through your Service Domain. Clients can manage, publish, and subscribe to topics using the native Kafka protocol. Data pipelines can use Kafka as a destination, and applications can use a Kafka client of choice to access the Kafka data service.
For full details, see https://github.com/nutanix/karbon-platform-services/tree/master/services/kafka. See also Kafka as a Service.
Enable a container application to run with elevated privileges.
For information about installing the kps command line, see For Karbon Platform Services Developers.
Karbon Platform Services enables you to develop an application which requires elevated privileges to run successfully. By using the kps command line, you can set your Service Domain to enable an application running in a container to run in privileged mode.
Configure your Service Domain to enable a container application to run with elevated privileges.
user@host$ kps config create-context context_name --email user_email_address --password password
user@host$ kps config get-contexts
user@host$ kps get svcdomain -o yaml
user@host$ kps update svcdomain svc_domain_name --set-privileged
Successfully updated Service Domain: svc_domain_name
The update sets privileged to true; confirm the change as follows.
user@host$ kps get svcdomain svc_domain_name -o yaml
kind: edge
name: svc_domain_name
connected: true
...
profile:
  privileged: true
  enableSSH: true
effectiveProfile:
  privileged: true
  enableSSH: true
In the effectiveProfile section, privileged set to true indicates that Nutanix Support has enabled this feature. If the setting is false, contact Nutanix Support to enable it. In this example, Nutanix has also enabled SSH access to this Service Domain (see Secure Shell (SSH) Access to Service Domains in the Karbon Platform Services Administration Guide).
After elevating privilege as described in Setting Privileged Mode, elevate the application privilege. This sample enables USB device access for an application running in a container on a privilege-elevated Service Domain.
Add a tag similar to the following in the Deployment section in your application YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: usb
  annotations:
    sherlock.nutanix.com/privileged: "true"
apiVersion: v1
kind: ConfigMap
metadata:
  name: usb-scripts
data:
  entrypoint.sh: |-
    apk add python3 py3-pip
    apk add libusb
    pip3 install pyusb
    echo Read from USB keyboard
    python3 read-usb-keyboard.py
  read-usb-keyboard.py: |-
    import usb.core
    import usb.util
    import time

    USB_IF = 0           # Interface
    USB_TIMEOUT = 10000  # Timeout in ms
    USB_VENDOR = 0x627
    USB_PRODUCT = 0x1

    # Find keyboard
    dev = usb.core.find(idVendor=USB_VENDOR, idProduct=USB_PRODUCT)
    endpoint = dev[0][(0, 0)][0]

    try:
        dev.detach_kernel_driver(USB_IF)
    except Exception as err:
        print(err)

    usb.util.claim_interface(dev, USB_IF)

    while True:
        try:
            control = dev.read(endpoint.bEndpointAddress, endpoint.wMaxPacketSize, USB_TIMEOUT)
            print(control)
        except Exception as err:
            print(err)
        time.sleep(0.01)
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: usb
  annotations:
    sherlock.nutanix.com/privileged: "true"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: usb
  template:
    metadata:
      labels:
        app: usb
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: alpine
        image: alpine
        volumeMounts:
        - name: scripts
          mountPath: /scripts
        command:
        - sh
        - -c
        - cd /scripts && ./entrypoint.sh
      volumes:
      - name: scripts
        configMap:
          name: usb-scripts
          defaultMode: 0766
Define environment variables for an individual Service Domain. After defining them, any Kubernetes app that specifies that Service Domain can access them as part of a container spec in the app YAML.
As an infrastructure administrator, you can set environment variables and associated values for each Service Domain, which are available for use in Kubernetes apps. For example:
As a project user, you can then specify these per-Service Domain variables set by the infra admin in your app. If you do not include the variable name in your app YAML file but you pass it as a variable to run in your app, Karbon Platform Services can inject this variable value.
How to set environment variables for a Service Domain.
user@host$ kps config create-context context_name --email user_email_address --password password
user@host$ kps config get-contexts
user@host$ kps get svcdomain -o yaml
For the Service Domain named my-svc-domain, for example, set the Service Domain environment variable. In this example, set a secret variable named SD_PASSWORD with a value of passwd1234.
user@host$ kps update svcdomain my-svc-domain --set-env '{"SD_PASSWORD":"passwd1234"}'
user@host$ kps get svcdomain my-svc-domain -o yaml
kind: edge
name: my-svc-domain
connected: true
...
env: '{"SD_PASSWORD": "passwd1234"}'
To update a variable, use the kps update svcdomain my-svc-domain --set-env '{"variable_name": "variable_value"}' command.
user@host$ kps update svcdomain svc_domain_name --unset-env
user@host$ kps update svcdomain svc_domain_name --unset-env '{"variable_name":"variable_value"}'
Example: how to use existing environment variables for a Service Domain in application YAML.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment
  labels:
    app: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: some.container.registry.com/myapp:1.7.9
        ports:
        - containerPort: 80
        env:
        - name: KAFKA_ENDPOINT
          value: some.kafka.endpoint
        # KAFKA_KEY is injected from the Service Domain environment;
        # the placeholder value is overridden at deployment.
        - name: KAFKA_KEY
          value: placeholder
        command:
        - sh
        - -c
        - "exec node index.js $(KAFKA_KEY)"
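Inside the container, an injected variable is read like any other environment variable. An illustrative Python entry point follows; KAFKA_KEY is the example variable from the YAML above, and the fallback default is an assumption for local testing, not platform behavior.

```python
import os

def kafka_key(default: str = "") -> str:
    """Read the KAFKA_KEY value injected by the Service Domain;
    return a default when it is not set (e.g. local development)."""
    return os.environ.get("KAFKA_KEY", default)
```

Because injection happens at the container spec level, the application code needs no Karbon Platform Services-specific logic to consume the value.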
Logically grouped Service Domain, data sources, and other items. Applying a category to an entity applies any values and attributes associated with the category to the entity.
Built-in Karbon Platform Services platform feature to publish data to a public cloud like Amazon Web Services or Google Cloud Platform. Requires a customer-owned secured public cloud account and configuration in the Karbon Platform Services management console.
Cloud provider service account (Amazon Web Services, Google Cloud Platform, and so on) where acquired data is transmitted for further processing.
Credentials and location of the Docker container registry hosted on a cloud provider service account. Can also be an existing cloud profile.
Path for data that includes input, processing, and output blocks. Enables you to process and transform captured data for further consumption or processing.
Data service such as Kafka as a Service or Real-Time Stream Processing as a Service.
A collection of sensors, gateways, or other input devices to associate with a node or Service Domain (previously known as an edge). Enables you to manage and monitor sensor integration and connectivity.
Any location (hospital, parking lot, retail store, oil rig, factory floor, and so on) where sensors or other input devices are installed and collecting data; minimally consists of a node (also known as an edge device) and a data source. Typical sensors measure (temperature, pressure, audio, and so on) or stream data (for example, an IP-connected video camera).
Code used to perform one or more tasks. A script can be as simple as text-processing code or it could be advanced code implementing artificial intelligence, using popular machine learning frameworks like TensorFlow.
User who creates and administers most aspects of Karbon Platform Services. The infrastructure administrator provisions physical resources, Service Domains, data sources and public cloud profiles. This administrator allocates these resources by creating projects and assigning project users to them.
A collection of infrastructure (Service Domain, data source, project users) plus code and data (Kubernetes apps, data pipelines, functions, run-time environments), created by the infrastructure administrator for use by project users.
User who views and uses projects created by the infrastructure administrator. This user can view everything that an infrastructure administrator assigns to a project. Project users can utilize physical resources, as well as deploy and monitor data pipelines and applications. This user has project-specific CRUD permissions: the project user can create, read, update, and delete assigned applications, scripts, data pipelines, and other project users.
A run-time environment is a command execution environment to run applications written in a particular language or associated with a Docker registry or file.
Intelligent Platform as a Service (PaaS) providing the Karbon Platform Services Service Domain infrastructure (consisting of a full software stack combined with a hardware device). It enables customers to deploy intelligent applications (powered by AI/artificial intelligence) to process and transform data ingested by sensors. This data can be published selectively to public clouds.
Browser-based console where you can manage the Karbon Platform Services platform and related infrastructure, depending on your role (infrastructure administrator or project user).
Software as a Service (SaaS)/Platform as a Service (PaaS) based management platform and cloud IoT services. Includes the Karbon Platform Services management console.
Copyright 2022 Nutanix, Inc. Nutanix, the Nutanix logo and all Nutanix product and service names mentioned herein are registered trademarks or trademarks of Nutanix, Inc. in the United States and other countries. All other brand names used herein are for identification purposes only and may be the trademarks of their respective holder(s) and no claim of rights is made therein.
Last updated: 2022-11-29
The Nutanix corporate web site includes up-to-date information about AOS software editions .
License Manager provides Licensing as a Service (LaaS) by integrating the Nutanix Support portal Licensing page with licensing management and agent software residing on Prism Element and Prism Central clusters. Unlike previous license schemes and work flows that were dependent on specific Nutanix software releases, License Manager is an independent software service residing in your cluster software. You can update it independently from Nutanix software such as AOS, Prism Central, Nutanix Cluster Check, and so on.
License Manager provides these features and benefits.
License Manager provides more than one way to manage your licenses, depending on your preference and cluster deployment. Except for dark-site clusters, where AOS and Prism Central clusters are not connected to the Internet, these options require that your cluster is connected to the Internet.
This feature is not available to dark site clusters, which are not connected to the Internet.
Nutanix recommends that you configure and enable 1-click licensing, which simplifies license and add-on management by integrating the licensing work flow into a single interface in the web console. Once you enable this feature, you can perform most licensing tasks from the web console. It is disabled by default. See Enable 1-Click Licensing and Manage Licenses with 1-Click Licensing.
Depending on the product license you purchase, you apply it through the Prism Element or Prism Central web console. See Prism Element Cluster Licensing or Prism Central License Categories.
This feature is not available to dark site clusters, which are not connected to the Internet.
After you license your cluster, the web console Licensing page lets you manage your license tier by upgrading, downgrading, or otherwise updating a license. If you have not enabled 1-click licensing and want to use 3-step licensing, the Licensing page includes an Update License button. See Manage Licenses with Update License (3-Step Licensing).
For cloud platform package licenses, see Manage Licenses for Dark Site Clusters (Cloud Platform License Key).
Use these procedures if your cluster is not connected to the Internet (that is, it is deployed at a dark site). To enter dark site cluster information at the Nutanix Support Portal, these procedures require a web browser on a machine that is connected to the Internet. If you have no Internet access at all, you cannot use these procedures.
This topic assumes that you already have user name and password credentials for the Nutanix Support portal.
After you log on to the Nutanix Support portal at https://portal.nutanix.com, click the Licenses link on the portal home page or the hamburger menu available from any page. Licenses provides access to these licensing landing pages:
The most current information about your licenses is available from the Prism Element (PE) or Prism Central (PC) web console. It is also available at the Nutanix Support Portal License page. You can view information about license levels, expiration dates, and any free license inventory (that is, unassigned available licenses).
From a PE Controller VM (CVM) or PC VM, you can also display license details associated with your dark site license key.
nutanix@cvm$ ~/ncc/bin/license_key_util show key=license_key
Licenses Allowed:
AOS
Add-Ons:
Software_Encryption
File
License Manager is independent software and is therefore updated independently from Nutanix software such as AOS, Prism Central, Nutanix Cluster Check, and so on.
When upgrades are available, you can upgrade License Manager through Life Cycle Manager (LCM). LCM enables Nutanix to regularly introduce licensing features, functions, and fixes. Upgrades through LCM help ensure that your cluster is running the latest licensing agent logic.
Nutanix has also designed License Manager so that cluster node restarts are not required.
The Life Cycle Manager Guide describes how to perform an inventory and, if a new version is available, how to update a component like License Manager.
Log on to the Prism Element or Prism Central web console and do one of the following.
Not all Nutanix software products require a license. Nutanix provides these products and their features without requiring you to do anything license-wise:
See these Nutanix corporate web sites for the latest information about software licensing and the latest available platforms.
Nutanix generally categorizes licenses as follows.
Licenses you can apply through the Prism Element web console include:
See Prism Element License Categories.
Software-only and third-party OEM platforms require you to download and install a Starter license file that you have purchased.
For all platforms: the AOS Pro and Ultimate license levels require you to install this license on your cluster. When you upgrade a license or add nodes or clusters to your environment, you must install the license.
If you enable 1-click licensing as described in Enable 1-Click Licensing, you can apply the license through the Prism Element web console without needing to log on to the support portal.
If you do not want to enable this feature, you can manage licenses as described in Manage Licenses with Update License (3-Step Licensing). This licensing workflow is also appropriate for dark-site (non-Internet connected or restricted connection) deployments.
The Nutanix corporate web site includes the latest information about these license models.
Capacity-based licensing is the Nutanix licensing model where you purchase and apply licenses based on cluster attributes. Cluster attributes include the number of raw CPU cores and total raw Flash drive capacity in tebibytes (TiBs). See AOS Capacity-Based Licensing.
You can add individual features known as add-ons to your existing license feature set. When Nutanix makes add-ons available, you can add them to your existing license, depending on the license level and add-ons available for that license.
See Add-On Licenses.
As the control console for managing multiple clusters, Prism (also known as Prism Central) consists of three license tiers. If you enable 1-click licensing as described in Enable 1-Click Licensing, you can apply licenses through the Prism web console without needing to log on to the support portal.
If you do not want to enable this feature, you can manage licenses as described in Manage Licenses with Update License (3-Step Licensing). This licensing workflow is also appropriate for dark-site (non-Internet connected or restricted connection) deployments.
Licenses that are available or that you can apply through the Prism web console include:
Default free Prism license, which enables you to register and manage multiple Prism Element clusters, upgrade Prism with 1-click through Life Cycle Manager (LCM), and monitor and troubleshoot managed clusters. You do not have to explicitly apply the Prism Starter license tier, which also never expires.
Includes all Prism Starter features plus customizable dashboards, capacity planning and analysis tools, advanced search capabilities, low-code/no-code automation, and reporting.
Prism Ultimate adds application discovery and monitoring, budgeting/chargeback and cost metering for resources, and a SQL Server monitoring content pack. Every Prism Central deployment includes a 90-day trial version of this license tier.
You can add individual features known as add-ons to your existing license feature set. When Nutanix makes add-ons available, you can add them to your existing license, depending on the license level and add-ons available for that license.
See Add-On Licenses.
Prism Central cluster-based licensing allows you to choose the level of data to collect from an individual Prism Element cluster managed by your Prism Central deployment. It also lets you choose the related features you can implement for a cluster depending on the applied Prism Central license tier. You can collect data even if metering types (capacity [cores] and nodes) are different for each node in a Prism Element cluster. See Prism Central Cluster-Based Licensing for more details.
AOS capacity-based licensing is the Nutanix licensing model where you purchase and apply licenses based on cluster attributes. Cluster attributes include the number of raw CPU cores and raw total Flash drive capacity in tebibytes (TiBs). This licensing model helps ensure a consistent licensing experience across different platforms running Nutanix software.
Each license stores the currently licensed capacity (CPU cores/Flash TiBs). If the capacity of the cluster increases, the web console informs you that additional licensing is required.
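The capacity comparison the web console performs can be sketched as follows. The function and field names are illustrative, not License Manager's actual schema; only the two metered attributes (raw CPU cores and raw Flash capacity in TiBs) come from the documentation.

```python
def additional_capacity_needed(licensed_cores: int, licensed_flash_tib: float,
                               cluster_cores: int, cluster_flash_tib: float):
    """Return the (cores, flash TiB) shortfall between what is licensed
    and what the cluster actually contains; (0, 0) means fully covered."""
    return (max(0, cluster_cores - licensed_cores),
            max(0.0, cluster_flash_tib - licensed_flash_tib))
```

For example, adding nodes that raise a cluster from 128 to 160 cores against a 128-core license leaves a 32-core shortfall, which is when the console reports that additional licensing is required.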
An upgrade license enables you to upgrade your existing unexpired lower license tier to a higher license tier without having to purchase completely new stand-alone licenses. For example, you can upgrade your lower tier AOS Pro license to an Ultimate Upgrade license, which allows you to now activate or use the available Ultimate features.
For each upgrade license, you must have an existing unexpired lower tier license to activate the upgrade license. As with future licenses, you cannot apply an upgrade license to a cluster until the Start Date has passed.
For more information about how to enable Calm in Prism Central, see Enabling Calm in the Prism Central Guide and Calm Administration and Operations Guide.
The Nutanix Calm license for Prism Central enables you to manage the number of VMs that are provisioned or managed by Nutanix Calm. Calm licenses are required only for VMs managed by Calm, running in either the Nutanix Enterprise Cloud or public clouds.
The most current status information about your Calm licenses is available from the Prism Central web console. It is also available at the Nutanix Support Portal.
Once Calm is enabled, Nutanix provides a free trial period of 60 days to use Calm. It might take up to 30 minutes to show that Calm is enabled and your trial period is started.
Approximately 30 minutes after you enable Nutanix Calm, the Calm licensing card and licensing details show the trial expiration date. In Use status is displayed as Yes . See also License Warnings in the Web Console.
The Nutanix Calm license VM count is a concurrent VM management limit and is linked to the application life cycle, from blueprint launch to application deletion. Consider the following:
Any VM you have created and are managing independently of Nutanix Calm is not part of the Calm license count. For example, you created a Windows guest OS VM through the Prism Element web console or with other tools (like a Cloudinit script). This VM is not part of a Calm blueprint.
However, if you import an existing VM into an existing Calm blueprint, that VM counts toward the Calm license. It counts until you delete the application deployment. If you stop the VM in this case and the application deployment is active, the VM is considered as under Calm management and part of the license count.
For license usage, Calm also counts each endpoint that you associate to a runbook as one VM under Calm management. This license usage count type is effective as of the Nutanix Calm 3.1 release.
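The counting rules above reduce to: a VM counts while its Calm application deployment is active, regardless of power state; independently created VMs do not count; and each runbook endpoint counts as one. A sketch with illustrative field names (not Calm's actual data model):

```python
def calm_license_count(vms: list, runbook_endpoints: list) -> int:
    """Count VMs under concurrent Calm management plus runbook
    endpoints. A VM counts while its application deployment is
    active, even if the VM is stopped."""
    managed = sum(1 for vm in vms
                  if vm.get("calm_managed") and vm.get("deployment_active"))
    return managed + len(runbook_endpoints)
```

Deleting the application deployment, not merely stopping the VM, is what releases a license back to the count.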
Individual products known as add-ons can be added to your existing license feature set. For more information, see Nutanix Platform Software Options.
Requirements and considerations for licensing. Consider the following before you attempt to manage your licenses.
Before attempting to install a license, ensure that you have created a cluster and logged into the Prism Element or Prism Central web console at least once. You must install a license after creating a cluster for which you purchased AOS Pro or AOS Ultimate licenses. If you are using Nutanix Cloud Clusters, you can reserve licenses for those clusters (see Reserve Licenses for Nutanix Cloud Clusters).
In general, before destroying a cluster with AOS licenses (Starter / Pro / Ultimate) for software-only and third-party hardware platforms, you must reclaim your licenses by unlicensing your cluster. Unlicensing a cluster returns your purchased licenses to your inventory.
You do not need to reclaim licenses in the following cases.
If a cluster includes nodes with different license tiers (for example, AOS Pro and AOS Ultimate), the cluster and each node in the cluster defaults to the feature set enabled by the lowest license tier. For example, if two nodes in the cluster have AOS Ultimate licenses and two nodes in the same cluster have AOS Pro licenses, all nodes effectively have AOS Pro licenses and access to that feature set only.
Attempts to access AOS Ultimate features in this case result in a license noncompliance warning in the web console.
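The lowest-tier rule above can be sketched as a small shell check. The tier ranking and the node list below are illustrative, not a Nutanix tool:

```shell
#!/bin/sh
# Effective cluster tier = lowest license tier present on any node
# (Starter < Pro < Ultimate). The node tiers below are hypothetical.
rank() {
  case "$1" in
    Starter)  echo 1 ;;
    Pro)      echo 2 ;;
    Ultimate) echo 3 ;;
  esac
}

effective="Ultimate"
for node_tier in Ultimate Pro Ultimate Pro; do
  # A lower-ranked node drags the whole cluster down to its tier.
  if [ "$(rank "$node_tier")" -lt "$(rank "$effective")" ]; then
    effective="$node_tier"
  fi
done
echo "effective tier: $effective"
```

For the example described above (two AOS Ultimate nodes and two AOS Pro nodes), this reports Pro as the effective tier.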
Because 1-click licensing is disabled by default, use these Update License procedures to license a cluster that is connected to the Internet. These procedures also apply if a cluster is unlicensed (newly deployed or previously unlicensed by you).
Use this procedure to license your cluster. This procedure also applies if a cluster is unlicensed (newly deployed or previously unlicensed by you). After you complete this procedure, you can enable 1-click licensing.
For this procedure, keep two browser windows open:
If you choose not to license any add-ons at this time, you can license them later. See Licensing An Add-On (Internet Connected).
Use this procedure after you have licensed your cluster and you did not license add-on features at the same time, or if you later purchased add-on features after you initially licensed your cluster. This procedure describes how to do this on a cluster connected to the Internet.
For add-ons that are based on capacity or other metrics (for example, Nutanix Files), specify the cluster disk capacity to allocate to Files. You can rebalance this add-on in the web console later if you need to make any changes, as described in Using Rebalance to Adjust Your Licenses.
Below each license tile is the 1-Click Licensing options menu, where you can select more licensing actions: Rebalance , Extend , or Unlicense . See 1-Click Licensing Actions.
This feature is not available to dark site clusters, which are not connected to the Internet. 1-click licensing simplifies licensing by integrating the licensing workflow into a single interface in the web console. To enable 1-click licensing, you must create an API key and download an SSL key from your My Nutanix dashboard. Because this feature is disabled by default, you must enable and configure it first.
If your cluster is unlicensed, make sure you have licensed it first, then enable 1-click licensing. See Licensing a Cluster (Internet Connected).
1-click licensing simplifies licensing by integrating the licensing workflow into a single control plane in the Prism Element (PE) and Prism Central (PC) web consoles. Once you enable 1-click licensing, you can perform most licensing tasks from the web console Licensing settings panel without needing to explicitly log on to the Nutanix Support Portal.
1-click license management requires you to create an API key and download an SSL key from your My Nutanix dashboard to secure communications between the Nutanix Support Portal and the PE or PC web console.
With the API and SSL keys associated with your My Nutanix account, the web console can communicate with the Nutanix Support Portal to detect any changes or updates to your cluster license status.
After enabling 1-click licensing, you can also disable it.
This feature is not available to dark site clusters, which are not connected to the Internet. To enable 1-click licensing, first create a Licensing API key and download an SSL public key. After you do this, register both through the Prism Element or Prism Central web console. You might need to turn off any pop-up blockers in your browser to display dialog boxes.
The Licensing page now shows two buttons: Disable 1-Click Licensing , which indicates 1-click licensing is enabled, and License With Portal , which lets you upgrade license tiers and add-ons by using the manual Nutanix Support Portal 3-step licensing workflow.
Below each license tile is the 1-Click Licensing options menu, where you can select more licensing actions: Rebalance , Extend , or Unlicense . See 1-Click Licensing Actions.
After enabling 1-click licensing, you can also disable it.
This feature is not available to dark site clusters, which are not connected to the Internet. This procedure disables 1-click licensing through the web console. For security reasons, you cannot reuse your previously created API key after disabling 1-click licensing.
You might need to disable the 1-click licensing connection associated with the API and public keys. If you disable the connection as described here, you can enable it again by obtaining a new API key as described in Creating an API Key.
After you license your cluster, 1-click licensing helps simplify licensing by integrating the licensing workflow into a single interface in the web console. Because this feature is disabled by default, enable and configure it first.
This feature is not available to dark site clusters, which are not connected to the Internet.
Once you configure this feature, you can perform most tasks from the Prism web console without needing to explicitly log on to the Nutanix Support Portal.
When you open Licensing from the Prism Element or Prism Central web console for a licensed cluster, each license tile includes a drop-down menu so you can manage your licenses without leaving the web console. 1-click licensing communicates with the Nutanix Support Portal to detect any changes or updates to your cluster license status.
If you want to change your license tier by upgrading or downgrading your license, use the procedures in Upgrading or Downgrading (Changing Your License Tier).
On the Licensing page in the Prism Element or Prism Central web console for a licensed cluster, each license tile includes a drop-down menu so you can manage your licenses directly from the web console.
If you have made changes to your cluster, choose Rebalance to help ensure your available licenses (including licensed add-ons) are applied correctly. Use Rebalance if you:
Choose Extend to extend the term of current expiring term-based licenses if you have purchased one or more extensions.
If your license has expired, you have to license the cluster as if it were unlicensed. See License a Cluster and Add-On.
Choose Unlicense to unlicense a cluster (including licensed add-ons) in one click. This action removes the licenses from a cluster and returns them to your license inventory. This action is sometimes referred to as reclaiming licenses.
You do need to reclaim AOS licenses (Starter / Pro / Ultimate) for software-only and third-party hardware platforms.
Choose Extend to extend the term of current expiring term-based licenses if you have purchased one or more extensions. 1-Click Licensing Actions describes Extend and other actions.
If your licenses have expired, you must license the cluster again as described in License a Cluster and Add-On.
Choose Unlicense to unlicense your cluster (sometimes referred to as reclaiming licenses). 1-Click Licensing Actions describes when to choose Unlicense and other actions.
Perform this task for each cluster that you want to unlicense. If you unlicense Prism Central (PC), the default license type Prism Central Starter is applied as a result. Registered clusters other than the PC cluster remain licensed.
Use this procedure when you have purchased an upgrade license. An upgrade license enables you to upgrade your existing unexpired lower license tier to a higher license tier without having to purchase completely new stand-alone licenses.
Use License with Portal or Update License in the web console to change your license tier if you have purchased completely new stand-alone licenses. For example, to upgrade from AOS Pro to AOS Ultimate or downgrade Prism Ultimate to Prism Pro. This procedure does not apply if you purchased an upgrade license, which enables you to upgrade your existing unexpired lower license tier to a higher license tier.
For this procedure, keep two browser windows open:
After you license your cluster and you do not plan to enable 1-click licensing, use Update License in the web console to change your license tier. For example, to upgrade from AOS Pro to AOS Ultimate or downgrade Prism Ultimate to Prism Pro.
After you license your cluster, the web console Licensing page allows you to manage your license tier by upgrading, downgrading, or otherwise updating a license. If you have not enabled 1-click licensing and want to use 3-step licensing, the Licensing page includes an Update License button.
3-Step Licensing refers to the mostly manual licensing procedures. The following procedures describe 3-Step Licensing in more detail:
For dark sites where a cluster is not connected to the Internet: In the PE or PC web console, copy the dark site cluster summary information and then enter it at the Nutanix Support Portal Licensing page. See Manage Licenses for Dark Site Clusters (3-Step Licensing).
If you did not enable 1-click licensing, when you open Licensing from the Prism Element or Prism Central web console for a licensed cluster, you can use the Update License button to manage licenses.
If you have made changes to your cluster, download and apply a new licensing summary file (LSF) to help ensure your available licenses (including licensed add-ons) are applied correctly. Rebalance your cluster if you:
If you have purchased one or more license extensions, download and apply a new LSF to extend the term of current expiring term-based licenses.
If you have purchased an upgrade license, apply it to upgrade your existing unexpired lower license tier to a higher license tier without having to purchase completely new stand-alone licenses.
Unlicense a cluster (including licensed add-ons) to remove the licenses from the cluster and return them to your license inventory. This action is sometimes referred to as reclaiming licenses. You also download and apply a new LSF in this case.
You do need to reclaim AOS licenses (Starter / Pro / Ultimate) for software-only and third-party hardware platforms.
If you have purchased one or more license extensions, download and apply a new license summary file to extend the term of current expiring term-based licenses. This procedure includes a step for expired licenses.
Use this procedure if you have not enabled 1-click licensing. See Licensing Actions by Using the Update License Button.
Use this procedure when you have purchased an upgrade license. An upgrade license enables you to upgrade your existing unexpired lower license tier to a higher license tier without having to purchase completely new stand-alone licenses.
Use License with Portal or Update License in the web console to change your license tier if you have purchased completely new stand-alone licenses. For example, to upgrade from AOS Pro to AOS Ultimate or downgrade Prism Ultimate to Prism Pro. This procedure does not apply if you purchased an upgrade license, which enables you to upgrade your existing unexpired lower license tier to a higher license tier.
For this procedure, keep two browser windows open:
Use the procedure described in Licensing a Cluster (Dark Site Legacy License Key) if:
If you purchased Nutanix cloud platform packages, see License a Cluster and Add-On (Dark Site Cloud Platform License Key).
Use this procedure to generate and apply a legacy license key to a cluster that is not connected to the Internet (that is, a dark site).
ABCDE-FGH2K-L3OPQ-RS4UV-WX5ZA-BC7EF-GH9KL
When you apply the license key to the cluster, the license key enables the license tiers and add-ons that you select in this procedure. After you apply the key to your cluster, your license details show the AOS or Prism Central (PC) license tier, license class, purchased add-ons, and so on.
nutanix@cvm$ ~/ncc/bin/license_key_util apply key=license_key cluster=cluster_uuid
Licenses Applied:
AOS
Add-Ons:
Software_Encryption
File
nutanix@cvm$ ~/ncc/bin/license_key_util show key=license_key cluster=cluster_uuid
Licenses Allowed:
AOS
Add-Ons:
Software_Encryption
File
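Before running license_key_util, you can sanity-check that a pasted key matches the dashed format of the sample key shown above. The seven-groups-of-five pattern is inferred from that sample, so treat the regex as an assumption:

```shell
#!/bin/sh
# Validate that a dark-site license key looks like the sample format:
# seven dash-separated groups of five uppercase letters/digits.
key="ABCDE-FGH2K-L3OPQ-RS4UV-WX5ZA-BC7EF-GH9KL"

if printf '%s\n' "$key" | grep -Eq '^([A-Z0-9]{5}-){6}[A-Z0-9]{5}$'; then
  echo "key format OK"
else
  echo "key format invalid" >&2
  exit 1
fi
```

This catches truncated or reformatted keys (for example, a key pasted with line breaks) before you attempt to apply them on the CVM.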
Use this procedure after you have licensed your cluster and you did not license add-on features at the same time, or if you later purchased add-on features after you initially licensed your cluster.
For add-ons that are based on capacity or other metrics (for example, Nutanix Files), specify the cluster disk capacity to allocate to Files. You can rebalance this add-on in the web console later if you need to make any changes.
ABCDE-FGH2K-L3OPQ-RS4UV-WX5ZA-BC7EF-GH9KL
When you apply the license key to the cluster, the license key enables the license tiers and add-ons that you select in this procedure. After you apply the key to your cluster, your license details show the AOS or Prism Central (PC) license tier, license class, purchased add-ons, and so on.
nutanix@cvm$ ~/ncc/bin/license_key_util apply key=license_key cluster=cluster_uuid
Licenses Applied:
AOS
Add-Ons:
Software_Encryption
File
nutanix@cvm$ ~/ncc/bin/license_key_util show key=license_key cluster=cluster_uuid
Licenses Allowed:
AOS
Add-Ons:
Software_Encryption
File
Use these procedures if your cluster is not connected to the Internet (that is, your cluster is deployed at a dark site) and you plan to apply a legacy license key. To enter dark site cluster information at the Nutanix Support Portal and generate a legacy license key, use a web browser from a machine with an Internet connection.
To adhere to regulatory, security, or compliance policies at the customer site, dark-site AOS and Prism Central clusters are not connected to the Internet. 1-click licensing and License with Portal licensing actions are not available in this case.
This topic describes considerations and related actions or workflows that apply when you license your dark site Prism Element or Prism Central cluster with a license key. A key point to remember is that each license key is associated with a unique cluster UUID. The generated key cannot be used with a different cluster (that is, with a different cluster UUID).
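Because each key is bound to one cluster UUID, you need the UUID on hand when generating the key at the Support portal. The sketch below pulls it from captured `ncli cluster info` output; the sample text is illustrative and only the parsing is exercised here, so confirm the field name on your own cluster:

```shell
#!/bin/sh
# Extract the cluster UUID from (captured, illustrative) `ncli cluster info`
# output so it can be passed as cluster=<uuid> to license_key_util.
sample_output=$(cat <<'EOF'
    Cluster Name              : prod-cluster
    Cluster Uuid              : 0005a1b2-c3d4-e5f6-a7b8-c9d0e1f2a3b4
EOF
)

# Split each line on "colon plus spaces" and keep the value field.
uuid=$(printf '%s\n' "$sample_output" | awk -F': *' '/Cluster Uuid/ {print $2}')
echo "cluster=$uuid"
```

On a live CVM you would replace the here-document with the real command output; the cluster name and UUID above are placeholders.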
You need to generate a license key at the Nutanix Support portal and apply it to your cluster for the following scenarios.
No. Each license key is associated with a unique cluster UUID. If you have made changes to your cluster, you do not need to rebalance licenses across your cluster. A cluster rebalance or cluster change is defined as one or more of the following scenarios:
When you use the dark site license key method to apply a key to your cluster, there is no requirement to reclaim licenses. Each license key is associated with a unique cluster UUID, so there is nothing to reclaim.
No. Each license key is associated with a unique cluster UUID, so there is nothing to reclaim. When you create a new cluster, however, you do need to generate and apply a new license key.
Yes. If your cluster is running the latest version of Nutanix Cluster Check (NCC) and AOS or Prism Central versions compatible with this NCC version, you can switch to using a license key when any cluster attribute changes. That switch to a key includes clusters where you have upgraded NCC and AOS/Prism Central to versions that support license keys. Nutanix recommends the following if you want to use license keys:
If you have purchased one or more license extensions to extend the term of current expiring term-based legacy licenses, use this procedure to update your cluster license.
ABCDE-FGH2K-L3OPQ-RS4UV-WX5ZA-BC7EF-GH9KL
When you apply the license key to the cluster, the license key enables the license tiers and add-ons that you select in this procedure. After you apply the key to your cluster, your license details show the AOS or Prism Central (PC) license tier, license class, purchased add-ons, and so on.
nutanix@cvm$ ~/ncc/bin/license_key_util apply key=license_key cluster=cluster_uuid
Licenses Applied:
AOS
Add-Ons:
Software_Encryption
File
nutanix@cvm$ ~/ncc/bin/license_key_util show key=license_key cluster=cluster_uuid
Licenses Allowed:
AOS
Add-Ons:
Software_Encryption
File
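After applying a key, `license_key_util show` lists what the key allows. The sketch below scans captured `show` output for the add-ons you expect; the sample text mirrors the listing above, and the add-on names are taken from it:

```shell
#!/bin/sh
# Check that expected add-ons appear in captured `license_key_util show`
# output (the sample text mirrors the example listing in the doc).
show_output=$(cat <<'EOF'
Licenses Allowed:
     AOS
     Add-Ons:
          Software_Encryption
          File
EOF
)

for addon in Software_Encryption File; do
  if printf '%s\n' "$show_output" | grep -qw "$addon"; then
    echo "$addon: allowed"
  else
    echo "$addon: MISSING" >&2
  fi
done
```

A check like this is useful after an extension, to confirm the new key still covers every add-on the cluster had before.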
Use these procedures if your cluster is not connected to the Internet (that is, your cluster is deployed at a dark site) and you have purchased an upgrade license.
ABCDE-FGH2K-L3OPQ-RS4UV-WX5ZA-BC7EF-GH9KL
When you apply the license key to the cluster, the license key enables the license tiers and add-ons that you select in this procedure. After you apply the key to your cluster, your license details show the AOS or Prism Central (PC) license tier, license class, purchased add-ons, and so on.
nutanix@cvm$ ~/ncc/bin/license_key_util apply key=license_key cluster=cluster_uuid
Licenses Applied:
AOS
Add-Ons:
Software_Encryption
File
nutanix@cvm$ ~/ncc/bin/license_key_util show key=license_key cluster=cluster_uuid
Licenses Allowed:
AOS
Add-Ons:
Software_Encryption
File
Use this procedure to change your license tier.
ABCDE-FGH2K-L3OPQ-RS4UV-WX5ZA-BC7EF-GH9KL
When you apply the license key to the cluster, the license key enables the license tiers and add-ons that you select in this procedure. After you apply the key to your cluster, your license details show the AOS or Prism Central (PC) license tier, license class, purchased add-ons, and so on.
nutanix@cvm$ ~/ncc/bin/license_key_util apply key=license_key cluster=cluster_uuid
Licenses Applied:
AOS
Add-Ons:
Software_Encryption
File
nutanix@cvm$ ~/ncc/bin/license_key_util show key=license_key cluster=cluster_uuid
Licenses Allowed:
AOS
Add-Ons:
Software_Encryption
File
To apply licenses for your cloud platform packages, you must use the procedure described in Licensing a Cluster (Dark Site Cloud Platform License Key) to generate and apply a cloud platform license key. This procedure requires you to collect the PC UUID and the UUID of each cluster connected to the PC that you want to license. You enter each UUID at the Nutanix Support portal. To use this procedure, see the following requirements.
In summary, the procedure is as follows.
Use this procedure to generate and apply a cloud platform license key to a cluster that is not connected to the Internet (that is, a dark site).
When you apply the license key to the cluster, the license key enables the license tiers and add-ons that you select in this procedure. After you apply the key to your cluster, your license details show the NCI or NCM license tier, license class, purchased add-ons, and so on.
nutanix@cvm$ ~/ncc/bin/license_key_util apply key=license_key cluster=cluster_uuid
Licenses Applied:
NCI Pro CORES
NCM Pro CORES
Add-Ons:
NCI Security CORES
Nutanix Database Service as Addon CORES
NCI Nutanix Kubernetes Engine CORES
nutanix@cvm$ ~/ncc/bin/license_key_util apply key=license_key cluster=cluster_uuid
Licenses Applied:
NCI Pro CORES
NCM Pro CORES
Add-Ons:
NCI Security CORES
Nutanix Database Service as Addon CORES
NCI Nutanix Kubernetes Engine CORES
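The same cloud platform key is applied against the Prism Central UUID and the UUID of each connected cluster you entered at the Support portal. The dry-run sketch below only prints the `license_key_util` commands to run on each CVM; the key and UUIDs are placeholders and nothing is applied:

```shell
#!/bin/sh
# Dry run: print the apply command for the PC UUID and each cluster UUID.
# All values below are placeholders, not real keys or UUIDs.
license_key="ABCDE-FGH2K-L3OPQ-RS4UV-WX5ZA-BC7EF-GH9KL"
pc_uuid="00000000-0000-0000-0000-0000000000aa"
cluster_uuids="00000000-0000-0000-0000-0000000000bb 00000000-0000-0000-0000-0000000000cc"

for uuid in $pc_uuid $cluster_uuids; do
  echo "~/ncc/bin/license_key_util apply key=$license_key cluster=$uuid"
done
```

Printing the commands first makes it easy to confirm that every UUID you licensed at the portal is covered before you run the real `apply` on each system.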
Use this procedure after you have licensed your cluster and you did not license add-on features at the same time, or if you later purchased add-on features after you initially licensed your cluster.
When you apply the license key to the cluster, the license key enables the license tiers and add-ons that you select in this procedure. After you apply the key to your cluster, your license details show the NCI or NCM license tier, license class, purchased add-ons, and so on.
nutanix@cvm$ ~/ncc/bin/license_key_util apply key=license_key cluster=cluster_uuid
Licenses Applied:
NCI Pro CORES
NCM Pro CORES
Add-Ons:
NCI Security CORES
Nutanix Database Service as Addon CORES
NCI Nutanix Kubernetes Engine CORES
nutanix@cvm$ ~/ncc/bin/license_key_util apply key=license_key cluster=cluster_uuid
Licenses Applied:
NCI Pro CORES
NCM Pro CORES
Add-Ons:
NCI Security CORES
Nutanix Database Service as Addon CORES
NCI Nutanix Kubernetes Engine CORES
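All the cloud platform entries above are core-metered (CORES). As a quick sanity sketch, you can count the core-metered lines in captured output; the sample text reproduces the listing above:

```shell
#!/bin/sh
# Count core-metered entries in captured license output. The sample mirrors
# the NCI/NCM listing above: 2 tiers + 3 add-ons, all metered in cores.
out=$(cat <<'EOF'
Licenses Applied:
     NCI Pro CORES
     NCM Pro CORES
     Add-Ons:
          NCI Security CORES
          Nutanix Database Service as Addon CORES
          NCI Nutanix Kubernetes Engine CORES
EOF
)

cores_entries=$(printf '%s\n' "$out" | grep -c 'CORES$')
echo "core-metered entries: $cores_entries"
```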
Use these procedures if your cluster is not connected to the Internet (that is, your cluster is deployed at a dark site) and you plan to apply a cloud platform license key. To enter dark site cluster information at the Nutanix Support Portal and generate a cloud platform license key, use a web browser from a machine with an Internet connection.
To adhere to regulatory, security, or compliance policies at the customer site, dark-site AOS and Prism Central clusters are not connected to the Internet. The License with Portal licensing action is not available in this case.
This topic describes considerations and related actions or workflows that apply when you license your dark site Prism Element or Prism Central cluster with a license key. A key point to remember is that each license key is associated with a unique cluster UUID. The generated key cannot be used with a different cluster (that is, with a different cluster UUID).
You must generate a license key at the Nutanix Support portal and apply it to your cluster for the following scenarios.
No. Each license key is associated with a unique cluster UUID. If you have made changes to your cluster, you do not need to rebalance licenses across your cluster. A cluster rebalance or cluster change is defined as one or more of the following scenarios:
When you use the dark site license key method to apply a key to your cluster, there is no requirement to reclaim licenses. Each license key is associated with a unique cluster UUID, so there is nothing to reclaim.
No. Each license key is associated with a unique cluster UUID, so there is nothing to reclaim. When you create a new cluster, however, you do need to generate and apply a new license key.
Yes. If your cluster is running the latest version of Nutanix Cluster Check (NCC) and AOS or Prism Central versions compatible with this NCC version, you can switch to using a license key when any cluster attribute changes. That switch to a key includes clusters where you have upgraded NCC and AOS/Prism Central to versions that support license keys. Nutanix recommends the following if you want to use license keys:
If you have purchased one or more license extensions to extend the term of current expiring term-based cloud platform licenses, use this procedure to update your cluster license.
When you apply the license key to the cluster, the license key enables the license tiers and add-ons that you select in this procedure. After you apply the key to your cluster, your license details show the NCI or NCM license tier, license class, purchased add-ons, and so on.
nutanix@cvm$ ~/ncc/bin/license_key_util apply key=license_key cluster=cluster_uuid
Licenses Applied:
NCI Pro CORES
NCM Pro CORES
Add-Ons:
NCI Security CORES
Nutanix Database Service as Addon CORES
NCI Nutanix Kubernetes Engine CORES
nutanix@cvm$ ~/ncc/bin/license_key_util apply key=license_key cluster=cluster_uuid
Licenses Applied:
NCI Pro CORES
NCM Pro CORES
Add-Ons:
NCI Security CORES
Nutanix Database Service as Addon CORES
NCI Nutanix Kubernetes Engine CORES
Use this procedure to change your license tier.
When you apply the license key to the cluster, the license key enables the license tiers and add-ons that you select in this procedure. After you apply the key to your cluster, your license details show the NCI or NCM license tier, license class, purchased add-ons, and so on.
nutanix@cvm$ ~/ncc/bin/license_key_util apply key=license_key cluster=cluster_uuid
Licenses Applied:
NCI Pro CORES
NCM Pro CORES
Add-Ons:
NCI Security CORES
Nutanix Database Service as Addon CORES
NCI Nutanix Kubernetes Engine CORES
nutanix@cvm$ ~/ncc/bin/license_key_util apply key=license_key cluster=cluster_uuid
Licenses Applied:
NCI Pro CORES
NCM Pro CORES
Add-Ons:
NCI Security CORES
Nutanix Database Service as Addon CORES
NCI Nutanix Kubernetes Engine CORES
With a bring-your-own-license (BYOL) experience, you can leverage your existing on-prem licenses for Nutanix Cloud Clusters. You can reserve your licenses, partially or entirely, for Nutanix Cloud Clusters and specify the capacity allocation for cloud deployments. The licenses reserved for Nutanix Cloud Clusters are automatically applied to Cloud Clusters to cover their configuration and usage.
You can unreserve the licenses when you do not need them for Nutanix Cloud Clusters and add them back to the on-prem licenses pool. You can use the unreserved capacity for your on-prem clusters.
You can better use your licenses and control your expenses by tracking reserved license consumption and making appropriate adjustments to capacity usage. The reserved licenses are consumed first; when no more reserved licenses are available, your chosen Pay As You Go or Cloud Commit payment plan is used.
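The consumption order described above (reservation first, then the Pay As You Go or Cloud Commit plan) amounts to simple overflow arithmetic. A toy sketch with illustrative numbers:

```shell
#!/bin/sh
# Reserved licenses are consumed first; only usage beyond the reservation
# falls to the Pay As You Go / Cloud Commit plan. Numbers are illustrative.
reserved=100   # capacity reserved for Nutanix Cloud Clusters
usage=130      # current cloud cluster consumption

covered=$(( usage < reserved ? usage : reserved ))
overflow=$(( usage > reserved ? usage - reserved : 0 ))

echo "covered by reservation: $covered"
echo "billed to payment plan: $overflow"
```

With these illustrative numbers, 100 units are covered by the reservation and 30 fall to the payment plan, which is the adjustment signal to watch when tracking reserved license consumption.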
To reserve licenses for Nutanix Cloud Clusters, do the following:
The Update reservation for Nutanix Cloud Clusters (NC2) option becomes available only after you select at least one license for reservation.
The available licenses appear in the Total Available to Reserve column.
To update the existing license reservations for Nutanix Cloud Clusters, do the following:
The Update reservation for Nutanix Cloud Clusters (NC2) button becomes available only after you select at least one license for reservation.
The available licenses appear in the Total Available to Reserve column.
Cluster-based licensing allows you to choose the Prism Central (PC) features you use and the level of data to collect from a Prism Element (PE) cluster managed by your PC deployment.
With cluster-based licensing, you decide which PC license tier features to use to manage a specific cluster. For example, you might need features provided by a Prism Ultimate license to manage PE cluster A, while you only need Prism Pro features to manage PE cluster B. You can also designate a cluster (PE Cluster C) as unlicensed (no cluster-based license is applied). For example, you can leave a non-production or development cluster as unlicensed.
All nodes in a cluster must either be licensed with a cluster-based license or be unlicensed (that is, have no cluster-based license applied). All nodes in each cluster must also be at the same cluster-based licensing tier. You cannot mix license tiers in the same cluster (for example, unlicensed and Prism Ultimate licensed nodes cannot reside in the same cluster).
Only Prism Starter features are available in an unlicensed cluster. If you try to access a Prism Pro or Prism Ultimate feature for an unlicensed cluster (for example, Capacity Runway), PC displays a Feature Disabled type message. Any unlicensed cluster data is filtered out from any reporting.
Cluster-based licensing also lets you select different metering types (capacity [cores] and nodes, depending on the cluster licensing) for each node in a PE cluster. Also, if your PC deployment includes Nutanix Calm or Flow, you can choose Calm Core or Flow Core licensing as the metering type.
See the Prism Central release notes or supplement where PC Cluster-based licensing is introduced. Also read Cluster-based Licensing Requirements and Limitations.
The Prism Central (PC) web console Licensing page allows you to apply your cluster-based licenses with the Apply licenses button and by using 3-Step Licensing. 3-Step Licensing is a mostly manual licensing procedure. After the licenses are applied, manage them with the Update License button.
Before you begin, make sure AOS clusters registered with and managed by PC are licensed with AOS licenses.
At the Prism Central web console Licensing page, use Update License to change the cluster-based license tier of one or more licensed clusters.
At the Nutanix Support portal, use the Advanced Licensing action to change the AOS cluster license metering type to node or core licensing.
Prism Central supports licensing for a set of cloud platform packages that deliver broad solutions to customers with simple and comprehensive bundles. The packages are:
You can add licenses or convert existing licenses to the new packages. For more information about the cloud platform packages, see Nutanix Cloud Platform Software Options.
Cloud platform licensing follows the same workflow as cluster-based licensing (see Prism Central Cluster-Based Licensing). This chapter provides supplemental information about licensing your clusters for the cloud platform packages.
You can view licensing information from the Nutanix Support Portal for your entire account, Prism Central (PC) for all clusters registered to that PC instance, and Prism Element for that cluster. However, only the Support Portal and PC are used to apply or convert cloud platform licenses.
The Licensing view on the Support Portal is extended to include cloud platform packages. Select Licenses from the collapse ("hamburger") menu to display the Licensing view. The Summary page includes widgets for any purchased licenses including cloud platform packages. The cloud platform package names also appear in the relevant fields in other licensing pages, for example the License Tier column in the Licensed Clusters page.
The Licensing view in PC is extended to include cloud platform packages. Select Licensing in the Settings panel to display the Licensing view. The View All Licenses tab includes sections for any applied licenses including cloud platform packages.
Clicking View license details displays the details page, which also now includes applied cloud platform package license information.
The View all clusters tab and cluster license details pages are also extended to include cloud platform packages.
Use this procedure to apply cloud platform licenses to your cluster.
Use this procedure to convert your existing licenses to cloud platform licenses. You can convert your existing licenses whether you have applied them to a cluster or they are unused.
After you upload the file, the information is validated before you continue. If a configuration issue is detected, an appropriate message appears. If all checks pass, the "Convert your licenses" screen appears.
The "Select licenses to convert" page displays a table of applied licenses by cluster. The table also lists what each license will be converted to during the process. The table varies slightly depending on which workflow (cluster or unused) you are using. See Conversion Requirements and Mapping for a list of all the conversion mappings.
The conversion table reappears. Again, the table varies slightly depending on which workflow (cluster or unused) you are using.
This topic contains requirements and limitations for you to consider before you convert your existing licenses to the new cloud platform packages. This topic also provides a conversion table of old (current) to new (cloud platform) licenses.
Review the following requirements and limitations before you convert licenses to the new cloud platform packages.
The following is a conversion table of old (current) to new (cloud platform) licenses.
Package | Old License | New License |
---|---|---|
Nutanix Cloud Infrastructure (NCI) | AOS Starter | NCI Starter |
 | AOS Pro | NCI Pro |
 | AOS Ultimate | NCI Ultimate |
 | AOS Pro + Encryption | NCI Pro + Security |
 | AOS Pro + Flow | NCI Pro + Security |
 | AOS Pro + Flow + Encryption | NCI Pro + Security |
 | AOS Pro + Adv DR | NCI Pro + Adv DR |
 | AOS Starter + Flow | NCI Pro + Security |
 | AOS Ultimate + Flow | NCI Ultimate |
 | AOS Pro + Adv DR + Encryption | NCI Pro + Adv DR + Security |
 | AOS Pro + Adv Rep + Flow | NCI Pro + Adv DR + Security |
 | AOS Pro + Adv Rep + Encryption + Flow | NCI Pro + Adv DR + Security |
 | AOS Starter + Files (for AOS) | NCI Starter + NUS Pro |
 | AOS Pro + Files (for AOS) | NCI Pro + NUS Pro |
 | AOS Ultimate + Files (for AOS) | NCI Ultimate + NUS Pro |
 | AOS Starter + Objects (for AOS) | NCI Starter + NUS Starter |
 | AOS Pro + Objects (for AOS) | NCI Pro + NUS Starter |
 | AOS Ultimate + Objects (for AOS) | NCI Ultimate + NUS Starter |
 | AOS Starter + Era add-on | NCI Starter + NDB add-on |
 | AOS Pro + Era add-on | NCI Pro + NDB add-on |
 | AOS Ultimate + Era add-on | NCI Ultimate + NDB add-on |
Nutanix Cloud Manager (NCM) | Prism Pro | NCM Starter |
 | Prism Ultimate | NCM Pro |
 | Calm Cores | NCM Ultimate |
 | Prism Pro + Calm Cores | NCM Ultimate |
 | Prism Ultimate + Calm Cores | NCM Ultimate |
 | Pro Special | (n/a) |
Nutanix Database Service (NDB) | Era Platform | NDB Platform |
 | Era Cores | NDB add-on |
 | Era vCPU | NDB add-on |
Nutanix Unified Storage (NUS) | Objects Dedicated | NUS Starter |
 | Objects (for AOS) | NUS Starter |
 | Objects Dedicated + Encryption | NUS Starter + Security |
 | Objects Dedicated + Adv DR | NUS Starter + Adv DR |
 | Files Dedicated | NUS Pro |
 | Files Dedicated + Object Dedicated | NUS Pro |
 | Files (for AOS) + Objects (for AOS) | NUS Pro |
 | Files Dedicated + Adv DR | NUS Pro + Adv DR |
 | Files Dedicated + Encryption | NUS Pro + Security |
 | Files (for AOS) | NUS Pro |
Nutanix End User Computing | (n/a) | (no conversion for VDI, Frame, or ROBO) |
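For scripted environments, the table above is a straightforward lookup. The sketch below covers a few representative rows only; the `convert_license` function is illustrative, not a Nutanix tool.

```shell
# Illustrative lookup for a few rows of the license conversion table.
# The names come from the table above; this is not an official utility.
convert_license() {
  case "$1" in
    "AOS Starter")     echo "NCI Starter" ;;
    "AOS Pro")         echo "NCI Pro" ;;
    "AOS Ultimate")    echo "NCI Ultimate" ;;
    "Prism Pro")       echo "NCM Starter" ;;
    "Era Platform")    echo "NDB Platform" ;;
    "Files Dedicated") echo "NUS Pro" ;;
    *)                 echo "no direct conversion listed" ;;
  esac
}

convert_license "AOS Pro"   # prints: NCI Pro
```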
On the Licenses > License Inventory page on the Nutanix Support portal, you can label your licenses with a tag to conveniently group them.
Tags help provide more granularity and ease of use to your license management.
For example, you can apply a tag to licenses to group them according to:
When you tag one or more licenses, you can then:
Label your licenses with a tag to conveniently group them. You can add multiple tags to a single license.
Most license warnings in the web console are related to license violations or licenses that are about to expire or have expired. In most cases, the resolution is to extend or purchase licenses.
This procedure describes how to disable the Prism Pro or Prism Ultimate tier and remove the license violation message that appears in Prism Central when Prism Pro features are enabled without a valid license.
A message box appears stating that the operation is reversible. Click the Disable Ultimate Trial button. This immediately logs you out of PC and returns you to the logon page. When you log back on, the features are disabled.
If your current license tier is Prism Pro Trial and a Prism Ultimate tier trial becomes available after an upgrade, follow these steps to disable the trial license. Enable and then disable the Ultimate trial to remove the trial from the cluster.
A message box appears stating that the operation is reversible. Click the Disable Ultimate Trial button. This immediately logs you out of PC and returns you to the logon page. When you log back on, the features are disabled.
After you log on to My Nutanix and depending on your role, the API Key Management tile enables you to create and manage API keys. Use these keys to establish secure communications with Nutanix software and services. Typical user roles able to access this tile include Account Administrators for Cloud Services and existing Support Portal users.
An API key is a unique token that you can use to authenticate API requests associated with your Nutanix software or service. You can create multiple API keys for a single product or service. However, you can use an API key only once to register with that software or service.
It is a randomly generated unique UUID4 hash and can be 36–50 characters long. When you create the key, you choose a service (such as Licensing) and the key is mapped directly to that service. You can use it for the chosen service only. For example, you cannot use a Support Case key with Prism Central (PC).
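The length constraint can be checked mechanically. In the sketch below, the key is a placeholder string in UUID4 format, not a real API key; real keys are 36–50 characters long.

```shell
# Placeholder token in UUID4 format (36 characters); not a real key.
key="123e4567-e89b-42d3-a456-426614174000"

# A valid API key length falls in the documented 36-50 character range.
len=${#key}
if [ "$len" -ge 36 ] && [ "$len" -le 50 ]; then
  echo "plausible API key length: $len"
else
  echo "unexpected length: $len"
fi
```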
You can use the API key for secure communication in many scenarios, including but not limited to the following.
Scope is the service group, feature, or specific function of a service group and is defined as part of a unique key value pair (scope name paired with the unique scope category). For example, with Prism Ops as the scope, the generated key enables you to authenticate when using the PC or Prism Element (PE) APIs.
The API Key is restricted for use depending on the scope you choose. For example, a key created with a scope of Licensing allows you to enable 1-click licensing through the PE or PC web console.
The API Key Management tile is available depending on your role. Typical user roles able to access the tile include Account Administrators for Cloud Services and existing Support Portal users.
Last updated: 2022-09-11
Note the following items, which are new in release 2.0. The release 1.0 notes also apply to 2.0 (except for the required AOS version).
deb [arch=amd64] https://repository.veeam.com/mine/1/public/updater stable main
deb [arch=amd64] https://repository.veeam.com/mine/1/public/mine stable main
veeam@minevm$ sudo su
root@minevm$ curl http://repository.veeam.com/keys/veeam.gpg | apt-key add -
root@minevm$ exit
// Automatically upgrade packages from these (origin:archive) pairs
Unattended-Upgrade::Allowed-Origins {
	"${distro_id}:${distro_codename}";
	"${distro_id}:${distro_codename}-security";
	// Extended Security Maintenance; doesn't necessarily exist for
	// every release and this system may not have it installed, but if
	// available, the policy for updates is such that unattended-upgrades
	// should also install from here by default.
	"${distro_id}ESM:${distro_codename}";
	// "${distro_id}:${distro_codename}-updates";
	// "${distro_id}:${distro_codename}-proposed";
	// "${distro_id}:${distro_codename}-backports";
	"Foundation for Mine With Veeam updater:stable";
};
veeam@minevm$ sudo apt-get update
veeam@minevm$ sudo apt-get install nirvana-mine nirvana-appliancemanager
Mine Version | Veeam Backup & Replication Versions |
---|---|
V1 1.0.406 | 9.5.4.2866 and later updates of 9.5 |
V2 1.0.715 | 10.0.0.4461 with KB3161 and later updates of Veeam Backup & Replication 10 |
V2 patch1 1.0.762 | 10.0.0.4461 with KB3161 and later updates of Veeam Backup & Replication 10 |
V2 patch2 1.0.1014 | 10.0.0.4461 with KB3161 and later updates of Veeam Backup & Replication 10 |
V3 3.0.1238 | 11.0.0.837 with Cumulative Patch20210525 and later updates of Veeam Backup & Replication 11 |
Note that "later updates of Veeam Backup & Replication 10" does not include version 11. For Mine V2, you can install a cumulative patch or a KB for version 10, but not Veeam Backup & Replication 11.
Note the following:
Nutanix Mine™ is the product name for joint solutions between Nutanix and select data protection software vendors. Nutanix Mine™ is a dedicated backup solution, where only backup component VMs run on the Mine™ cluster and the cluster storage is used to store backup workloads.
This version of Mine™ is a fully integrated data protection appliance that combines the Nutanix AOS software with the Veeam Backup & Replication solution. Mine™ can provide data protection for any applications running in a Nutanix cluster or for any virtualized workload running in your data center. Mine™ includes the following features:
The Mine™ appliance comes in three initial sizes: "extra-small" and "small" versions that include one preconfigured NX-1465 block, and a "medium" version that includes two preconfigured NX-8235 blocks. You can add NX-8235 blocks (but not NX-1465 blocks) to scale out the cluster for more capacity.
Specification | X-Small | Small | Medium | Scale Out |
---|---|---|---|---|
Model | NX-1465-G7 | NX-1465-G7 | NX-8235-G7 (x 2) | NX-8235-G7 |
Rack Size | 2U | 2U | 4U | 2U |
Number of Nodes | 4 | 4 | 4 | 2 |
Processor (per node) | 2x Intel Xeon Silver 4210 (10-core 2.2 GHz) | 2x Intel Xeon Silver 4210 (10-core 2.2 GHz) | 2x Intel Xeon Silver 4214 (12-core 2.2 GHz) | 2x Intel Xeon Silver 4214 (12-core 2.2 GHz) |
RAM (per node) | 192 GB | 192 GB | 192 GB | 192 GB |
SSD (per node) | 1x 1.92 TB | 1x 1.92 TB | 2x 1.92 TB | 2x 1.92 TB |
HDD (per node) | 2x 6 TB | 2x 12 TB | 4x 12 TB | 4x 12 TB |
Networking (per node) | 2 or 4x 10GbE | 2 or 4x 10GbE | 2 or 4x 10GbE, or 2x 25/40 GbE | 2 or 4x 10GbE, or 2x 25/40 GbE |
Raw Capacity | 48 TB | 96 TB | 192 TB | 96 TB |
Effective Capacity | 30-50 TB | 60-100 TB | 120-200 TB | 60-100 TB |
Veeam Universal Licenses (VUL) | 0 (naked Mine) | 250 | 500 | 250 (additional) |
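The Raw Capacity row follows directly from the per-node drive counts: nodes × HDDs per node × HDD size. A quick arithmetic check against the table:

```shell
# Raw capacity = nodes x HDDs per node x HDD size (TB), per the spec table.
echo "X-Small:   $((4 * 2 * 6)) TB"    # 4 nodes, 2x 6 TB HDDs each
echo "Small:     $((4 * 2 * 12)) TB"   # 4 nodes, 2x 12 TB HDDs each
echo "Medium:    $((4 * 4 * 12)) TB"   # 4 nodes, 4x 12 TB HDDs each
echo "Scale Out: $((2 * 4 * 12)) TB"   # 2 nodes, 4x 12 TB HDDs each
```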
Installing and configuring your Mine™ with Veeam solution requires the following steps:
To install a Mine appliance at your site, do the following:
Nutanix recommends using the AHV version bundled with the supported AOS (LTS) package. The recommended memory size for the Controller VMs is 32 GB.
Allocate 4 vCPUs and 4 GB of memory to this VM. Select Clone from Image Service as the operation and FoundationMine as the image when adding the disk.
To deploy a new Mine™ cluster, do the following:
sudo nano /etc/network/interfaces
auto eth0
iface eth0 inet dhcp
auto eth0
iface eth0 inet static
address yourIpAddress
netmask yourSubnetMask
gateway yourGatewayAddress
sudo service networking restart
If you require an Active Directory join, you also need to specify a DNS server with a proper Active Directory record.
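Assembled, the static stanza from the steps above might look like the following in /etc/network/interfaces. All addresses are placeholders; the dns-nameservers line is only needed for the Active Directory case and assumes the standard Ubuntu resolvconf integration.

```text
auto eth0
iface eth0 inet static
    address 192.168.1.50
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 192.168.1.10
```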
The default user name and password are both "veeam". On first login, you are prompted to change the password. Record the new password in a safe place.
The Nutanix Mine with Veeam cluster setup screen appears. The setup workflow appears on the left, with the step details on the right.
This page displays information about the nodes in the cluster. Verify the information is correct before proceeding.
This is a read-only field for the primary network configuration. However, if you create a guest network (later in this procedure), select the name for the guest network from the pull-down list for an existing network or enter the name for a new network.
This is a read-only field for the primary network configuration, but you specify the VLAN ID for a guest network. If you specify a VLAN other than 0, make sure that network switches are configured accordingly.
Mine™ requires eight available IP addresses.
If you require an Active Directory join, you need to specify a DNS server with a proper Active Directory record.
The VM names and IP addresses are populated automatically. There are three Windows VMs named Veeam-Win-Node x and three Linux VMs named Veeam-Lin-Node x with x being a sequential number from 1 to 3. You can change the name of a VM by entering a different name in the VM Hostname field. The IP addresses are assigned sequentially to the VMs starting after the Starting IP address, but you can change that address in the IP Address field for a VM.
The Windows VMs are configured with 8 vCPUs and 16 GB memory, and they are used to manage the Veeam Backup & Replication application. The Linux VMs are configured with 8 vCPUs and 128 GB memory, and they are used to manage the Veeam scale out backup repository.
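The sequential assignment can be illustrated with a short sketch. The starting address here is a placeholder (the real value comes from your Starting IP field), and the arithmetic assumes all hosts sit in the same /24 network.

```shell
# Illustration only: derive the six VM addresses that follow a starting IP.
start="10.0.0.10"
base="${start%.*}"     # network portion, e.g. 10.0.0
last="${start##*.}"    # final octet of the starting IP

# Windows VMs get the first three addresses after the starting IP.
for i in 1 2 3; do
  echo "Veeam-Win-Node$i -> ${base}.$((last + i))"
done

# Linux VMs get the next three addresses.
for i in 1 2 3; do
  echo "Veeam-Lin-Node$i -> ${base}.$((last + 3 + i))"
done
```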
If you want the VMs to be backed up on a different network than the Veeam infrastructure, you can create this additional network for that purpose. (Creating an additional network is optional.)
The installer verifies the network configuration before proceeding.
In the Network field, select either New (for a new network) or Existing (for an existing network) from the pull-down list.
The top part of the page displays information about the VMs Mine™ will create, and the bottom part displays cluster information including virtual IP address, storage capacity, and node count.
The installation begins. It typically takes over an hour (sometimes over two hours) for the installation to complete. A progress bar appears with status messages as the installation progresses. You can monitor the progress in more detail by logging on to Prism and checking the Task and VM dashboards (see Monitoring the Cluster).
When the installation completes, a success (or error) message appears. The success message includes a link to Prism; click the link to log on to Prism so you can monitor and manage the Mine™ cluster.
Deploying a Mine™ cluster creates a volume group, and both the volume group and Foundation for Mine™ with Veeam VM are added automatically to a protection domain. A schedule is set up to take daily snapshots of the protection domain. Tune the schedule (if needed) per your company security policy.
It is recommended that you enable erasure coding (disabled by default). Erasure coding can provide significant storage savings for a Mine™ cluster. See the "Erasure Coding" and "Modifying a Storage Container" sections in the Prism Web Console Guide for more information.
To configure the Veeam backup and replication solution for a Mine™ cluster, do the following:
This opens a Veeam console window at the login screen. Enter the Windows administrator credentials you supplied when deploying the cluster (see Deploying a Mine™ Cluster).
You can also launch the Veeam console by going to the VMs dashboard, selecting the Veeam-Win-Node1 VM, and then clicking the Launch Console button below the table.
Depending on which systems you want to back up with Nutanix Mine™, also check the relevant guides from the following list to configure Veeam Backup & Replication for your environment:
After deploying a Mine™ cluster, you can monitor activity and perform administrative tasks as needed.
You can monitor the Mine™ cluster health and activity through Prism. The Prism Web Console Guide describes how to use Prism. (To determine the AOS version your Mine™ cluster is running, go to the About Nutanix option under the user_name drop-down list in the Prism main menu.)
The Prism web console includes a custom Mine with Veeam dashboard specific to a Mine™ cluster, which appears by default when you first log on to Prism. To view this custom dashboard at any time, select Mine with Veeam from the pull-down list on the far left of the main menu.
The custom dashboard displays the following eight information tiles (widgets):
The Prism web console includes other dashboards to monitor specific elements of your cluster.
When a Mine™ cluster is full, the cluster may become unavailable, and you will not be able to continue backup operations. To prevent such a situation, Mine™ includes a special monitoring feature (sometimes referred to as a "watchdog") that dynamically monitors storage usage and takes action as necessary to avoid reaching a storage full condition. If available storage space in the cluster falls below the minimum amount, the monitor automatically stops and disables Veeam Backup & Replication jobs. The monitor is regulated by three thresholds:
The monitor automatically calculates and defines the threshold values according to your environment resources. (If you want to change the default threshold values, contact technical support.) In addition, the monitor regulates the location of VMs: if Veeam repository extents and backup proxies are deployed on different AHV nodes, the monitor transfers them to one node.
By default, Mine™ reserves enough storage space to rebuild a node should one of the nodes fail. If you want to add new VMs to the cluster or change the default monitor threshold values, make sure you leave enough space for a node rebuild.
To increase the storage capacity of your Mine™ cluster, you can add an expansion block (see Overview). To add the nodes in an expansion block to the cluster, do the following:
You can upgrade the Mine software using the Mine console whenever a new version is available.
To check for and install an update, do the following:
Check the appropriate box to automatically check for available updates.
If a time is not specified, the reboot (if required) happens immediately when triggered as part of the upgrade process.
You can upgrade the Mine software at a dark site (a location without Internet access) using a Veeam VM console.
Package | Download Location |
---|---|
libonig4_6.7.0-1_amd64.deb | http://archive.ubuntu.com/ubuntu/pool/universe/libo/libonig/ |
libjq1_1.5+dfsg-2_amd64.deb | http://archive.ubuntu.com/ubuntu/pool/universe/j/jq/ |
jq_1.5+dfsg-2_amd64.deb | http://archive.ubuntu.com/ubuntu/pool/universe/j/jq/ |
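A convenience sketch for fetching the three dependency packages listed above from a machine with Internet access, before carrying them to the dark site. The wget call is commented out so the list can be reviewed first; uncomment it on a connected machine.

```shell
# Fetch the dark-site dependency packages (URLs from the table above).
for url in \
  "http://archive.ubuntu.com/ubuntu/pool/universe/libo/libonig/libonig4_6.7.0-1_amd64.deb" \
  "http://archive.ubuntu.com/ubuntu/pool/universe/j/jq/libjq1_1.5+dfsg-2_amd64.deb" \
  "http://archive.ubuntu.com/ubuntu/pool/universe/j/jq/jq_1.5+dfsg-2_amd64.deb"; do
  echo "would fetch: ${url##*/}"
  # wget "$url"   # uncomment on a machine with Internet access
done
```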
To install an update for the Mine software at a dark site (offline upgrade), do the following:
$ sudo systemctl start ssh
$ scp NutanixMineWithVeeamUpdate_v3.0.0.deb veeam@<ip_address>:~/
where <ip_address> is the IP address of the Foundation for Mine with Veeam VM.
$ sudo dpkg -i libonig4_6.7.0-1_amd64.deb
$ sudo dpkg -i libjq1_1.5+dfsg-2_amd64.deb
$ sudo dpkg -i jq_1.5+dfsg-2_amd64.deb
$ sudo dpkg -i NutanixMineWithVeeamUpdate_v3.0.0.deb
$ sudo systemctl restart NirvanaManagement
Upgrading AOS requires a few additional steps for a Mine™ cluster.
To upgrade the AOS version in a Mine cluster, do the following:
To verify the Mine™ dashboard is redeployed, log on to Prism, refresh the screen, and check that the dashboard appears again (see Monitoring the Cluster).
You can update your Veeam, Mine™, or Prism user credentials at any time.
To update user account credentials, do the following:
If you encounter a problem, you can download a support bundle to troubleshoot the problem.
The support bundle contains service logs and other related data that can help locate and diagnose system issues. To download a support bundle, do the following:
This step downloads a compressed (ZIP) file named logs_veeam_<date&time>.zip to your workstation; the file contains the support bundle.
If the installation is not successful or you want to start over for any reason, you first need to clean up and reset the environment. To reset a Mine™ cluster, do the following:
Click the plus sign for an entity tab (virtual machines, networks, and so on) to see the list of those entities. All virtual machines, volume groups, and storage containers are checked by default; the networks and images are not. Review the list for each entity and adjust (add or remove check marks) as desired.
The reset process begins. Time estimates and a progress bar appear. When the process completes, the message "Reset process has been completed successfully" appears. Click the Close button. This redisplays the Nutanix Mine™ with Veeam Configuration screen.
Mine™ provides a maintenance mode that stops and disables all running backup jobs which are targeted at the scale-out backup repository. Maintenance mode allows you to reconfigure cluster settings, expand the cluster, and perform additional tasks that might otherwise disrupt backup operations. When cluster maintenance is complete, you can disable maintenance mode, which resumes the scheduling of backup jobs. (However, backup jobs that were stopped during the maintenance window are not restarted.)
To enable maintenance mode, do the following:
To verify maintenance mode is enabled, check the cluster widget on the Mine™ dashboard (see Monitoring the Cluster). The text "maintenance mode on" appears when maintenance mode is enabled.
To disable maintenance mode, do the following:
To verify maintenance mode is disabled, check the cluster widget on the Mine™ dashboard (see Monitoring the Cluster). The text "maintenance mode on" no longer appears when maintenance mode is disabled.