Flow Networking Guide
Product Release Date: 2022-02-24
Last updated: 2022-12-09
This Flow Networking Guide describes how to enable and deploy Nutanix Flow Networking on Prism Central.
If you have enabled the early access (EA) version of Flow Networking, disable it before upgrading the Prism Central and enabling the general availability (GA) version of Flow Networking.
Links to Nutanix Support Portal software and documentation.
The Nutanix Support Portal provides software download pages, documentation, compatibility information, and other resources.
Documentation | Description |
---|---|
Release Notes | Flow Networking | Flow Networking Release Notes |
Port Reference | Details of the ports that must be open in the firewalls to enable Flow Networking to function. |
Nutanix Security Guide | Prism Element and Prism Central security, cluster hardening, and authentication. |
AOS guides and release notes | Covers AOS Administration, Hyper-V Administration for Acropolis, Command Reference, PowerShell Cmdlets Reference, AOS Family Release Notes, and AOS release-specific Release Notes. |
Acropolis Upgrade Guide | How to upgrade core and other Nutanix software. |
AHV guides and release notes | Administration and release information about AHV. |
Prism Central and Web Console guides and release notes | Administration and release information about Prism Central and Prism Element. |
Enabled and administered from Prism Central, Flow Networking powers network virtualization to offer a seamless network experience with enhanced security. It is disabled by default.
To enable and use Flow Networking, ensure that you log on to Prism Central as a local account user with Prism Admin role. If you log on to Prism Central as a non-local account (IDP-based) user or without Prism Admin role privileges, then Prism Central does not allow you to enable or use Flow Networking. The task is reported as Failed with a User Denied Access message.
Nutanix deploys a number of ports and protocols in its software. Certain ports must be open in the firewalls for Flow Networking to function. To see the ports and protocols used by Flow Networking, see Port Reference.
Flow Networking is a software-defined network virtualization solution that provides overlay capabilities for on-premises AHV clusters. It integrates tools to deploy networking features such as Virtual Private Clouds (VPCs) and Virtual Private Networks (VPNs) to support flexible, app-driven networking that focuses on VMs and applications instead of virtual LANs and network addresses.
After you enable it on Prism Central, Flow Networking delivers the following.
You can enable Flow Networking using a simple Prism Central driven workflow, which installs the network controller. The network controller is a collection of containerized services that run directly on the Prism Central VM(s). The network controller orchestrates all the virtual networking operations.
Enable Flow Networking in Prism Central Settings > Advanced Networking . It is disabled by default. See Enabling Flow Networking.
You can opt out of Flow Networking by disabling the Advanced Networking option, subject to the prerequisites for disabling advanced networking. See Disabling Flow Networking.
You can deploy Flow Networking in a dark site (a site that does not have Internet access) environment. See the Deploying Flow Networking at a Dark Site topic for more information.
You can upgrade the Flow networking controller. Nutanix releases an upgrade for the Flow networking controller with AOS and Prism Central releases. See Upgrading Flow Networking.
See the AOS Family Release Notes and the Prism Central Release Notes.
Flow networking allows you to create and manage virtual private clouds (VPCs) and overlay subnets to leverage the underlying physical networks that connect clusters and datacenters. See Virtual Private Cloud.
You can upgrade the network gateway version. A network gateway is used to create VPN or VTEP gateways that connect subnets using VPN connections, or Layer 2 subnet extensions over VPN or VTEP.
The Flow Networking architecture uses a three-plane approach to simplify network virtualization.
Prism Central provides the management plane, the network controller itself acts as the control plane while the AHV nodes provide the data plane. This architecture provides a strong foundation for Flow Networking. This architecture is depicted in the following chart.
Flow Networking supports the following scale:
Entities | Scale |
---|---|
Virtual Private Clouds | 500 |
Subnets | 5,000 |
Ports | 50,000 |
Floating IPs | 2,000 per networking controller-enabled Prism Central |
Routing Policies | 1,000 per Virtual Private Cloud; 10,000 per networking controller-enabled Prism Central |
A Virtual Private Cloud (VPC) is an independent and isolated IP address space that functions as a logically isolated virtual network. A VPC can comprise one or more subnets that are connected through a logical or virtual router. The IP addresses within a VPC must be unique; however, IP addresses may overlap across VPCs. Because VPCs are provisioned on top of another IP-based infrastructure (connecting the AHV nodes), they are often referred to as overlay networks. Tenants can spin up VMs and connect them to one or more subnets within a VPC. A VPC isolates its resources from the rest of the resource pool and lets you manage a secure virtual network with enhanced automation and scaling. The isolation is done using network namespace techniques such as IP-based subnets or VLAN-based networking.
You can use IP address-based subnets to network virtual machines within a VPC. A VPC may use multiple subnets. VPC subnets use private IP address ranges. IP addresses within a single VPC must be unique. However, IP addresses can overlap across multiple VPCs. The following figure shows two VPCs named Blue and Green. Each VPC has two subnets, 192.168.1.0/24 and 192.168.2.0/24, that are connected by a logical router. Each subnet has a VM with an IP address assigned. The subnets and VM IP addresses overlap between the two VPCs.
The communication between VMs in the same subnet or different subnets within the same VPC (also called East-West communication) is enabled using Generic Network Virtualization Encapsulation (Geneve). If a Prism Central manages multiple clusters, the VMs that belong to the same VPC can be deployed across different clusters. The virtual switch on the AHV nodes provides distributed virtual switching and distributed virtual routing for all VPCs.
Subnets outside a VPC are external subnets. External subnets may be subnets within the deployment but not included in a specific VPC. External subnets may also be subnets that connect to the endpoints outside the deployment such as another deployment or site.
External subnets can be deployed with NAT or without NAT. You can add a maximum of two external subnets to a VPC: one with NAT and one without NAT. The two external subnets cannot be of the same type; for example, you cannot add two external subnets that both use NAT. The same restriction applies when you update an existing VPC.
SNAT and Floating IP addresses are used only when you use NAT for an external subnet.
In Source Network Address Translation (SNAT), the NAT router modifies the IP address of the sender in IP packets. SNAT is commonly used to enable hosts with private addresses to communicate with servers on the public Internet.
For VMs within the VPC to communicate with the rest of the deployment, the VPC must be associated with an external network. In such a case, the VPC is assigned a unique IP address, called the SNAT IP, from the subnet prefix of the external network. When the traffic from a VM needs to be transmitted outside the VPC, the source IP address of the VM, which is a private IP address, is translated to the SNAT IP address. The reverse translation from SNAT IP to private IP address occurs for the return traffic. Since the SNAT IP is shared by multiple VMs within a VPC, only the VMs within the VPC can initiate connections to endpoints outside the VPC. The NAT gateway allows the return traffic for these connections only. Endpoints outside the VPC cannot initiate connections to VMs within a VPC.
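As a concept illustration only (not the Nutanix implementation, which performs the translation in the AHV virtual switch), the following generic Linux router rule performs the same many-to-one SNAT; the interface name and addresses are assumptions:
# Rewrite traffic from the private 192.168.1.0/24 range leaving eth0
# to the single shared address 203.0.113.10. Connection tracking
# reverses the translation for return traffic automatically.
sudo iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o eth0 -j SNAT --to-source 203.0.113.10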
In addition to the SNAT IP address, you can also request a Floating IP address — an IP from the external subnet prefix that is assigned to a VM via the VPC that manages the network of the VM. Unless the floating IP address is assigned to the private IP address (primary or secondary IP address) of the VM, the floating IP address is not reachable. When the VM transmits packets outside the VPC, the private IP of the VM is modified to the Floating IP. The reverse translation occurs on the return traffic. As the VM uses the Floating IP address, an endpoint outside the VPC can also initiate a connection to the VM with the floating IP address.
The translation of the private IP addresses to Floating IP or SNAT IP address, and vice versa, is performed in the hypervisor virtual switch. Therefore, the VM is not aware of this translation. Floating IP translation may be performed on the hypervisor that hosts the VM to which the floating IP is assigned to. However, SNAT translation is typically performed in a centralized manner on a specific host.
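For contrast with SNAT, a floating IP behaves like one-to-one NAT. As a hedged, generic Linux sketch only (again, not the Nutanix implementation; the addresses are assumptions), the pair of rules below forwards inbound connections for a floating IP to a single VM and rewrites that VM's outbound traffic to the floating IP:
# Inbound: connections to the floating IP 203.0.113.25 reach the VM's
# private address 192.168.1.10.
sudo iptables -t nat -A PREROUTING -d 203.0.113.25 -j DNAT --to-destination 192.168.1.10
# Outbound: traffic from that VM leaves with the floating IP as source,
# so external endpoints can initiate connections to it.
sudo iptables -t nat -A POSTROUTING -s 192.168.1.10 -j SNAT --to-source 203.0.113.25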
NAT Gateways are used only when you use NAT for an external subnet.
Network Address Translation (NAT) is a process for modifying the source or destination addresses in the headers of an IP packet while the packet is in transit. In general, the sender and receiver applications are not aware that the IP packets are being manipulated.
A NAT Gateway provides the entities inside an internal network with connectivity to the Internet without exposing the internal network and its entities.
A NAT Gateway is:
The externally routable IP address may be an address from a private (RFC 1918) address space that is used for the NAT gateway. The NAT gateway IP address can be a static IP address or a DHCP-assigned IP address.
Event | Failover Time |
---|---|
Network controller stops on AHV | Up to 45 seconds. |
Node reboot | Up to 45 seconds. |
Node power off (NAT Gateway and network controller MSP worker VMs are not on the same node) | Up to 45 seconds. |
Node power off (NAT Gateway and network controller MSP worker VMs are on the same node) | Up to 300 seconds (5 minutes). |
A static IP address is a fixed IP address that is manually assigned to an interface in a network. Static IP addresses provide stable routes because routing table entries based on them rarely need to be updated.
Usually in a large IP-based network (a network that uses IP addresses), a Dynamic Host Configuration Protocol (DHCP) server assigns IP addresses to the interfaces of an entity (using the DHCP client service on the entity). However, some entities may require a static IP address that can be reached quickly (through manual remote access or via VPN). A static IP address can be reached quickly because the address is fixed, assigned manually, and stored in the routing table for a long duration. For example, a printer in an internal network needs a static IP address so that it can be connected to reliably. Static IP addresses can be used to generate static routes that remain unchanged in routing tables, providing stable long-term connectivity to the entity that has the static IP address assigned.
Static routes are fixed routes that are created manually by the network administrator. Static routes are more suited for small networks or subnets. Irrespective of the size of a network, static routes may be required in a variety of cases. For example, in VPCs where you use virtual private networks (VPNs) or Virtual Tunnel End Point (VTEP) over VxLAN transport connections to manage secure connections, you could use static routes for specific connections such as site-to-site connections for disaster recovery. In such a case it is necessary to have a known reliable route over which the disaster recovery operations can be performed smoothly. Static routes are primarily used for:
In a network that is not constantly changing, static routes can provide faster and more reliable services by avoiding the network overheads like route advertisement and routing table updates for specific routes.
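As a generic sketch of the mechanism on a plain Linux host (the prefix, next hop, and interface name are assumptions, not values from this guide), a static route pins traffic for a remote subnet to a fixed next hop:
# Send traffic for 192.168.50.0/24 via the fixed next hop 10.0.0.1.
sudo ip route add 192.168.50.0/24 via 10.0.0.1 dev eth0
# Verify that the route is installed.
ip route show 192.168.50.0/24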
You can create an IP-based Overlay subnet for a VPC. An Overlay network is a virtualized network that is configured on top of an underlying virtual or physical network. A special purpose multicast network can be created as an Overlay network within an existing network. A peer-to-peer network or a VPN are also examples of Overlay networks. An important assumption for an Overlay network is that the underlying network is fully connected. Nutanix provides the capability to create Overlay network-based VPCs.
See how overlay networks compare with VLAN networks. A virtual local area network (VLAN) is a Layer 2 network that provides a virtualized network segmentation solution. VLANs route and balance traffic in a network based on MAC addresses, protocols such as Ethernet, ports, or specific subnets. A VLAN separates broadcast domains virtually or logically using Layer 2 addressing. A VLAN-configured network behaves as if the network were segmented using a physical Layer 2 switch, without implementing a Layer 3 IP-based subnet for the segmentation. VLAN traffic usually cannot traverse outside the VLAN.
The main advantage of VLAN networks is that they require only Layer 2 (L2) connectivity. VLANs do not require any of the Layer 3 (L3) Flow Networking features.
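To make the L2-only nature of VLANs concrete, here is a generic Linux sketch (the interface name and VLAN ID are assumptions; this is not a Nutanix workflow) that tags traffic for VLAN 100 on top of a physical interface:
# Create a VLAN subinterface carrying VLAN ID 100 on top of eth0.
sudo ip link add link eth0 name eth0.100 type vlan id 100
sudo ip link set eth0.100 up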
Overlay networks can be laid on underlying physical network connections including VLAN networks. Overlay networks provide the following advantages and constraints:
When all the guest VMs belonging to a subnet are on the same AHV host, Flow Networking broadcasts the traffic to all guest VMs in that subnet.
When some VMs belonging to the subnet are on other AHV hosts, Flow Networking tunnels the traffic only to those AHV hosts that have endpoints in the same subnet.
In either case, broadcast traffic reaches all the guest VMs in the same subnet.
Unicast traffic is traffic transmitted on a one-to-one basis between IP addresses and ports. There is only one sender and one receiver for the traffic. Unicast traffic is usually the most used form of traffic in any LAN network using Ethernet or IP networking. Flow Networking transmits unicast traffic based on the networking policies set.
Flow Networking always drops unknown unicast traffic. It is not transmitted to any guest VM within or outside the source AHV.
Flow Networking transmits the traffic to the VMs in the multicast group within the same subnet. If the VM is on another AHV, the destination AHV must have an endpoint in the subnet.
A multicast group is defined by an IP address (called a multicast IP address, usually a Class D IP address) and a port number. Once a host has group membership, the host will receive any data packets that are sent to that group defined by an IP address/port number.
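As a hedged illustration of group membership (assuming socat is installed, eth0 is the receiving interface, and the group address and port are examples), a host can join a multicast group and receive datagrams sent to it:
# Join multicast group 239.1.1.1 on UDP port 5000 and print received data.
socat UDP4-RECVFROM:5000,ip-add-membership=239.1.1.1:eth0,fork -
# From another shell, send a test datagram to the group.
echo "hello" | socat - UDP4-DATAGRAM:239.1.1.1:5000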
Make sure you meet these prerequisites before you enable Flow networking on Prism Central.
To enable Flow networking, fulfill the following:
Ensure that you log on to Prism Central as a local account user with Prism Admin role. If you log on to Prism Central as a non-local account (IDP-based) user or without Prism Admin role privileges, then Prism Central does not allow you to enable or use Flow Networking. The task is reported as Failed with a User Denied Access message.
Ensure that the Prism Central running Flow networking is hosted on an AOS cluster running AHV.
The network controller has a dependency only on the AHV version.
Choose the x-large PC VM size for Flow networking deployments. Small and large PC VMs are not supported for Flow Networking.
If you are running a small or large Prism Central VM, upgrade the Prism Central VM resources to an x-large PC VM. See the Acropolis Upgrade Guide for the procedure to install an x-large Prism Central deployment.
Although Flow networking may be enabled on a single-node PC, Nutanix strongly recommends that you deploy a three-node scale-out Prism Central for production deployments. The availability of Flow networking service in Prism Central is critical for performing operations on VMs that are connected to overlay networks. A three-node scale-out Prism Central ensures that Flow networking continues to run even if one of the nodes with a PCVM fails.
Prism Central VM registration. You cannot unregister the Prism Element cluster that is hosting the Prism Central deployment where you have enabled Flow Networking. You can unregister other clusters being managed by this Prism Central deployment.
Ensure that Microservices Infrastructure (CMSP) is enabled on Prism Central before you enable Flow Networking. See the Prism Central Guide for more information.
For the procedure to enable Microservices Infrastructure (including enable in dark site), see Enabling Micro Services Infrastructure section in the Prism Central Guide .
Ensure that you have created a virtual IP address (VIP) for Prism Central. The Acropolis Upgrade Guide describes how to set the VIP for the Prism Central VM. Once set, do not change this address.
Ensure connectivity:
Between Prism Central and its managed Prism Element clusters.
To the Internet for connectivity (not required for dark site) to:
Nutanix recommends increasing the MTU to 9000 bytes on the virtual switch vs0 and ensuring that the physical networking infrastructure supports higher MTU values (jumbo frame support). The recommended MTU range is 1600-9000 bytes.
Nutanix CVMs use the standard Ethernet MTU (maximum transmission unit) of 1,500 bytes for all the network interfaces by default. The system advertises the MTU of 1442 bytes to guest VMs using DHCP to account for the extra 58 bytes used by Generic Network Virtualization Encapsulation (Geneve). However, some VMs ignore the MTU advertisements in the DHCP response. Therefore, to ensure that Flow networking functions properly with such VMs, enable jumbo frame support on the physical network and the default virtual switch vs0.
If you cannot increase the MTU of the physical network, decrease the MTU of every VM in a VPC to 1442 bytes in the guest VM console.
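For example, inside a Linux guest VM (the interface name eth0 is an assumption; check with ip link), you can lower the MTU as follows, and verify jumbo-frame support on the physical path with a do-not-fragment ping:
# Set the guest interface MTU to 1442 bytes (non-persistent; also set it
# in the distribution's network configuration to survive reboots).
sudo ip link set dev eth0 mtu 1442
# Verify a 9000-byte physical MTU end to end: 8972 bytes of ICMP payload
# + 28 bytes of headers = 9000, with fragmentation disallowed.
ping -M do -s 8972 <destination-host>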
The following applies to upgrades of the Flow networking network controller ( Advanced Networking in Prism Central Settings ):
See Compatibility and Interoperability Matrix on the Nutanix Support portal for AOS and Prism Central compatibility.
The network controller upgrade fails if any of the AHV hosts is running an incompatible version.
Flow networking does not support Flow security for guest VMs.
You cannot configure rules for Flow security if a guest VM has any NICs connected to VPCs.
Flow networking is supported only on AHV clusters. It is not supported on ESXi or Hyper-V clusters.
Flow networking is not enabled on a new PE cluster registering with the Flow networking-enabled Prism Central if the Prism Element cluster has an incompatible AHV version.
Flow networking does not support updating a VLAN-backed subnet as an external subnet.
You cannot enable the external connectivity option in the Update Subnet dialog box. Therefore, you cannot modify an existing VLAN-backed subnet to add external connectivity.
VLAN-backed subnets for external connectivity are managed by the Flow networking control plane. Traditional AHV VLAN IPAM networks are managed by Acropolis.
Flow networking cannot be disabled if any external subnets and VPCs are in use. Delete the external subnets and VPCs and then disable Flow Networking.
Disaster Recovery backup and migration: CMSP-enabled Prism Central does not support disaster recovery backup and migration operations both as a source and target host.
Ensure that the microservices infrastructure is enabled on Prism Central. See the Enabling Micro Services Infrastructure section in the Prism Central Guide .
Before you proceed to enable Flow Networking by enabling the Advanced Networking option, see Prerequisites for Enabling Flow Networking.
To enable Advanced Networking, go to Prism Central Settings > Advanced Networking and do the following.
Ensure that the prerequisites specified on the pane are fulfilled.
You can disable Flow Networking. However, the network controller cannot be disabled if any external subnets or VPCs are in use. Delete the external subnets and VPCs, and then disable Flow Networking.
To disable Flow Networking, do the following.
To exit without disabling the Advanced Networking controller, click Cancel .
Before unregistering a Prism Element from PC, disable Flow Networking on that Prism Element using the network controller CLI (atlas_cli).
When Flow Networking is enabled on a Prism Central, it propagates the capability to participate in VPC networking to all the registered Prism Elements that are running the required AHV version.
If VMs on the Prism Element are attached to a VPC network, or if the Prism Element hosts one or more of the external VLAN networks attached to a VPC, Prism Central alerts you with a prompt. In that case, close the CLI and resolve the condition (for example, select a different cluster for the external VLAN network and delete the VMs attached to the VPC network running on the Prism Element). Then run the network controller CLI again to disable Flow Networking. If the command succeeds, it is safe to unregister the Prism Element.
For example, in a deployment of three Prism Elements - PE1, PE2 and PE3 - registered to the Flow Networking-enabled PC, you want to unregister PE3 from the PC. You must first disable Flow Networking using the following steps:
nutanix@cvm$ atlas_cli
<atlas>
An example of the PC alert, for the condition that PE3 VM is attached to an external network, is as follows:
<atlas> config.add_to_excluded_clusters 0005bf8d-2a7f-3b2e-0310-d8e34995511e
Cluster 0005bf8d-2a7f-3b2e-0310-d8e34995511e has 1 external subnet, which will lose connectivity. Are you sure? (yes/no)
The output displays the enable_atlas_networking parameter as False if Flow Networking is disabled and as True if Flow Networking is enabled on the Prism Element.
nutanix@cvm$ acli atlas_config.get
config {
  anc_domain_name_server_list: "10.10.10.10"
  enable_atlas_networking: False
  logical_timestamp: 19
  minimum_ahv_version: "20190916.101588"
  ovn_cacert_path: "/home/certs/OvnController/ca.pem"
  ovn_certificate_path: "/home/certs/OvnController/OvnController.crt"
  ovn_privkey_path: "/home/certs/OvnController/OvnController.key"
  ovn_remote_address: "ssl:anc-ovn-external.default.anc.aj.domain:6652"
}
You can now unregister the PE from the PC.
You can upgrade the Flow networking controller ( Advanced Networking Controller in Prism Central Settings ) using Life Cycle Manager (LCM) on Prism Central.
See Prerequisites for Enabling Flow Networking.
When upgrading the Flow networking controller at a dark site, ensure that LCM is configured to reach the local web server that hosts the dark site upgrade bundles.
The network controller upgrade fails to start after the pre-check if one or more clusters have Flow Networking enabled and are running an AHV version incompatible with the new network controller upgrade version.
To upgrade the network controller using LCM, do the following.
Click Check for Updates on the Advanced Networking page.
When you click Perform Inventory , the system scans the registered Prism Central cluster for software versions that are running currently. Then it checks for any available upgrades and displays the information on the LCM page under Software .
Dark sites are primarily on-premises installations that do not have access to the Internet. Such sites are disconnected from the Internet for a range of reasons, including security. To deploy Flow networking at a dark site, deploy the dark site bundle at the site.
This dark site deployment procedure includes downloading and deploying MSP and the network controller bundles.
See Prerequisites for Enabling Flow Networking.
You need access to the Nutanix Portal from an Internet-connected device to download the following dark site bundles:
To deploy Flow Networking at a dark site, do the following.
The web server can be a virtual machine on a cluster at the dark site. All the Prism Central VMs at the dark site must have access to this web server. This web server is used when you deploy any dark site bundle, including the network controller dark site bundle.
For more information about the server installation, see:
Linux web server
Windows web server
Alternatively, SSH into the Prism Central VM as an admin user and run the following command.
admin@pcvm$ mspctl controller airgap enable --url=http://<LCM-web-server-ip>/release
Where <LCM-web-server-ip> is the IP address of the LCM web server and release is the name of the directory where the packages were extracted.
For example:
admin@pcvm$ mspctl controller airgap enable --url=http://10.48.111.33/release
Here, 10.48.111.33 is the IP address of the LCM web server and release is the name of the directory where the packages were extracted.
nutanix@cvm$ mspctl controller airgap get
After unpacking, check that the system shows a directory path that includes the following, as in this example: http://<LCM-web-server-ip>/release/builds/msp-builds/msp-services/464585393164.dkr.ecr.us-west-2.amazonaws.com/nutanix-msp/atlas-hermes/
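Optionally, assuming curl is available on the Prism Central VM, you can confirm that the web server serves the extracted release directory:
admin@pcvm$ curl -I http://<LCM-web-server-ip>/release/
# A 200-series HTTP response indicates the bundle directory is reachable.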
chmod -R +r builds
$> takeown /R /F *
$> icacls <Build-file-path> /t /grant:F
See the Enabling Microservices Infrastructure section in the Prism Central Guide for details.
This section provides information to assist troubleshooting of Flow Networking deployments. This is in addition to the information that the "Prism Central Guide" provides.
Prism Central generates audit logs for all the Flow Networking activities, as it does for other activities on Prism Central. See Audit Summary View in the Prism Central Guide for more information about audit logs.
To support troubleshooting for Flow Networking, you can collect logs.
To collect the logs, run the following commands on the Prism Central VM console:
nutanix@cvm$ logbay collect -t msp,anc
An example of the command is as follows:
nutanix@cvm$ logbay collect -t msp,anc -O msp_pod=true,msp_systemd=true,kubectl_cmds=true,persistent=true --duration=-48h0m0s
Where:
- -t indicates the tags to collect:
  - msp collects logs from the services running on MSP pods and from persistent log volumes (application-level logs).
  - anc collects the support bundle, which includes database dumps and OVN state.
- -O adds tag-level options:
  - msp_pod=true collects logs from MSP service pods. On the PC, these logs are found under /var/log/containers .
  - persistent=true collects persistent log volumes (application-level logs for ANC). On the PC, these are found under /var/log/ctrlog .
  - kubectl_cmds=true runs kubectl commands to get the Kubernetes resource state.
- --duration sets the duration, counting back from the present, over which to collect logs.
The command run generates a zip file at a location, for example: /home/nutanix/data/logbay/bundles/<filename>.zip
Unzip the bundle to find the anc logs under a directory specific to your MSP cluster, the worker VM where the pod is running, and the logging persistent volume of that pod. For example:
./msp/f9684be8-b4e8-4524-74b4-076ed53ca1fd/10.48.128.185__worker_master_etcd/persistent/default/ovn/anc-ovn_StatefulSet/
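A minimal extraction sketch (the zip file name is whatever logbay generated; the target directory name is an assumption):
nutanix@cvm$ unzip /home/nutanix/data/logbay/bundles/<filename>.zip -d ./logbay_bundle
# The anc logs then appear under the msp/<cluster-uuid>/... path shown above.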
For more information about the task run, see the text file that the command generates at a location, for example: /home/nutanix/data/logbay/taskdata/<taskID>/collection_result.txt
For more information about the logbay collect command, see the Logbay Log Collection (Command Line) topic in the Nutanix Cluster Check Guide (NCC Guide).
The L2StretchLocalIfConflict alert (Alert with Check ID - 801109) may occur while performing Layer 2 virtual subnet extensions. See KB-10395 for more information about its resolution.
A Nutanix deployment can detect and install upgrades for the on-prem Nutanix Gateways.
For information about identifying the current Nutanix Gateway version, see Identifying the Gateway Version.
For on-prem Nutanix Gateways, the upgrades must be detected and installed on the respective PC on which each Nutanix Gateway is installed.
For more information, see Detecting Upgrades for Gateways.
When PC detects the upgrades, it displays a banner on the Gateways tab of the Connectivity page. The banner notifies you that a Gateway upgrade is available after you have run LCM inventory. The table on the Gateways tab also displays an alert (exclamation mark) icon for the network gateways that the upgrade applies to. The hover message for the icon informs you that an upgrade is available for that Gateway.
For more information about the upgrade procedure, see Upgrading the PC-managed Onprem Nutanix VPN Gateways.
To identify the current Nutanix Gateway version, do the following:
In the Gateway table, the VPN Gateway name is a clickable link.
The Gateway Version is listed in the Properties widget.
Using LCM, Prism Central can detect whether new Gateway upgrades are available for Nutanix Gateways. You can then install the upgrade.
Nutanix recommends that you select Enable LCM Auto Inventory in the LCM page in Prism Central to continuously detect new Gateway upgrades as soon as they are available.
The upgrade notification banner is displayed on the Gateways page.
Perform upgrades of PC-managed Nutanix Gateways using the respective PC on which the Gateway is created.
To upgrade the on-prem Nutanix Gateways, do the following:
When you click Perform Inventory , the system scans the registered Prism Central cluster for software versions that are running currently. Then it checks for any available upgrades and displays the information on the LCM page under Software .
Skip this step if you have enabled auto-inventory in the LCM page in Prism Central.
LCM upgrades the Gateway version. This process takes some time.
The Network and Security category in the Entities Menu expands when clicked to display the following networking and security entities that are configured for the registered clusters:
Subnets : This dashboard displays the subnets and the operations that you can perform on subnets.
Virtual Private Clouds : This dashboard displays the VPCs and the operations that you can perform on VPCs.
Floating IPs : This dashboard displays a list of floating IP addresses that you are using in the network. It allows you to request floating IP addresses from the free pool of IP addresses available to the clusters managed by the Prism Central instance.
Connectivity : This dashboard allows you to manage the following networking capabilities:
Gateways : This tab provides a list of network gateways that you have created and configured, and the operations you can perform on the network gateways. You can check and upgrade the Gateway bundle in Administration > LCM > Inventory .
VPN Connections : This tab provides a list of VPN connections that you have created and configured, and the operations you can perform on VPN connections.
Subnet Extensions : This tab provides a list of subnets that you have extended at the Layer 2 level using VPN (point-to-point over Nutanix VPN) or VTEP (point-to-multi-point including third party).
Security Policies : This dashboard provides a list of security policies you configured using Flow Segmentation. For more information about Security Policies, see the Flow Microsegmentation Guide.
See "Network Connections" section for information on how to configure network connections.
Subnets (Overlay IP subnets), Virtual private clouds, floating IPs, and Connectivity are Flow Networking features. These features support flexible app-driven networking that focuses on VMs and applications instead of virtual LANs and network addresses. Flow Networking powers network virtualization to offer a seamless network experience with enhanced security. It is disabled by default. It is a software-defined network virtualization solution providing overlay capabilities for the on-premises AHV clusters.
Security policies drive the Flow Segmentation features for secure communications. See the Flow Microsegmentation Guide.
Manage subnets in the List view of the Subnets dashboard in the Network and Security section.
To access the Subnets dashboard, select Subnets from the entities menu in Prism Central. The Subnets dashboard allows you to view information about the subnets configured for the registered clusters.
The following table describes the fields that appear in the subnets list. A dash (-) is displayed in a field when a value is not available or applicable.
Parameter | Description | Values |
---|---|---|
Name | Displays the subnet name. | (subnet name) |
External Connectivity | Displays whether or not the subnet has external connectivity configured. | (Yes/No) |
Type | Displays the subnet type. | VLAN |
VLAN ID | Displays the VLAN identification number. | (ID number) |
VPC | Displays the name of the VPC that the Subnet is used in. | (Name of VPC) |
Virtual Switch | Displays the virtual switch that is configured for the VLAN you selected. The default value is the default virtual switch vs0. Note: The virtual switch name is displayed only if you add a VLAN ID in the VLAN ID field. | (virtual switch name) |
IP Prefix | Displays the IPv4 Address of the network with the prefix. | (IPv4 Address/Prefix) |
Cluster | Displays the name of the cluster for which this subnet is configured. | (cluster name) |
Hypervisor | Displays the hypervisor that the subnet is hosted on. | (Hypervisor) |
To filter the list by network name, enter a string in the filter field. (Ignore the Filters pane as it is blank.)
To view focused fields in the List, select the focus parameter from the Focus drop-down list. You can create your own customized focus parameters by selecting Add custom from the drop-down list, providing a Name , and selecting the necessary fields in the Subnet Columns .
There is a Network Config action button to configure a new network (see Configuring Network Connections).
The Actions menu appears when one or more networks are selected and includes a Manage Categories option (see Assigning a Category ).
Go to the Subnets list view by clicking Network and Security > Subnets on the left side-bar.
To view or select actions you can perform on a subnet, select the subnet and click the Actions dropdown.
Action | Description |
---|---|
Update | Click this action to update the selected subnet. See Updating a Subnet in the Flow Networking Guide. |
Manage Extension | Click this action to create a subnet extension. A subnet extension allows VMs to communicate over the same broadcast domain to a remote Xi availability zone (in case of Xi-Leap based disaster recovery) via the extension. |
Manage Categories | Click this action to associate the subnet with a category or change the categories that the subnet is associated with. |
Delete | Click this action to delete the selected subnet. See Deleting Subnets, Policies, or Routes in the Flow Networking Guide . |
You can also filter the list of subnets by clicking the Filters option and selecting the filtering parameters.
View the details of a subnet listed on the Subnets page.
To view the details of a subnet, click the name of the subnet on the subnet list view.
The Summary page provides buttons for the actions you can perform on the subnet, at the top of the page. Buttons for the following actions are available: Update , Extend , Manage Categories , and Delete .
The subnet Summary page has the following widgets:
Widget Name | Information provided |
---|---|
Subnet Details |
Provides the following:
|
IP Pool | Provides the IP address Pool Range assigned to the network. |
External Connectivity |
Provides the following:
|
You can manage Virtual Private Clouds (VPCs) on the Virtual Private Clouds dashboard.
Go to the Virtual Private Clouds dashboard by clicking Network and Security > Virtual Private Clouds on the left side-bar.
You can configure the table columns for the VPC list table. The available column list includes Externally Routable IP Addresses , which provides the address space within the VPC that is reachable externally without NAT. For the list of columns that you can add to the list table, see Customizing the VPC List View.
Ensure that the externally routable IP addresses (subnets with external connectivity without NAT) for different VPCs do not overlap.
Configure the routes for the external connectivity subnets with the next hop set to the router or SNAT IP address. Also configure the routes on the router for the return traffic to reach the VPC. See the External Connectivity panel in VPC Details View.
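For instance, on a generic Linux-based upstream router (not a Nutanix component; the prefixes and next hop are assumptions), the return route toward the VPC's externally routable prefix would look like this:
# Route return traffic for the VPC's externally routable prefix
# (assumed 10.200.0.0/16) to the VPC's router/SNAT IP (assumed 10.10.10.5).
sudo ip route add 10.200.0.0/16 via 10.10.10.5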
To view or select actions you can perform on a VPC, select the VPC and click the Actions drop down.
You can also filter the list of VPC by clicking the Filters option and selecting the filtering parameters.
You can customize the columns in the table. Click the View by drop down and select + Add custom .
In the Virtual Network Columns dialog box, do the following.
During the column selection, the columns you select are moved under the Selected Columns list. The Name (of the VPC) column is selected by default. You can add a maximum of 10 columns (including the Name column) to the Selected Columns list.
To arrange the order of the selected columns, hover on the column name and click the up or down arrow button as appropriate.
To view the details of a VPC, click the name of the VPC on the VPC list view.
The VPC details view has the following tabs:
The Summary tab provides the following panes:
The Subnet tab provides the following information for the subnets:
The Policies tab maps the following information about the security-based traffic shaping policies you configure:
The Routes tab provides the following information about the routes:
The VPC details view has the following configuration options for the VPC:
You can access floating IPs on the Floating IPs dashboard or list view in the Network and Security section.
For information about floating IP addresses and their role in Flow Networking, see SNAT and Floating IP Address.
Go to the Floating IPs dashboard by clicking Network and Security > Floating IPs on the left side-bar.
To view or select actions you can perform on a floating IP address assigned, select the floating IP address and click the Actions drop down. The following actions are available for a selected floating IP address:
To filter the list of floating IP address assignments, click the Filters option and select the appropriate filtering parameters.
To request floating IP addresses, see Requesting Floating IPs.
You can access network gateways, VPN connections, and subnet extensions on the Connectivity dashboard.
Click Network & Security > Connectivity to see the Connectivity dashboard.
The Connectivity dashboard opens on the Gateways tab. To see the VPN connections, click the VPN Connections tab. To see the subnets extended across AZs, click the Subnet Extensions tab.
The Connectivity dashboard opens on the Gateways dashboard or summary view.
The Gateway dashboard provides a list of gateways created for the clusters managed by the Prism Central.
The Gateways dashboard provides a Create Gateway dropdown menu that lets you create a Local or a Remote gateway. You can create a local or remote gateway with VPN or VTEP service. For more information, see Creating a Network Gateway.
You can select a gateway from the list (select the checkbox provided for the gateway) and then perform an action provided in the Actions dropdown list. The Actions dropdown list allows you to Update or Delete the selected gateway.
The Gateway summary list view provides the following details about the gateway.
Parameter | Description | Values |
---|---|---|
Name | Displays the name of the gateway. | (Name of gateway) |
Type | Displays the gateway type. | (Local or Remote) |
Service | Displays the service that the gateway uses. | (VPN or VTEP) |
Service IP | Displays the IP address used by the service. | (IP address) |
Status | Displays the operational status of the gateway. | (Up or Down) |
Attachment Type/Vendor | Displays the type of subnet associated with the gateway. | (VLAN or Overlay-VPC name) |
Connections | Displays the number of service connections (such as VPN connections) configured and operational on the gateway. | (number) |
You can click the name of a gateway in the Gateway dashboard list to open the gateway details page that presents the information about the gateway in widgets.
The gateway details page displays the name of the gateway on the top left corner.
On the top right corner, the close button (X) allows you to close the details page.
The Update button opens the Update Gateway page. See Updating Gateways for more information.
The Delete button allows you to delete the gateway. See Deleting Gateways for more information.
The details about the gateway are organized in widgets as follows:
Parameter | Description | Values |
---|---|---|
Properties widget | ||
Type | Displays the gateway type. | (Local or Remote) |
Attachment Type | Displays the network entity like VLAN or VPC that the gateway is attached to. | (VLAN or VPC) |
VPC or Subnet (VLAN) | Displays the name of the attached VPC or VLAN subnet. | (Name of VLAN or VPC) |
Floating or Private IP Address | Displays the Floating (for VPC) or Private (for VLAN) IP address assigned to the gateway. | (IP Address) |
Status | Displays the operational status of the gateway. | (Up or Down) |
Gateway Version | Displays the version of the Nutanix gateway appliance deployed. | (Version) |
Cluster | Displays the name of the cluster on which the gateway is created. | (Cluster name) |
Gateway VM | Displays the name of the VM on which the gateway is created. | (Name of VM - actionable link. Click the name-link to open the VM details page of the gateway VM.) |
Service Configuration | ||
Service | Displays the service used by the gateway. | (VPN or VTEP) |
External Routing | Displays the type of routing associated with the gateway for external traffic routing. | (Static or eBGP with ASN) |
Internal Routing | Displays the type of routing associated with the gateway for internal traffic routing. | (Static or eBGP with ASN) |
VPN Connections | Displays the total number of VPN connections associated with the gateway. | (Number - actionable link. Click the link to open the VPN connection details page for the associated VPN connection.) |
View VPN Connections | Click this link to open the VPN Connections tab. | - |
The Connectivity dashboard allows you to open the VPN Connections dashboard or summary view.
VPN Connection: Represents the VPN IPSec tunnel established between the local gateway and the remote gateway. When you create a VPN connection, you select the two gateways between which the VPN connection is created.
The VPN Connections dashboard provides a list of VPN connections created for the clusters managed by the Prism Central.
The VPN Connections dashboard provides a Create VPN Connection button that opens the Create VPN Connection . For more information, see Creating a VPN Connection.
You can select a VPN connection from the list (select the checkbox provided for the VPN connection) and then perform an action provided in the Actions dropdown list. The Actions dropdown list allows you to Update or Delete the selected VPN connection.
The VPN Connections summary list view provides the following details about the VPN connection.
Parameter | Description | Values |
---|---|---|
Name | Displays the name of the connection. | (connection name) |
IPSec Status | Displays the connection status of IPSec tunnel. | (Connected or Not Connected) |
EBGP Status | Displays the status of the EBGP gateway connection. | (Established or Not Established) |
Local Gateway | Displays the name of the local gateway used for the connection. | (Name of local gateway) |
Remote Gateway | Displays the name of the remote gateway used for the connection. | (Name of remote gateway) |
Dynamic Routing Priority | Displays the dynamic routing priority assigned to the connection for throughput management. You can assign any value in the range of 100-1000. Flow networking assigns the first VPN connection the value 500 by default. Thereafter, subsequent VPN connections are assigned values decremented by 50. For example, the first connection is assigned 500, the second connection 450, the third 400, and so on. | (Number in the range of 100-1000. User assigned.) |
You can click the name of a VPN connection in the VPN Connections dashboard list to open the VPN connection details page that presents the information about the VPN connection in widgets.
The VPN connection details page displays the name of the VPN connection on the top left corner.
On the top right corner, the close button (X) allows you to close the details page.
The Update button opens the Update VPN Connection page. For more information, see Updating a VPN Connection.
The Delete button allows you to delete the VPN connection. For more information, see Deleting a VPN Connection.
The details about the VPN connection are organized in widgets as follows:
Parameter | Description | Values |
---|---|---|
VPN Connection widget | ||
IPSec Status | Displays the connection status of IPSec tunnel. | (Connected or Not Connected) |
EBGP Status | Displays the status of the EBGP gateway connection. | (Established or Not Established) |
Dynamic Routing Priority | Displays the dynamic routing priority assigned to the connection for throughput management. You can assign any value in the range of 100-1000. Flow networking assigns the first VPN connection the value 500 by default. Thereafter, subsequent VPN connections are assigned values decremented by 50. For example, the first connection is assigned 500, the second connection 450, the third 400, and so on. | (Number in the range of 100-1000. User assigned.) |
Local Gateway Properties | ||
Gateway Name | Displays the name of the local gateway used for the connection. | (Name of local gateway) |
Type | Displays the type of gateway. | (Local) |
Attachment Type | Displays the network entity like VLAN or VPC that the gateway is attached to. | (VLAN or VPC) |
VPC or Subnet (VLAN) | Displays the name of the attached VPC or VLAN subnet. | (Name of VLAN or VPC) |
Tunnel IP | Displays the Tunnel IP address of the local gateway. | (IP Address) |
Connection Type | Displays the connection type you selected while creating the VPN connection. The connection type may be Initiator or Acceptor of a VPN connection between the local and remote gateways. | (Initiator or Acceptor) |
External Routing | Displays the type of routing associated with the gateway for external traffic routing. | (Static or eBGP with ASN) |
Internal Routing | Displays the type of routing associated with the gateway for internal traffic routing. | (Static or eBGP with ASN) |
Floating or Private IP Address | Displays the Floating (for VPC) or Private (for VLAN) IP address assigned to the gateway. | (IP Address that you assigned to the local gateway with /30 prefix when you configured the VPN connection.) |
Status | Displays the operational status of the gateway. | (Up or Down) |
Cluster | Displays the name of the cluster on which the gateway is created. | (Cluster name) |
Gateway VM | Displays the name of the VM on which the gateway is created. | (Name of VM - actionable link. Click the name-link to open the VM details page of the gateway VM.) |
Remote Gateway Properties | ||
Gateway Name | Displays the name of the remote gateway used for the connection. | (Name of remote gateway) |
Type | Displays the type of gateway. | (Remote) |
Tunnel IP | Displays the Tunnel IP address of the remote gateway. | (IP Address) |
Connection Type | Displays the connection type you selected while creating the VPN connection. The connection type may be Initiator or Acceptor of a VPN connection between the local and remote gateways. | (Initiator or Acceptor) |
External Routing | Displays the type of routing associated with the gateway for external traffic routing. | (Static or eBGP with ASN) |
ASN | Displays the ASN of the EBGP route. This information is only displayed if you configured EBGP as the External Routing protocol. | (Number) |
Vendor | Displays the name of the vendor of the gateway appliance at the remote site. | (Name of vendor of gateway appliance) |
External IP | Displays the IP address assigned to the remote gateway. | (IP Address that you assigned to the remote gateway with /30 prefix when you configured the VPN connection.) |
Status | Displays the operational status of the gateway. | - |
Protocol Details | ||
Service | Displays the service used by the gateway. | (VPN or VTEP) |
Gateway Routes | Displays the status of the routes used by the gateways. | (Sent) |
The Connectivity dashboard opens on the Subnet Extensions dashboard or summary view.
The Subnet Extensions dashboard provides a list of subnet extensions created for the clusters managed by the Prism Central.
The Subnet Extensions dashboard provides a Create Subnet Extension dropdown menu that lets you extend a subnet Across Availability Zones or To a Third Party Data Center . You can extend a subnet using VPN or VTEP service. See Layer 2 Virtual Network Extension for more information.
You can select a subnet extension from the list (select the checkbox provided for the subnet extension) and then perform an action provided in the Actions dropdown list. The Actions dropdown list allows you to Update or Delete the selected subnet extension.
The Subnet Extensions summary list view provides the following details about the subnet extension.
Parameter | Description | Values |
---|---|---|
Name | Displays the name of the subnet extension. | (Name of subnet extension) |
Type | Displays the subnet extension type. | ( Across Availability Zones or To a Third Party Data Center ) |
Extension Over | Displays the service that the subnet extension uses. | (VPN or VTEP) |
Extension Uses | Displays the name of the local network gateway that the subnet extension uses. | (Name of local network gateway) |
Local Subnet | Displays the name of the local subnet that the subnet extension uses. | (Name of local subnet) |
Remote Site | Displays the name of the remote network gateway that the subnet extension uses. | (Name of remote network gateway) |
Connection Status | Displays the status of the connection that is created by the subnet extension. Not Available status indicates that Prism Central is unable to ascertain the status. | (Not Available, Connected, or Disconnected) |
Interface Status | Displays the status of the interface that is used by the subnet extension. | (Connected or Down) |
You can click the name of a subnet extension in the Subnet Extensions dashboard list to open the subnet extension details page that presents the information about the subnet extension in widgets.
The subnet extension details page displays the name of the subnet extension on the top left corner. It has two tabs - Summary and Address Table . The Summary tab provides the information about the subnet extension in widgets. The Address Table tab provides MAC Address information only when the subnet extension uses VTEP service.
On the top right corner, the close button (X) allows you to close the details page.
The Update button opens the Update Subnet Extension page. See Updating an Extended Subnet for more information.
The Delete button allows you to delete the subnet extension. See Removing an Extended Subnet for more information.
The details about the subnet extension are organized in two tabs. The Summary tab organizes the subnet extension details in the extended widget as provided in the table. The Address Table tab provides details about the MAC addresses in a list.
Parameter | Description | Values |
---|---|---|
Properties | ||
Type | Displays the subnet type. | (VLAN or Overlay) |
VLAN ID | (For VLAN subnets only) Displays the VLAN ID of the VLAN subnet that is extended. | (VLAN ID number) |
VPC | (For Overlay subnets only) Displays the name of the VPC subnet that is extended. | (Name of VPC) |
Cluster | (For VLAN subnets only) Displays the cluster that the VLAN subnet belongs to. | (Name of cluster) |
IP Address Prefix | Displays the network IP address with prefix, of the VLAN subnet that is extended. | (IP Address with prefix) |
Virtual Switch | (For VLAN subnets only) Displays the virtual switch on which the VLAN subnet is configured. | (Virtual Switch name such as vs0 or vs1) |
IP Address Pools | ||
Pool Range | Displays the range of IP addresses in the pool configured in the subnet that is extended. | (IP address range) |
(Interactive Graphic Pie Chart) | Displays a dynamic pie chart that shows the statistic you hover on, along with the following IP address statistics outside the pie chart, which you can also hover on: | (IP Address statistics) |
Subnet Extension | ||
Subnet Extension (properties) - Common | ||
Type | Displays the subnet extension type. | ( Across Availability Zones or To a Third Party Data Center ) |
Interface Status | Displays the status of the interface that is used by the subnet extension. | (Connected or Down) |
Connection Status | Displays the status of the connection that is created by the subnet extension. Not Available status indicates that Prism Central is unable to ascertain the status. | (Not Available, Connected, or Disconnected) |
Local IP Address | Displays the IP address that you entered in the Local IP Address field while creating the subnet extension. | (IP Address) |
Local Subnet | Displays the name of the local subnet that the subnet extension uses. | (Name of local subnet) |
Subnet Extension (properties) - (Only for Across Availability Zones type) | ||
Local Availability Zone | (Only for Across Availability Zones type) Displays the name of the local AZ that is hosting the subnet that is extended. | (Name of the local Availability Zone) |
Remote Availability Zone | (Only for Across Availability Zones type) Displays the name of the remote AZ that the subnet is extended to. | (Name of the remote Availability Zone) |
Remote Subnet | (Only for Across Availability Zones type) Displays the name of the remote subnet that the subnet extension connects to. | (Name of remote subnet) |
Remote IP Address | (Only for Across Availability Zones type) Displays the IP address that you entered in the Remote IP Address field while creating the subnet extension. | (IP Address) |
Subnet Extension (properties) - (Only for To a Third Party Data Center type) | ||
Local Gateway | (Only for To a Third Party Data Center type) Displays the name of the local gateway used for the subnet extension. | (Name of local gateway) |
Remote Gateway | (Only for To a Third Party Data Center type) Displays the name of the remote gateway used for the subnet extension. | (Name of remote gateway) |
To access the security policies dashboard, select Policies > Security Policies from the entities menu (see Entities Menu). The security policies dashboard allows you to view summary information about defined security policies.
The following table describes the fields that appear in the security policies list. A dash (-) is displayed in a field when a value is not available or applicable.
Parameter | Description | Values |
---|---|---|
Name | Displays the policy name. The policy is one of three types: application, quarantine, or isolation. | (name), Application, Quarantine, Isolation |
Purpose | Describes (briefly) the policy's purpose. | (text string) |
Policy | Displays (high level) what the policy does. | (boxed text) |
Status | Displays the current status of the policy (either applied currently or in monitoring mode). | Applied, Monitoring |
Last Modified | Displays the date the policy was last modified (or the creation date if the policy has never been modified). | (date) |
You can filter the security policies list based on several parameter values. The following table describes the filter options available when you open the Security Policies view Filter pane. To apply a filter, select a parameter and check the box of the desired value (or multiple values) you want to use as a filter. You can apply filters across multiple parameters.
Parameter | Description | Values |
---|---|---|
Name | Filters on the item name. Select a condition from the pull-down list ( Contains , Doesn't contain , Starts with , Ends with , or Equal to ) and enter a string in the field. It will return a list of security policies that satisfy the name condition/string. | (policy name string) |
Type | Filters on the policy type. Check the box for one or more of the policy types (application, quarantine, isolation). It will limit the list to just those policy types. | Application, Quarantine, Isolation |
Status | Filters on the policy status. Check the box for applied or monitoring. | Applied, Monitoring |
The security policies dashboard includes a Create Security Policy action button with a drop-down list of options to Secure an Application or Isolate Environments .
The Actions menu appears when one or more policies are selected. It includes options to update, apply, monitor, and delete. The available actions appear in bold; other actions are grayed out. (For grayed out options, a tool tip explaining the reason is provided.)
To access the details page for a security policy, click on the desired security policy name in the list (see Security Policies Summary View). The Security Policy details page includes the following:
For more information about Security Policies, see Flow Microsegmentation Guide.
A Virtual Private Cloud (VPC) is an independent and isolated IP address space that functions as a logically isolated virtual network. A VPC can consist of one or more subnets that are connected through a logical or virtual router. The IP addresses within a VPC must be unique. However, IP addresses may overlap across VPCs. Because VPCs are provisioned on top of another IP-based infrastructure (connecting AHV nodes), they are often referred to as overlay networks. Tenants can spin up VMs and connect them to one or more subnets within a VPC.
A VPC is a virtualized network of resources that is specifically isolated from the rest of the resource pool. A VPC allows you to manage an isolated and secure virtual network with enhanced automation and scaling. The isolation is done using network namespace techniques like IP-based subnets or VLAN-based networking.
AHV provides the framework to deploy VPC on on-premises clusters using the following.
Flow Networking simplifies the deployment and configuration of overlay-based VPCs. It allows you to quickly:
This section covers the concepts and procedures necessary to implement VPCs in the network.
The primary IP address is assigned to a VM during initialization when the cluster provides a virtual NIC (vNIC) to the VM.
For more information about attaching a subnet to a VM, see Creating a VM through Prism Central (AHV) in the Prism Central Guide .
For your deployment, you may need to configure multiple (static) IP addresses on a single NIC. These IP addresses (other than the primary IP address) are secondary IP addresses. A secondary IP address can be permanently associated with a specific NIC or be moved to another NIC. The NIC ownership of a secondary IP address is important for security routing policies.
You can configure secondary IP addresses to a NIC when you want to:
If an application uses secondary IP addresses as virtual IP addresses and the NIC ownership of a secondary IP address changes dynamically from one NIC to another, configure the application to incorporate the ownership change in its settings or configuration. If the application does not incorporate these ownership changes, the VPCs configured for such applications fail.
For information about configuring secondary IP addresses, see Creating Secondary IP Addresses.
You can view the IP addresses configured on a VM by clicking the See More link in the IP Address column in the VM details view to open the IP Address Information box.
You can assign multiple secondary IP addresses to a single vNIC.
You can add multiple secondary IP addresses to the vNIC configured on a VM. Add the secondary IP addresses to the vNIC in the Create VM or the Update VM page.
Ensure that the secondary IP addresses are within the same subnet that the primary IP address of the NIC is from. The subnets are displayed in the Private IP Assignment section in the Update NIC dialog box.
Ensure that the secondary IP address is not the same as the IP address provided in the Private IP Assignment field.
If you need to make any other changes on the Resources and the Management tabs for any configurations other than adding secondary IP addresses, make the changes and then click Next on these tabs.
Assign the secondary IP addresses to interfaces or subinterfaces on the VM.
To assign the secondary IP addresses to virtual interfaces on the VM, do the following on the VM details page:
root@host$ ifconfig <interface> <secondary ip address> netmask <network mask>
Provide the following in the command:
Parameter | Description |
---|---|
<interface> | The interface of the VM such as eth0. You can provide subinterfaces such as eth0:1 and eth0:2. |
<secondary IP address> | The secondary IP address that you created and want to associate with the interface. |
<network mask> | The network mask that is an expansion of the network prefix of the network that the secondary IP address belongs to. For example, if the secondary IP address belongs to 10.0.0.0/24 then the network mask is 255.255.255.0. |
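For example, a minimal sketch, assuming a Linux guest in which eth0 holds the primary IP address and 10.0.0.25 is a hypothetical secondary IP address from the 10.0.0.0/24 subnet:

root@host$ ifconfig eth0:1 10.0.0.25 netmask 255.255.255.0 up

On guest distributions that ship iproute2 instead of net-tools, the equivalent command is ip addr add 10.0.0.25/24 dev eth0.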
Assign the secondary IP addresses to floating IP addresses on the VM.
After you assign secondary IP addresses to interfaces or subinterfaces on the VM, you can assign the secondary IP addresses to floating IP addresses that may be used for external connectivity.
Do one of the following:
A virtual private cloud (VPC) can be deployed on Nutanix cluster infrastructure to manage the internal and external networking requirements using Flow Networking. The workflow to create a complete network based on VPC is described below.
This section provides information and procedures that you need to manage virtual private clouds using Flow networking.
You can create VPCs on the Virtual Private Clouds page. Go to the Virtual Private Clouds page by clicking Virtual Infrastructure > Networking > Virtual Private Clouds .
To create a VPC, do the following.
See Network and Security View for more information about the VPC dashboard.
Fields | Description and Values |
---|---|
Name | Provide a name for the VPC. |
External Connectivity | This section takes you through configuration of the parameters necessary for connectivity to the Internet or clusters outside the VPC. A subnet with external connectivity (External Subnet) is required if the VPC needs to send traffic to a destination outside of the VPC. Note: You can add a maximum of two external subnets to a VPC - one external subnet with NAT and one external subnet without NAT. Both external subnets cannot be of the same type; for example, you cannot add two external subnets that both use NAT. You can update an existing VPC similarly. Network address translation (NAT) gateways perform the IP address translations required for external routing. You can also have external connectivity without NAT. |
External Subnet | Select an external subnet from the drop down list. By associating the VPC with the external subnet you can provide external connectivity to the VPC. Note: Ensure that the externally routable IP addresses (subnets with external connectivity without NAT) for different VPCs do not overlap. Configure the routes for the external connectivity subnets with next hop as the Router or SNAT IP address. Also configure the routes on the router for the return traffic to reach the VPC. See External Connectivity panel in VPC Details View. |
Externally Routable IP Addresses | Provide IP addresses that are externally routable. Externally routable IP addresses are IP addresses within the VPC that can communicate externally without NAT. These IP addresses are used when an external subnet without NAT is used. |
Domain Name Servers (DNS) | (Optional) DNS is advertised to Guest VMs via DHCP. This can be overridden in the subnet configuration. Click + Server IP to add DNS server IPs under IP Address and click the check mark. You can Edit or Delete an IP address you added using the options under Actions . |
Each VPN gateway requires a floating IP. If you do not provide one during the VPN gateway creation, then Flow Networking automatically allocates a floating IP to the VPN gateway. To provide a floating IP during the VPN gateway creation, you can request floating IPs and assign them to VMs.
You can view the allocated floating IPs on the Floating IPs page. Click Networking > Floating IPs .
To request a floating IP, do the following.
Uncheck the Assign Floating IPs box if you want to assign the requested IP addresses after you receive them.
See Floating IPs for more information.
Fields | Description and Values |
---|---|
External Subnet | Select a subnet that you configured with external connectivity. |
Number of Floating IPs | Enter the number of floating IPs you want. You can request a maximum of 5 floating IP addresses. |
Assign Floating IPs |
Select this check box if you want to assign the floating IPs to specific VMs in the table. Based on the number you entered in the Number of Floating IPs field, the system provides an equivalent number of rows of Search VMs and IP Address in the table. Under Search VMs , select the VM to which you want to assign a floating IP address. Under IP Address , select the IP address on the VM (primary or secondary IP address) to which you want to assign the floating IP. You can assign multiple floating IP addresses to multiple secondary IP addresses that you can create on the NIC of the VM. For information about configuring secondary IP addresses, see Creating Secondary IP Addresses. |
You can create subnets on the Subnets page. Go to the Subnets page by clicking Virtual Infrastructure > Networking and open the Create Subnet dialog box.
You can also open the Create Subnet dialog box from the VPC details view by clicking the Add Subnet option.
To create a subnet, do the following.
Fields | Description and Values |
---|---|
Name | Provide a name for the subnet. |
Type | Select the type of subnet you want to create. You can create a VLAN subnet or an Overlay subnet. |
VLAN ID | (VLAN subnet only) Enter the number of the VLAN. Enter just the number in this field, for example 1 or 27. Enter 0 for the native VLAN. The value is displayed as vlan.1 or vlan.27 in the View pages. Note: Provision any single VLAN ID either in the AHV network stack or in the Flow Networking (brAtlas) networking stack. Do not use the same VLAN ID in both the stacks. |
IP Address Management | (Mandatory for Overlay type subnets) This section provides the Network IP Prefix and Gateway IP fields for the subnet. (Optional for VLAN type subnets) Check this box to display the Network IP Prefix and Gateway IP fields and configure the IP address details. Unchecking this box hides these fields. In this case, it is assumed that this virtual LAN is managed outside the cluster. Note: The DHCP Settings option is only available for VLAN subnets if you select this option. |
DHCP Settings | (Optional for both VLAN and Overlay subnets) Check this box to display fields for specifying DNS servers and domains. Unchecking this box hides those fields. See Setting the DHCP Options for more information. |
Cluster (VLAN subnet only) | (VLAN subnet only) This option is available only for VLAN subnet configuration. Select the cluster that you want to assign to the subnet. |
External Connectivity | (VLAN subnet only) Turn on this toggle switch if you want to use this VLAN subnet for external connectivity. Note: Ensure that the externally routable IP addresses (subnets with external connectivity without NAT) for different VPCs do not overlap. Configure the routes for the external connectivity subnets with next hop as the Router or SNAT IP address. Also configure the routes on the router for the return traffic to reach the VPC. See External Connectivity panel in VPC Details View. |
NAT | (Option under External Connectivity ) If you turn on the External Connectivity toggle switch, then you can choose whether to connect to external networks with or without enabling NAT. Check the NAT check box to enable NAT for external connectivity for VPCs. |
Virtual Switch | (VLAN subnet only) Select the virtual switch that is configured for the VLAN you selected. The default value is the default virtual switch vs0. This option is displayed only if you add a VLAN ID in the VLAN ID field. |
VPC (Overlay subnet only) | Select the Virtual Private Cloud (VPC) that you want to assign to the subnet from the drop down list. You can create VPCs and assign them to Overlay subnets. |
IP Address Pool | Defines a range of addresses for automatic assignment to virtual NICs. This field is optional for both VLAN and Overlay subnets. For VLAN , this field is displayed only if you select the IP Address Management option. Note: If you do not need external connectivity for this subnet, configure this field for VLAN or Overlay to complete the creation of the VPC. If you need external connectivity for this subnet, configuring this field is not required. Click the Create Pool button and enter the following in the Add IP Pool page: |
Override DHCP Server | (VLAN subnet only) To configure a DHCP server, check the Override DHCP Server box and enter an IP address in the DHCP Server IP Address field. See Override DHCP Server (VLAN Only) in Setting the DHCP Options for information about this option. |
Selecting the DHCP Settings checkbox in Create Subnet or Update Subnet allows you to configure the DHCP options for the VMs within the subnet. When DHCP settings are configured for a VM in a subnet and the VM is powered on, Flow Networking configures these options on the VM automatically. If you do not configure the DHCP settings, then these options are not available on the VM automatically when you power it on.
You can enable DHCP Settings when you create a subnet and configure the DHCP Settings for the new subnet. You can also update the DHCP Settings for an existing subnet.
DHCP Settings is common to and is available on both the Create Subnet and the Update Subnet dialog boxes.
To configure the DHCP Settings , do the following in the Create Subnet or the Update Subnet dialog box:
Fields | Description and Values |
---|---|
Domain Name Servers | Provide a comma-separated list of DNS IP addresses. Example: 8.8.8.8, 9.9.9.9 |
Domain Search | Enter the VLAN domain name. Use only the domain name format. Example: nutanix.com |
TFTP Server Name | Enter a valid host name of the TFTP server where you host the boot file. The IP address of the TFTP server must be accessible to the virtual machines to download a boot file. Example: tftp_vlan103 |
Boot File Name | The name of the boot file that the VMs need to download from the TFTP host server. Example: boot_ahv2020xx |
You can configure a DHCP server using the Override DHCP Server option only in case of VLAN networks.
The DHCP Server IP address (reserved IP address for the Acropolis DHCP server) is visible only to VMs on this network and responds only to DHCP requests. If this box is not checked, the DHCP Server IP Address field is not displayed and the DHCP server IP address is generated automatically. The automatically generated address is network_IP_address_subnet.254, or, if the default gateway is using that address, network_IP_address_subnet.253.
Usually the default DHCP server IP is configured as the last usable IP in the subnet (for example, 10.0.0.254 for the 10.0.0.0/24 subnet). If you want to use a different IP address in the subnet as the DHCP server IP, use the override option.
To attach a subnet to a VM, go to the Virtual Infrastructure > VM > List view in Prism Central and do the following.
The Network Connection State selection defines the state of the connection after the NIC configuration is implemented.
You can select Assign with DHCP to assign a DHCP based IP address to the VM.
You can select Assign Static IP to assign a static IP address to the VM to reach the VM quickly from any endpoint in the network such as a laptop.
For Policy-based routing you need to create policies that route the traffic in the network.
Policies control the traffic flowing between subnets (inter-subnet traffic).
Policies control the traffic flowing in and out of the VPC.
Policies do not control the traffic within a subnet (intra-subnet traffic).
You can create a traffic policy using the Create Policy dialog box. You can open the Create Policy dialog box either from the VPC list view or the VPC details view.
On the VPC list view, select the VPC you want to update and click Create Policy in the Actions drop down menu.
On the VPC details view, click the Create Policy option in the More drop down menu.
To create a policy, do the following in the Create Policy dialog box.
Fields | Description and Values | Value in Default Policy |
---|---|---|
Priority | The priority of the access control list (ACL) determines which ACL is processed first. Priority is indicated by an integer number; a higher priority number indicates a higher priority. For example, if two ACLs have priority numbers 100 and 70 respectively, the ACL with priority 100 takes precedence over the ACL with priority 70. | 1 |
Source | The source indicates the source IP address or subnet for which you want to manage traffic. Source can be: | Any |
Source Subnet IP | Only required if you selected the Source as Custom . Provide the subnet IP and prefix that you want to designate as the source for the policy. Use the CIDR notation format to provide the subnet IP. For example, 10.10.10.0/24. | None |
Destination | The destination indicates the destination IP address or subnet for which you want to manage traffic. Destination can be: | Any |
Destination Subnet IP | Only required if you selected the Destination as Custom . Provide the subnet IP in CIDR notation format. | None |
Protocol | You can also configure the policy for specific protocols. You can select one of the following options: | |
Protocol Number | This field is displayed only if you select Protocol Number as the value in the Protocol field. The number you provide must be the IANA-designated number that indicates the respective protocol. See IANA Protocol Numbers . | None |
Action | Assign the appropriate action for implementation of the policy. | Permit |
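For example, the IANA-assigned protocol numbers for ICMP, TCP, and UDP are 1, 6, and 17, respectively; entering 6 in the Protocol Number field applies the policy to TCP traffic.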
You can create a static route using the Create Static Routes dialog box. You can open the Create Static Routes dialog box either from the VPC list view or the VPC details view.
On the VPC list view, select the VPC and click Create Static Routes in the Actions drop down menu.
On the VPC details view, click the Create Static Routes option in the More drop down menu.
To create a static route, do the following in the Create Static Routes dialog box:
Fields | Description and Values |
---|---|
Destination Prefix | Provide the IP address with prefix of the destination subnet. |
Next Hop Link | Select the next hop link from the drop down list. The next hop link is the IP address to which the traffic must be sent for the static route you are configuring. |
Add Prefix | You can create multiple static routes using this option. Click this link to add another set of Destination Prefix and Next Hop Link to configure another static route. |
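For example (an illustrative configuration with hypothetical values): to send all Internet-bound traffic from the VPC through the NAT external subnet, you could add a static route with Destination Prefix 0.0.0.0/0 and the external subnet as the Next Hop Link.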
You can update a VPC using the Update Virtual Private Cloud (VPC) dialog box. You can open the Update Virtual Private Cloud (VPC) dialog box either from the VPC list view or the VPC details view.
On the VPC list view, select the VPC you want to update and click Update in the Actions drop down menu.
On the VPC details view, click the Update option.
The Update Virtual Private Cloud (VPC) dialog box is identical to the Create Virtual Private Cloud (VPC) dialog box.
For details about the parameters that you can update in the Update Virtual Private Cloud (VPC) dialog box, see Creating Virtual Private Cloud.
You can update a subnet displayed on the Subnets page. Go to the Subnets page by clicking Virtual Infrastructure > Networking > Subnets and open the Update Subnet dialog box.
You can also open the Update Subnet dialog box from the VPC dashboard for a specific VPC. Click the Edit option for the subnet listed on the Subnets tab of the VPC dashboard.
To update a subnet, do the following.
The Update Subnet dialog box has the same fields as the Create Subnet dialog box. For details about the fields and the values that can be updated in the Update Subnet dialog box, see Creating a Subnet.
A category is a key-value pair that groups similar entities. Associating a policy with a category ensures that the policy applies to all the entities in the group regardless of how the group scales with time. For example, you can associate a group of VMs with the Department: Marketing category, where Department is a category that includes a value Marketing along with other values such as Engineering and Sales.
Currently, you can associate only VMs with a category. Categories are implemented in the same way on on-premises Prism Central instances and in Xi Cloud Services. For information about configuring categories, see the Prism Central Guide .
You can update a policy using the Update Policy dialog box. You can open the Update Policy dialog box in two ways in the VPC details view.
The Update Policy dialog box has the same parameters as the Create Policy dialog box.
For details about the parameters that you can update in the Update Policy dialog box, see Creating a Policy.
You can update a static route using the Update Static Routes dialog box. You can open the Update Static Routes dialog box either from the VPC list view or the VPC details view.
The Update Static Routes dialog box has the same parameters as the Create Static Routes dialog box.
For details about the parameters that you can update in the Update Static Routes dialog box, see Creating Static Routes.
Prism Central does not allow you to delete a VPC if the VPC is associated with any subnets or VPNs. After you remove all the subnet and VPN associations from the VPC, you can delete the VPC.
You can delete a VPC from the VPC list view or the VPC details view.
You can delete VPC entities such as subnets, policies or routes from the VPC details page.
Do the following.
This section covers the management of network gateways, VPN connections, and subnet extensions, including creating, updating, and deleting network gateways and VPN connections, and extending subnets.
You can create, update, or delete network gateways that host either the VPN or the VTEP service for connections.
VPNs connect two networks together, and can be used in both VLAN and VPC networks on AHV. In other words, you can extend the routing domain of a VLAN network or that of a VPC using a VPN. Accordingly, VPN gateways can be configured using VLANs or VPCs. You need VPN gateways on clusters to provide a gateway for the traffic between on-premises clusters or remote sites.
You can create multiple VPN gateways for a VPC. Since a VPC is configured only on a PC, the VPC is available to all the clusters registered to that PC.
A VPN gateway may be defined as a Local gateway or a Remote gateway based on where the traffic needs to be routed.
To create a VPN gateway, do the following on the Networking & Security > Connectivity > Gateways page.
Fields | Description | Values |
---|---|---|
VM Deployment | ||
Name | Enter a name for the network gateway. | (Name) |
Gateway Attachments | (For Local gateway type only) Select the gateway attachment as VPC or VLAN . The gateway VM is deployed in the VPC or on a cluster that has the selected VLAN, respectively. | (VLAN or VPC) |
Gateway VM Deployment - VPC Attachment | ||
Cluster | Select the cluster on which you want to deploy the Gateway VM on. | (Name of the cluster) |
VPC (If Gateway Attachment type is VPC) | Select the VPC configured on the selected cluster that you want to use for the Gateway VM deployment. | (Name of the VPC selected) |
Floating IP (Optional) | Select a floating IP for the network gateway configuration. If you do not select a floating IP address then Prism Central allocates a floating IP automatically. This allocated floating IP is deleted when you delete the gateway. To request floating IPs and allocate them to subnets, see Requesting Floating IPs. | (IP address) |
Gateway VM Deployment - VLAN Attachment | ||
Cluster | Select the cluster, from the drop down list, on which you want to deploy the Gateway VM. Note: Only clusters with VLANs are available in the list. | (Name of the cluster) |
Subnet | Select the subnet you want to attach the Gateway VM to, from the drop down list. Note: The list includes all the subnets you created on the selected cluster. After you select the subnet, the details of the subnet are displayed in a box below the Subnet field. The details include the VLAN ID, the IPAM type (Managed or Unmanaged), and the network address with prefix. | (Name of the VLAN subnet) |
Static IP Address for VPN Gateway VM | Enter the static IP address that the Gateway VM needs to use. | (IP Address with Prefix) |
Default Gateway IP | Enter the default gateway IP of the subnet for the Gateway VM. | (IP Address) |
Service Configuration | ||
Gateway Service | Select the gateway service you want to use for the gateway. | (VPN or VTEP) |
VPN Service Configuration - External Routing Configuration (This section is available for VLAN and VPC attachment types) | ||
Routing Protocol | Select the routing protocol to be used for external routing. | (Static or eBGP) |
Redistribute Connected Routes (Applicable only if VLAN type gateway attachment is selected) | ( VLAN only) Select this checkbox to enable the redistribution of connected routes into the eBGP. | (Check mark or blank) |
ASN (Only available if eBGP routing protocol is selected) | (For eBGP only) Enter the ASN for your on-prem gateway. If you do not have a BGP environment in your on-prem site, you can choose any number. For example, you can choose a number in the 65000 range. Note: Make sure that this ASN does not conflict with any of the other on-premises BGP ASNs. ASN must be distinct in case of eBGP. | (Number) |
eBGP Password | (For eBGP in Local gateway type only) Enter the eBGP password for the eBGP route. | (Password: The password must be between 1 and 80 characters.) |
VPN Service Configuration - Internal Routing Configuration (This section is available for VLAN attachment type only.) | ||
Routing Protocol (Between On-prem Gateway and On-prem Router) | Select the Routing Protocol to be used between the on-premises Nutanix gateway and the on-premises router. You can select: | (Static or OSPF or iBGP) |
+Add Prefix (Applicable to Static routing) | (For Static routing selected in Routing Protocol ) Click this to enter a Local Prefix and click the check mark under Actions to add the prefix. If you click the X mark under Actions , the local prefix you entered is not added. The prefixes you add are advertised to all the connected peers via eBGP. The prefix must be a valid IP address with the host bits not set. You can add multiple local prefix IP addresses. | (prefix like /24) |
Area ID (Applicable to OSPF protocol) | (OSPF only) Enter the OSPF area id in the IPv4 address format. | |
Password Type | (OSPF only) Select the password type you want to set for the OSPF route. The options are MD5 and Plain Text. | (MD5 or Plain Text) |
Password | (OSPF only) Enter a password for the MD5 or Plain Text password type you select in the Password Type field. | (Password) |
Peer IP (for iBGP) | Enter the IP Address of the On-prem router used to exchange routes with the network gateway. | (IP Address) |
Password | Enter a password with 1-80 characters. | (Password) |
VTEP Service Configurations | ||
VxLAN (UDP) Port | The default value provided is 4789. Do not change this. | (Number. Default value is 4789) |
Fields | Description | Values |
---|---|---|
Name | Enter a name for the network gateway. | (Name) |
Gateway Service | Select the gateway service you want to use for the gateway. | (VPN or VTEP) |
VPN Service Configurations | ||
Public IP Address | Enter the public IP address of the remote endpoint. If a Floating IP is not selected, a new Floating IP is automatically allocated for the Gateway. These allocated IP addresses are deleted when the network gateway is deleted. | (IP Address) |
Vendor | Select the vendor of the third party gateway appliance. | (Name of Vendor) |
External Routing | ||
Protocol | Select the routing protocol to be used for external routing. | (Static or eBGP) |
eBGP ASN (Only available if eBGP routing protocol is selected) | (For eBGP only) Enter the ASN for your on-prem gateway. If you do not have a BGP environment in your on-prem site, you can choose any number. For example, you can choose a number in the 1-65000 range. Note: Make sure that this ASN does not conflict with any of the other on-premises BGP ASNs. ASN must be distinct in case of eBGP. | (Number) |
VTEP Service Configurations | ||
VTEP IP Address | Enter VTEP IP Addresses of the remote endpoints that you want to create the gateway for. You can add IP addresses of multiple endpoints in one remote gateway. | (Comma separated list of IP Addresses) |
VxLAN (UDP) Port | The default value provided is 4789. Do not change this. | (Number. Default value is 4789) |
The Gateway you create is displayed in the Gateways page.
You can update a network gateway using the Update Gateway dialog box. The parameters in the Update Gateway dialog box are the same as those in the Create Local Gateway or Create Remote Gateway dialog box.
If you want to delete a network gateway, you must first delete all the VPN connections associated with the gateway; only then can you delete the network gateway.
To delete a network gateway, do the following on the Gateway page.
You can use the Nutanix VPN solution to set up VPN between your on-prem clusters, which exist in distinct routing domains that are not directly connected. These distinct routing domains could either be VPCs within the same cluster or remote clusters or sites.
If you need to connect one Nutanix deployment in one site to another deployment in a different site, you can create a VPN endpoint in each of the sites. A VPN endpoint consists of a local VPN gateway, a remote VPN gateway, and a VPN connection. A local VPN gateway can be instantiated in a VPC context or a legacy VLAN context. Launching the VPN gateway within a VPC allows stretching of the VPC. For example, a VPC can be stretched between two sites with a VPN.
VPN connections are useful in connecting two points. You can connect two VPCs in the same cluster using a VPN, or VPCs in different clusters in the same site. However, a VPN connection can connect only one endpoint to another endpoint. The Flow Networking based VPN service allows you to connect only two endpoints that use the Nutanix VPN based gateway service.
To connect one endpoint to multiple endpoints or third party (non-Nutanix) networks, use Virtual Tunnel End Point (VTEP) service based subnet extensions.
You can configure multiple VPN endpoints for a site.
Each endpoint must have configurations for a local VPN gateway, remote VPN gateway (pointer information for the peer local VPN in the remote site endpoint) and a VPN connection (connecting the two endpoints). Then, based on the VPN connection configuration as initiator or acceptor, one endpoint initiates a tunnel and the endpoint at the other end accepts the tunnel connection and, thus, establishes the VPN tunnel.
Gateways: Every VPN endpoint for each site consists of two VPN gateway configurations - Local and Remote.
A local gateway is a VM that runs the VPN protocols (IKEv2, IPSec) and routing (BGP and OSPF). A remote gateway is a pointer - a database entry - that provides information about the peer remote VPN endpoint. One key piece of information contained in the remote gateway is the source IP of the remote VPN endpoint. For security reasons, the local VPN gateway accepts IKEv2 packets originating only from this source IP.
VPN gateways are of the following types:
On-premises Nutanix VPN Gateway: Represents the VPN gateway appliance at your on-premises local or remote site if you are using the Nutanix VPN solution.
On-premises Third Party Gateway: Represents the VPN gateway appliance at your on-prem site if you are using your own VPN solution (provided by a third-party vendor).
To configure third party VPN Gateways, see the relevant third party documentation.
VPN Connection: Represents the VPN IPSec tunnel established between local gateway and remote gateway. When you create a VPN connection, you need to select two gateways between which you want to create the VPN connection.
VPN appliances perform the following:
Ensure that you have enabled Flow Networking with microservices Infrastructure.
Ensure that you have floating IP addresses when you create VPN gateways.
Flow Networking automatically allocates a floating IP to a VPN gateway if you do not provide one during the VPN gateway creation. To provide a floating IP during the VPN gateway creation, you can request floating IPs. See Requesting Floating IPs.
Ensure that you have one of the following, depending on whether you are using iBGP or OSPF:
Peer IP (for iBGP): The IP address of the router to exchange routes with the VPN gateway VM.
Area ID (for OSPF): The OSPF area ID for the VPN gateway in the IP address format.
Ensure that you have the following details for the deployment of the VPN gateway VM:
Public IP address of the VPN Gateway Device: A public WAN IP address that you want the on-prem gateway to use to communicate with the Xi VPN gateway appliance.
Static IP Address: A static IP address that you want to allocate to the VPN gateway VM. You can use a requested floating IP address as the static IP address.
IP Prefix Length: The subnet mask in CIDR format of the subnet on which you want to install the VPN gateway VM. You can use an overlay subnet used for a VPC and assigned to the VM that you are using for the VPN gateway.
Default Gateway IP: The gateway IP address for the on-premises VPN gateway appliance.
Gateway ASN: The ASN must not be the same as any of your on-prem BGP ASNs. If you already have a BGP environment in your on-prem site, use your organization's ASN as the gateway ASN. If you do not have a BGP environment in your on-prem site, you can choose any number. For example, you can choose a number in the 65000 range.
Nutanix software uses a number of ports and protocols, some of which must be open in the firewalls for Flow Networking to function. To see the ports and protocols used by Flow Networking, see Port Reference.
The following endpoints and terminations occur in the course of Flow networking based connections. For information about creating, updating or deleting VPN connections, see Connections Management.
In this scenario, the IPSec tunnel terminates behind a network address translation (NAT) or firewall device. For NAT to work, open UDP ports 500 and 4500 in both directions.
Things to do in NAT | Things to do in on-prem VPN GW |
---|---|
Open UDP ports 500 and 4500 in both directions. | Enable the business application policies to allow the commonly used business application ports. |
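For example, a minimal sketch of the NAT-device rules, assuming a Linux-based firewall managed with iptables (the FORWARD chain and rule placement are illustrative and depend on your firewall design):

root@host$ iptables -A FORWARD -p udp --dport 500 -j ACCEPT   # IKE
root@host$ iptables -A FORWARD -p udp --sport 500 -j ACCEPT
root@host$ iptables -A FORWARD -p udp --dport 4500 -j ACCEPT  # IPSec NAT-T
root@host$ iptables -A FORWARD -p udp --sport 4500 -j ACCEPT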
In this scenario, you do not need to open the ports for NAT (500 and 4500).
However, enable the on-prem VPN gateway to allow the traffic from the PC subnet to the advertised load balancer route where the Source port is any and the Destination port may be in the range of 1024-1034.
The PC subnet refers to the subnet where your Prism Central is running.
Create a VPN connection to establish a VPN IPSec tunnel between VPN gateways in your on-prem site. Select the gateways between which you want to create the VPN connection.
To create a VPN connection, do the following on the Networking > VPN Connections page.
Fields | Description and Values |
---|---|
Name | Enter a name for the connection. |
VPN Connection | |
IPSec Secret | Enter a secret password for the IPSec connection. To see the password, click Show . To hide the password, click Hide . |
Local Gateway | Select the connection parameters on the local gateway as Initiator or Acceptor of VPN Tunnel connections. |
VPN Gateway | Select the appropriate VPN Gateway as the local gateway for the VPN connection. |
VTI Prefix - Local Gateway | Enter an IPv4 address with /<prefix>. Example: 10.25.25.2/30. This is the VPN tunnel interface IP address with prefix for the local gateway. The subnet for this IP address must be a /30 subnet with two usable IP addresses. One of the IP addresses is used for the Local Gateway. Use the other IP address for the Remote Gateway. |
Connection Handshake | This defines the type of handshake that the connection must use. There are two types of connection handshakes: Initiator and Acceptor. Note: In a VPN connection, do not configure both the gateways (local gateway and remote gateway) in an endpoint as Initiators or as Acceptors. If you configure the local gateway as Initiator, then configure the remote gateway as Acceptor in one endpoint, and vice-versa in the (other) remote endpoint. |
Remote Gateway | For a specific VPN connection, set the remote gateway as Initiator or Acceptor when you configure the VPN connection on the Remote Gateway. |
VPN Gateway | Select the appropriate VPN Gateway as the remote gateway for the VPN connection. |
VTI Prefix - Remote Gateway | Enter an IPv4 address with /<prefix>. Example: 10.25.25.1/30. This is the VPN tunnel interface IP address with prefix for the remote gateway. The subnet for this IP address must be the same /30 subnet used for the local gateway; assign the remaining usable IP address to the remote gateway. |
Advanced Settings | Set the traffic route priority for the VPN connection. The route priority uses Dynamic route priority because the priority is dependent on the routing protocol configured in the VPN gateway. |
Route Priority - Dynamic Route Priority | Set the route priority as an integer number. The greater the number, the higher the priority. |
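As a worked example using the VTI prefixes above: the /30 subnet 10.25.25.0/30 has exactly two usable addresses, 10.25.25.1 and 10.25.25.2. If you enter 10.25.25.2/30 as the local gateway VTI prefix, enter 10.25.25.1/30 as the remote gateway VTI prefix.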
You can update a VPN connection using the Update VPN Connection dialog box. The parameters in the Update VPN Connection dialog box are the same as those in the Create VPN Connection dialog box.
To delete a VPN connection, do the following on the VPN Connection page.
You can connect two VPCs within the same Prism Central availability zone using a VPN connection.
Assume that you have created two VPCs named vpc-a and vpc-b with overlay subnets named subnet-a and subnet-b .
To connect the two VPCs within the same Prism Central using a VPN connection, do the following.
See Creating a Network Gateway for more information about creating a VPN gateway.
See Creating a Network Gateway for more information about creating a VPN gateway.
Ensure that you select local-vpn-a as the local gateway with Connection Handshake set as Acceptor .
Ensure that you select remote-vpn-b as the remote gateway.
Ensure that you select local-vpn-b as the local gateway with Connection Handshake set as Initiator .
Ensure that you select remote-vpn-a as the remote gateway.
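To recap this example: the first connection pairs local-vpn-a (Acceptor) with remote-vpn-b , and the second pairs local-vpn-b (Initiator) with remote-vpn-a ; the Initiator side establishes the IPSec tunnel and the Acceptor side accepts it, connecting vpc-a and vpc-b .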
You can extend a subnet between on-prem local and remote clusters or sites (Availability Zones or AZs) to support seamless application migration between these clusters or sites.
With Layer 2 subnet extension, you can migrate a set of applications to the remote AZ while retaining their network bindings such as IP address, MAC address, and default gateway. Since the subnet extension mechanism allows VMs to communicate over the same broadcast domain, it eliminates the need to re-architect the network topology, which could otherwise result in downtime.
Layer 2 extension assumes that underlying Layer 3 connectivity is already available between the Availability Zones. You can extend a subnet from a remote AZ to the primary (local) AZ (and to other remote AZs in the case of VTEP-based subnet extensions).
You can extend subnets for the following configurations.
Ensure the following before you configure Layer 2 subnet extension between your on-prem AZs.
See the Prism Central Upgrade and Installation Guidelines and Requirements section of the Acropolis Upgrade Guide for instructions about how to upgrade a Prism Central instance through the Prism Central web console.
See the Pairing Availability Zones for instructions about how to pair the local and remote AZs.
Ensure that a static route with the 0.0.0.0/0 prefix and the External Network next hop is configured for the VPC you use for any subnet extension. This allows NTP and DNS access for the Network Gateway appliance.
Nutanix recommends the following configurations to allow IP address retention for VMs on extended subnets.
You can manage Layer 2 subnet extensions on the Subnet Extensions tab of the Connectivity page. Open the Subnet Extensions tab by clicking the hamburger icon in the top-left corner of the Dashboard and then clicking Connectivity .
You can create point-to-point Layer 2 subnet extensions between two AZs over VPN or VTEP by opening the Create Subnet Extension Across Availability Zones dialog box. See Extending a Subnet Over VPN for VPN-based extensions. See Extending a Subnet Across Availability Zones Over VTEP for VTEP-based extensions.
You can create point-to-point or point-to-multipoint Layer 2 subnet extensions to third party datacenters over VTEP by opening the Create Subnet Extension To A Third Party Data-Center dialog box. See Extending a Subnet to Third Party Datacenters Over VTEP.
You can update a subnet extension that extends across AZs using the Update Subnet Extension Across Availability Zones dialog box. The Update Subnet Extension Across Availability Zones has the same parameters and fields as the Create Subnet Extension Across Availability Zones dialog box. You can open the Update Subnet Extension Across Availability Zones dialog box by:
Selecting the subnet extended across AZs in the Subnet Extensions tab and clicking the Update button.
Clicking the subnet extended across AZs in the Subnet Extensions tab and clicking the Update button on the Summary tab.
You can update a subnet extension that extends to multiple AZs or third party datacenters using the Update Subnet Extension To A Third Party Data-Center dialog box. Update Subnet Extension To A Third Party Data-Center dialog box has the same parameters and fields as the Create Subnet Extension To A Third Party Data-Center dialog box. You can open the Update Subnet Extension To A Third Party Data-Center dialog box by:
Selecting the subnet extended to third party datacenters in the Subnet Extensions tab and clicking the Update button.
Clicking the subnet extended to third party datacenters in the Subnet Extensions tab and clicking the Update button on the Summary tab.
See Updating an Extended Subnet.
Subnet extension using VPN allows seamless, secure migration to a new datacenter or for disaster recovery. VPN based Layer 2 extension provides a secure point-to-point connection to migrate workloads between Availability Zones. Consider VTEP-only subnet extension without VPN when encryption is not required.
Subnet extension using VPN is useful:
See Layer 2 Virtual Network Extension for general prerequisites to extend subnets.
To use subnet extension over a VPN, both sites must use the VPN service of the Nutanix Network Gateway. Consider VTEP-only subnet extension to connect to non-Nutanix third party sites.
To replicate entities (protection policies, recovery plans, and recovery points) to different on-prem availability zones (AZs) bidirectionally, pair the AZs with each other. To replicate entities to different Nutanix clusters at the same AZ bidirectionally, you need not pair the AZs because the primary and the recovery Nutanix clusters are registered to the same AZ (Prism Central). Without pairing the AZs, you cannot perform DR to a different AZ.
To pair an on-prem AZ with another on-prem AZ, perform the following procedure at both the AZs.
The subnet extension allows VMs to communicate over the same broadcast domain to a remote site or Availability Zone (AZ).
Perform the following procedure to extend a subnet from the on-prem site.
Fields | Description | Values |
---|---|---|
Extend Subnet over a | Select the gateway service you want to use for the subnet extension. | (VPN or VTEP) |
Note: Configure the following fields for the Local and the Remote sides of the dialog box. | | |
Availability Zone | (For Local) The local AZ is pre-selected by default. (For Remote) Select the appropriate AZ from the drop-down list of AZs. | (Local: Local AZ) (Remote: Drop-down list of AZs) |
Subnet Type | Select the type of subnet that you want to extend. | (VLAN or Overlay) |
Cluster | Displayed if you selected a VLAN subnet. Select the cluster from the dropdown list of clusters. | (Name of cluster selected from dropdown list) |
VPC | Displayed if you selected an Overlay subnet. Select the appropriate VPC from the dropdown list of VPCs. | (Name of VPC selected from dropdown list) |
Subnet | Select the subnet that needs to be extended. | (Name of subnet selected from dropdown list) |
(Network Information frame) | Displays the details of the VLAN or Overlay network that you selected in the preceding fields. | (Network information) |
Gateway IP Address/Prefix | Displays the gateway IP address for the subnet. This field is already populated based on the subnet selected. | (IP Address) |
(Local or Remote) IP Address | Enter unique, available, and externally accessible IP addresses in Local IP Address and Remote IP Address . | (IP Address) |
VPN Connection | Select the appropriate VPN Connection from the dropdown list that Flow networking must use for the subnet extension. See Creating a VPN Connection for instructions to create VPN connection. | (Name of VPN connection selected from the dropdown list) |
A successful subnet extension is listed on the Subnet Extensions dashboard.
Subnet extension using Virtual tunnel End Point (VTEP) allows seamless migration to new datacenters or for disaster recovery. VTEP based Layer 2 extension provides point-to-multipoint connections to migrate workloads from one Availability Zone to multiple Availability Zones without encryption. If you need security and encryption, consider using Subnet Extension over VPN.
Subnet extension using VTEP is useful:
VTEP-based Layer 2 Subnet Extension provides the following advantages:
See Layer 2 Virtual Network Extension for general prerequisites to extend subnets.
Set up VTEP local and remote gateway services on local and remote AZs. In case of point-to-multipoint extension, ensure that you create local and remote VTEP gateways on all the remote AZs that the subnet needs to be extended to.
The subnet extension over VTEP allows VMs to communicate across two Availability Zones (AZs) without a VPN connection.
To extend a subnet over VTEP across two availability zones (AZs), do the following.
On the Subnet Extensions tab, click Create Subnet Extension > Across Availability Zones .
In the Subnets dashboard, select the subnet you want to extend and click Actions > Extend > Across Availability Zones .
In the Subnets dashboard, click the subnet you want to extend. On the subnet details page, click Extend > Across Availability Zones .
Parameters | Description and Value |
---|---|
Availability Zone | Displays the name of the paired availability zone at the local AZ. |
Subnet Type | Select the type of the subnet - VLAN or Overlay that you are extending. |
Cluster | Select the name of the cluster in the local AZ that the subnet is configured for. |
Subnet | Select the name of the subnet at the local AZ for the network. The VLAN ID and the IPAM type (managed or unmanaged) are displayed in the box below the Subnet field. |
Gateway IP Address | Enter the gateway IP address of the subnet you want to extend. Ensure that you provide the IP address in <IP-address/network-prefix> format. For example, if the gateway IP is 10.20.20.1 in a /24 subnet, then provide the gateway IP address as 10.20.20.1/24. Note: For an unmanaged network, enter the gateway IP address of the created subnet. |
Local IP Address | Enter a unique and available (unused) IP address from the subnet provided in Subnet for the Network Gateway appliance. |
Remote IP Address | Enter a unique and available (unused) IP address from the subnet provided in Subnet for the remote Network Gateway appliance. |
Local VTEP Gateway | Select the local VTEP gateway you created on the local AZ. See Creating a Network Gateway for information about creating VTEP gateways. |
Remote VTEP Gateway | Select the VTEP gateway you created on the remote AZ. See Creating a Network Gateway for information about creating VTEP gateways. |
Connection Properties | |
VxLAN Network Identifier (VNI) | Enter a unique number from the range 0-16777215 as VNI. Ensure that this number is not reused anywhere in the local or remote VTEP Gateways. |
MTU | The default MTU is 1392 to account for 108 bytes of overhead and the standard physical MTU of 1500 bytes. VPC Geneve encapsulation requires 58 bytes and VXLAN encapsulation requires 50. However, you can enter any valid MTU value for the network, taking this overhead into account. For example, if the physical network MTU and vs0 MTU are 1600 bytes, the Network Gateway MTU can be set to 1492 to account for 108 bytes of overhead. Ensure that the MTU value does not exceed the MTU of the AHV Host interface and all the network interfaces between the local and remote AZs. |
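As a worked check of the default value: Geneve (58 bytes) plus VXLAN (50 bytes) encapsulation adds 108 bytes of overhead, so a standard 1500-byte physical MTU leaves 1500 - 108 = 1392 bytes, the default Network Gateway MTU.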
The subnet extension over VTEP allows VMs to communicate with multiple remote sites or Availability Zones (AZ) that may be third party (non-Nutanix) networks, or datacenters. It also provides the flexibility of adding more remote AZs to the same VTEP-based extended Layer 2 subnet. Examples of compatible VTEP gateways are switches from Cisco, Juniper, Arista, and others that support plain VXLAN VTEP termination.
To extend a subnet over VTEP across multiple availability zones (AZs) or third party datacenters, do the following.
On the Subnet Extensions tab, click Create Subnet Extension > To A Third Party Data-Center .
In the Subnets dashboard, select the subnet you want to extend and click Actions > Extend > To A Third Party Data-Center .
In the Subnets dashboard, click the subnet you want to extend. On the subnet details page, click Extend > To A Third Party Data-Center .
Parameters | Description and Value |
---|---|
Local | |
Availability Zone | Displays the name of the paired availability zone at the local AZ. |
Subnet Type | Select the type of the subnet - VLAN or Overlay that you are extending. |
Cluster | Select the name of the cluster in the local AZ that the subnet is configured for. |
Subnet | Select the name of the subnet at the local AZ for the network. The VLAN ID and the IPAM type (managed or unmanaged) are displayed in the box below the Subnet field. |
Gateway IP Address | Enter the gateway IP address of the subnet you want to extend. Ensure that you provide the IP address in <IP-address/network-prefix> format. For example, if the gateway IP is 10.20.20.1 in a /24 subnet, then provide the gateway IP address as 10.20.20.1/24. Note: For an unmanaged network, enter the gateway IP address of the created subnet. |
Local IP Address | Enter a unique and available (unused) IP address from the subnet provided in Subnet . |
Local VTEP Gateway | Select the local VTEP gateway you created on the local AZ. See Creating a Network Gateway for more information about creating a local VTEP gateway. |
Remote | |
Remote VTEP Gateway | Select the remote VTEP gateway you created on the local AZ. See Creating a Network Gateway for more information about creating a remote VTEP gateway. |
Connection Properties | |
VxLAN Network Identifier (VNI) | Enter a unique number from the range 0-16777215 as VNI. Ensure that this number is not reused anywhere in the networks that the Prism Central and Cluster are a part of. |
MTU | The default MTU is 1392 to account for 108 bytes of overhead and the standard physical MTU of 1500 bytes. VPC Geneve encapsulation requires 58 bytes and VXLAN encapsulation requires 50 bytes. However, you can enter any valid MTU value for the network, taking this overhead into account. For example, if the physical network MTU and vs0 MTU are 1600 bytes, the Network Gateway MTU can be set to 1492 to account for 108 bytes of overhead. Ensure that the MTU value does not exceed the MTU of the AHV Host interface and all the network interfaces between the local and remote AZs. |
You can update a subnet extension that extends across AZs using the Update Subnet Extension Across Availability Zones or the Update Subnet Extension To A Third Party data center dialog box. The Update Subnet Extension Across Availability Zones or the Update Subnet Extension To A Third Party data center dialog box has the same parameters and fields as the Create Subnet Extension Across Availability Zones or the Create Subnet Extension To A Third Party data center dialog box, respectively.
Based on the type of the subnet extension that you want to modify, refer to the following:
Perform the following procedure to remove the subnet extension. This procedure deletes the extended subnet between the two Availability Zones (AZs) or between one Nutanix AZ and one or more third party subnets. Deleting the subnet extension does not automatically remove the network gateways or VPN connections that may have automatically been created by the Subnet Extension wizard. You need to separately delete these entities created automatically when the subnet was extended.
Product Release Date: 2022-09-28
Last updated: 2022-09-28
For more information about Foundation 5.3.1 open source licensing details, see Open Source Licenses for Foundation 5.3.1.
For more information about Foundation 5.3 open source licensing details, see Open Source Licenses for Foundation 5.3.
Product Release Date: 2022-09-28
Last updated: 2022-09-28
For more information about Foundation Platforms Submodule 2.12.1 open source licensing details, see Open Source Licenses for Foundation Platforms 2.12.1 .
For more information about Foundation Platforms Submodule 2.12 open source licensing details, see Open Source Licenses for Foundation Platforms 2.12 .
Last updated: 2022-02-21
For Frame documentation, see https://docs.frame.nutanix.com/
Last updated: 2022-11-24
Karbon Platform Services is a Kubernetes-based multicloud platform as a service that enables rapid development and deployment of microservice-based applications. These applications can range from simple stateful containerized applications to complex web-scale applications across any cloud.
In its simplest form, Karbon Platform Services consists of a Service Domain encapsulating a project, application, and services infrastructure, and other supporting resources. It also incorporates project and administrator user access and cloud and data pipelines to help converge edge and cloud data.
With Karbon Platform Services, you can:
This data can be stored at the Service Domain or published to the cloud. You can then create intelligent applications using data connectors and machine learning modules to consume the collected data. These applications can run on the Service Domain or the cloud where you have pushed collected data.
Nutanix provides the Service Domain initially as a VM appliance hosted in an AOS AHV or ESXi cluster. You manage infrastructure, resources, and Karbon Platform Services capabilities in a console accessible through a web browser.
As part of initial and ongoing configuration, you can define two user types: an infrastructure administrator and a project user . The cloud management console and user experience help create a more intuitive experience for infrastructure administrators and project users.
Karbon Platform Services includes these ready-to-use built-in services, which provide an advantage over self-managed services:
These services are enabled by default on each Service Domain. All services now have monitoring and status capabilities.
Ingress controller configuration and management is now available from the cloud management console (as well as from the Karbon Platform Services kps command line). Options to enable and disable the Ingress controller are available in the user interface.
Traefik or NGINX Ingress. Provides content-based routing, load balancing, and SSL/TLS termination. If your project requires Ingress controller routing, you can enable either the open source Traefik router or the NGINX Ingress controller on your Service Domain. You can only enable one Ingress controller per Service Domain.
Istio. Provides traffic management, secure connection, and telemetry collection for your applications.
Karbon Platform Services allows and defines two user types: an infrastructure administrator and a project user. An infrastructure administrator can create both user types.
Infrastructure administrator creates and administers most aspects of Karbon Platform Services. The infrastructure administrator provisions physical resources, Service Domains, services, data sources, and public cloud profiles. This administrator allocates these resources by creating projects and assigning project users to them. This user has create/read/update/delete (CRUD) permissions for:
When an infrastructure administrator creates a project and then adds themselves as a user for that project, they are assigned the roles of project user and infrastructure administrator for that project.
When an infrastructure administrator adds another infrastructure administrator as a user to a project, that administrator is assigned the project user role for that project.
A project user can view and use projects created by the infrastructure administrator. This user can view everything that an infrastructure administrator assigns to a project. Project users can utilize physical resources, as well as deploy and monitor data pipelines and Kubernetes applications.
The project user has project-specific create/read/update/delete (CRUD) permissions: the project user can create, read, update, and delete the following and associate it with an existing project:
The web browser-based Karbon Platform Services cloud management console enables you to manage infrastructure and related projects, with specific management capability dependent on your role (infrastructure administrator or project user).
You can log on with your My Nutanix or local user credentials.
The default view for an infrastructure administrator is the Dashboard. Click the menu button in the view to expand and display all available pages in this view.
The default view for a project user is the Dashboard.
After you log on to the cloud management console, you are presented with the main Dashboard page as a role-specific landing page. You can also show this information at any time by clicking Dashboard under the main menu.
Each Karbon Platform Services element (Service Domain, function, data pipeline, and so on) includes a dashboard page that includes information about that element. It might also include information about elements associated with that element.
The element dashboard view also enables you to manage that element. For example, click Projects and select a project. Click Kubernetes Apps and click an application in the list. The application dashboard is displayed along with an Edit button.
To delete a Service Domain that does not have any associated data sources, click Infrastructure > Service Domains, select a Service Domain from the list, then click Remove. Deleting a multinode Service Domain deletes all nodes in that Service Domain.
The Karbon Platform Services management console includes a Quick Start menu next to your user name. Depending on your role (infrastructure administrator or project user), you can quickly create infrastructure or apps and data. Scroll down to see items you can add for use with projects.
These tasks assume you have already done the following. Ensure that any network-connected devices are assigned static IP addresses.
The Quick Start Menu lists the common onboarding tasks for the infrastructure administrator. It includes links to infrastructure-related resource pages. You can also go directly to any infrastructure resource from the Infrastructure menu item. As the infrastructure administrator, you need to create the following minimum infrastructure.
Create and deploy a Service Domain cluster that consists of a single node.
To add (that is, create and deploy) a multinode Service Domain consisting of three or more nodes, see Manage a Multinode Service Domain.
If you deploy the Service Domain node as a VM, the procedure described in Creating and Starting the Service Domain VM can provide the VM ID (serial number) and IP address.
For example, you could set a secret variable key named SD_PASSWORD with a value of passwd1234. For an example of how to use existing environment variables for a Service Domain in application YAML, see Using Service Domain Environment Variables - Example. See also Configure Service Domain Environment Variables.
Create categories of grouped attributes you can specify when you create a data source or pipeline.
You can add one or more data sources (a collection of sensors, gateways, or other input devices providing data) to associate with a Service Domain.
Each defined data source consists of the following:
Certificates downloaded from the cloud management console have an expiration date 30 years from the certificate creation date. Download the certificate ZIP file each time you create an MQTT data source. Nutanix recommends that you use a unique set of these security certificates and keys for each MQTT data source you add.
When naming entities, up to 200 alphanumeric characters are allowed.
rtsp://username:password@ip-address/. For example: rtsp://userproject2:
In the next step, you will specify one or more streams.
https://index.docker.io/v1/ or registry-1.docker.io/distribution/registry:2.1
https://aws_account_id.dkr.ecr.region.amazonaws.com
As an infrastructure administrator, you can create infrastructure users or project users. Users without My Nutanix credentials log on as a local user.
Each Service Domain image is preconfigured with security certificates and public/private keys.
When you create an MQTT data source, you generate and download a ZIP file that contains X.509 sensor certificate (public key) and its private key and Root CA certificates. Install these components on the MQTT enabled sensor device to securely authenticate the connection between an MQTT enabled sensor device and Service Domain. See your vendor document for your MQTT enabled sensor device for certificate installation details.
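For example, the following minimal sketch shows how a sensor device could publish a message securely using the downloaded files, assuming the open source mosquitto_pub client and the standard MQTT TLS port 8883 (the host, topic, payload, and file names are placeholders, not values from this guide):
$ mosquitto_pub -h service_domain_ip -p 8883 \
    --cafile ca.crt --cert client.crt --key client.key \
    -t sensors/temperature -m '{"temp_c": 21.5}'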
Certificates downloaded from the Karbon Platform Services management console have an expiration date 30 years from the certificate creation date. Download the certificate ZIP file each time you create an MQTT data source. Nutanix recommends that you use a unique set of these security certificates and keys for each MQTT data source you add.
The Karbon Platform Services cloud management console provides a rich administrative control plane to manage your Service Domain and its infrastructure. The topics in this section describe how to create, add, and upgrade a Service Domain.
In the cloud management console, go to Infrastructure > Service Domains to add a VM-based Service Domain. You can also view health status, CPU/Memory/Storage usage, version details, and more information for every service domain.
In the cloud management console, go to Administration > Upgrades to upgrade your existing Service Domains. This page provides you with various levels of control and granularity over your maintenance process. At your convenience, download new versions for all or specific Service Domains and upgrade them with "1-click".
You can now onboard a multinode Service Domain by using Nutanix Karbon as your infrastructure provider to create a Service Domain Kubernetes cluster. To do this, use Karbon on Prism Central with the kps command line and cloud management console Create a Service Domain workflow. See Onboarding a Multinode Service Domain By Using Nutanix Karbon. (You can also continue to use other methods to onboard and create a Service Domain, as described in Onboarding and Managing Your Service Domain.)
For advanced Service Domain settings, the Nutanix public Github repository includes a README file describing how to use the command line and the required YAML configuration file for the cluster.
This public Github repository at https://github.com/nutanix/karbon-platform-services/tree/master/cli also describes how to install the command line and includes sample YAML files for use with applications, data pipelines, data sources, and functions.
The Karbon Platform Services Release Notes include information about any new and updated features for the Service Domain. You can create one or more Service Domains depending on your requirements and manage them from the Karbon Platform Services management console.
The Service Domain is available as a qcow disk image provided by Nutanix for hosting the VM in an AOS cluster running AHV.
The Service Domain is also available as an OVA disk image provided by Nutanix for hosting the VM in a non-Nutanix VMware vSphere ESXi cluster. To deploy a VM from an OVA file on vSphere, see the documentation at the VMware web site describing how to deploy a virtual machine from an OVA file for your ESXi version.
Each Service Domain you create by using these images is configured with X.509 security certificates.
If your network requires that traffic flow through an HTTP/HTTPS proxy, see HTTP/HTTPS Proxy Support for a Service Domain VM.
Download the Service Domain VM image file from the Nutanix Support portal Downloads page. This table describes the available image file types.
Service Domain Image Type | Use |
---|---|
QCOW2 | Image file for hosting the Service Domain VM on an AHV cluster |
OVA | Image file for hosting the Service Domain VM on vSphere. |
EFI RAW compressed file | RAW file in GZipped TAR file format for bare metal installation, where the hosting machine is using an Extensible Firmware Interface (EFI) BIOS |
RAW compressed file | RAW file in GZipped TAR file format for bare metal installation, where the hosting machine is using a legacy or non-EFI BIOS |
AWS RAW uncompressed file | Uncompressed RAW file for hosting the Service Domain on Amazon Web Services (AWS) |
By default, in a single-VM deployment, the Service Domain requires these resources to support Karbon Platform Services features. You can download the Service Domain VM image file from the Nutanix Support portal Downloads page.
VM Resource | Requirement |
---|---|
Environment | AOS cluster running AHV (AOS-version-compatible version), where the Service Domain Infrastructure VM runs as a guest VM, or a VMware vSphere ESXi 6.0 or later cluster, where the Service Domain Infrastructure VM runs as a guest VM (created from an OVA image file provided by Nutanix). The OVA image as provided by Nutanix runs virtual hardware version 11. |
vCPUs | 8 single-core vCPUs |
Memory | 16 GiB memory. You might require more memory as determined by your applications. |
Disk storage | Minimum 200 GB storage. The Service Domain Infrastructure VM image file provides an initial disk size of 100 GiB (gibibytes). You might require more storage as determined by your applications. Before first power-on of the VM, you can increase (but not decrease) the VM disk size. |
GPUs | (Optional) GPUs as required by any application using them |
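If you need more than the initial disk size when hosting the VM on AHV, one way to grow the disk before first power-on is to resize the QCOW2 image before uploading it. This is an illustrative sketch, assuming the qemu-img tool on your workstation and a placeholder file name:
$ qemu-img resize service-domain-image.qcow2 300G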
Item | Requirement/Recommendation |
---|---|
Outbound port | Allow connection for applications requiring outbound connectivity. Starting with Service Domain 2.2.0, Karbon Platform Services retrieves Service Domain package images from several locations; ensure that your firewall or proxy allows outbound Internet access to these locations. Allow outbound port 443 for the websocket connection to the management console and cloud providers. |
NTP | Allow an outbound NTP connection for the Network Time Protocol server. |
HTTPS proxy | The Service Domain Infrastructure VM supports a network configuration that includes an HTTPS proxy. Customers can configure such a proxy as part of a cloud-init based method when deploying Service Domain Infrastructure VMs. |
Service Domain Infrastructure VM static IP address | The Service Domain Infrastructure VM requires a static IP address as provided through: a managed network when hosted on an AOS/AHV cluster; a configured network with one or more configured domain name servers (DNS) and optionally a DHCP server; integrated IP address management (IPAM), which you can enable when creating virtual networks for VMs in the Prism web console; or (optionally) a cloud-init script which specifies network details including a DNS server. |
Miscellaneous | The cloud-init package is included in the Service Domain VM image to enable support for Nutanix Calm and its associated deployment automation features. |
Real Time Streaming Protocol (RTSP) | Port 554 (default) |
Onboarding the Service Domain VM is a three-step process:
If your network requires that traffic flow through an HTTP/HTTPS proxy, see HTTP/HTTPS Proxy Support for a Service Domain VM.
See also:
How to upload the Service Domain VM disk image file on AHV running in an AOS cluster.
This topic describes how to initially install the Service Domain VM on an AOS cluster by uploading the image file. For details about your cluster AOS version and the procedures, see the Prism Web Console Guide.
To deploy a VM from an OVA file on vSphere, see the documentation at the VMware web site describing how to deploy a virtual machine from an OVF or OVA file for your ESXi version.
After uploading the Service Domain VM disk image file, create the Service Domain VM and power it on. After creating the Service Domain VM, note the VM IP address and ID in the VM Details panel. You will need this information to add your Service Domain in the Karbon Platform Services management console Service Domains page.
This topic describes how to create the Service Domain VM on an AOS cluster and power it on. For details about your cluster's AOS version and VM management, see the Prism Web Console Guide.
To deploy a VM from an OVA file on vSphere, see the VMware documentation for your ESXi version.
The most recent requirements for the Service Domain VM are listed in the Karbon Platform Services Release Notes.
If your network requires that traffic flow through an HTTP/HTTPS proxy, you can use a cloud-init script. See HTTP/HTTPS Proxy Support for a Service Domain VM.
$ sudo lshw -c disk
$ cd /media/ubuntu/drive_label
$ sudo tar -xOzvf service-domain-image.raw.tgz | sudo dd of=destination_disk bs=1M status=progress
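After the dd command completes, you can flush pending writes and confirm the target disk before booting from it; an illustrative check:
$ sudo sync
$ lsblk destination_disk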
$ aws s3 mb s3://raw-image-bkt
$ aws s3 cp service-domain-image.raw s3://raw-image-bkt
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": { "Service": "vmie.amazonaws.com" },
"Action": "sts:AssumeRole",
"Condition": {
"StringEquals":{
"sts:Externalid": "vmimport"
}
}
}
]
}
$ aws iam create-role --role-name vmimport --assume-role-policy-document file://trust-policy.json
{
"Version":"2012-10-17",
"Statement":[
{
"Effect": "Allow",
"Action": [
"s3:GetBucketLocation",
"s3:GetObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::raw-image-bkt",
"arn:aws:s3:::raw-image-bkt/*"
]
},
{
"Effect": "Allow",
"Action": [
"s3:GetBucketLocation",
"s3:GetObject",
"s3:ListBucket",
"s3:PutObject",
"s3:GetBucketAcl"
],
"Resource": [
"arn:aws:s3:::raw-image-bkt",
"arn:aws:s3:::raw-image-bkt/*"
]
},
{
"Effect": "Allow",
"Action": [
"ec2:ModifySnapshotAttribute",
"ec2:CopySnapshot",
"ec2:RegisterImage",
"ec2:Describe*"
],
"Resource": "*"
}
]
}
$ aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file://role-policy.json
{
"Description": "Karbon Platform Services Raw Image",
"Format": "RAW",
"UserBucket": {
"S3Bucket": "raw-image-bkt",
"S3Key": "service-domain-image.raw"
}
}
$ aws ec2 import-snapshot --description "exampletext" --disk-container "file://container.json"
$ aws ec2 describe-import-snapshot-tasks --import-task-ids task_id
{
"ImportSnapshotTasks": [
{
"Description": "Karbon Platform Services Raw Image",
"ImportTaskId": "import-task_id",
"SnapshotTaskDetail": {
"Description": "Karbon Platform Services Raw Image",
"DiskImageSize": "disk_size",
"Format": "RAW",
"SnapshotId": "snapshot_ID"
"Status": "completed",
"UserBucket": {
"S3Bucket": "raw-image-bkt",
"S3Key": "service-domain-image.raw"
}
}
}
]
}
$ aws ec2 register-image --virtualization-type hvm \
--name "Karbon Platform Services Service Domain Image" --architecture x86_64 \
--root-device-name "/dev/sda1" --block-device-mappings \
"[{\"DeviceName\": \"/dev/sda1\", \"Ebs\": {\"SnapshotId\": \"snapshot_ID\"}}]"
$ aws ec2 describe-instances --instance-id instance_id --query 'Reservations[].Instances[].[PublicIpAddress]' --output text | sed '$!N;s/\n/ /'
$ cat /config/serial_number.txt
$ route -n
Attach a cloud-init script to configure HTTP/HTTPS proxy server support.
If your network policies require that all HTTP network traffic flow through a proxy server, you can configure a Service Domain to use an HTTP proxy. When you create the service domain VM, attach a cloud-init script with the proxy server details. When you then power on the VM and it fully starts, it will include your proxy configuration.
If you require a secure proxy (HTTPS), use the cloud-init script to upload SSL certificates to the Service Domain VM.
This script creates an HTTP/HTTPS proxy server configuration on the Service Domain VM after you create and start the VM. Note that CACERT_PATH= in the first content spec is optional in this case, as it is already specified in the second path spec.
#cloud-config
#vim: syntax=yaml
write_files:
- path: /etc/http-proxy-environment
content: |
HTTPS_PROXY="http://ip_address:port"
HTTP_PROXY="http://ip_address:port"
NO_PROXY="127.0.0.1,localhost"
CACERT_PATH="/etc/pki/ca-trust/source/anchors/proxy.crt"
- path: /etc/systemd/system/docker.service.d/http-proxy.conf
content: |
[Service]
Environment="HTTP_PROXY=http://ip_address:port"
Environment="HTTPS_PROXY=http://ip_address:port"
Environment="NO_PROXY=127.0.0.1,localhost"
- path: /etc/pki/ca-trust/source/anchors/proxy.crt
content: |
-----BEGIN CERTIFICATE-----
PASTE CERTIFICATE DATA HERE
-----END CERTIFICATE-----
runcmd:
- update-ca-trust force-enable
- update-ca-trust extract
- yum-config-manager --setopt=proxy=http://ip_address:port --save
- systemctl daemon-reload
- systemctl restart docker
- systemctl restart sherlock_configserver
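After the VM starts, you can verify that the proxy configuration was applied. An illustrative check, not an official procedure:
$ cat /etc/http-proxy-environment
$ sudo systemctl show docker --property=Environment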
Create and deploy a Service Domain cluster that consists of three or more nodes.
A multinode Service Domain is a cluster initially consisting of a minimum of three leader nodes. Each node is a single Service Domain VM hosted in an AHV cluster.
Creating and deploying a multinode Service Domain is a three-step process:
The Service Domain image version where Karbon Platform Services introduces this feature is described in the Karbon Platform Services Release Notes.
Starting with new Service Domain version 2.3.0 deployments, high availability support for the Service Domain is now implemented through the Kubernetes API server (kube-apiserver). This support is specific to new multinode Service Domain 2.3.0 deployments. When you create a multinode Service Domain to be hosted in a Nutanix AHV cluster, you must specify a Virtual IP Address (VIP), which is typically the IP address of the first node you add.
Each node requires access to shared storage from an AOS cluster. Ensure that you meet the following requirements to create a storage profile. Adding a Multinode Service Domain requires these details.
On your AOS cluster:
For example, you have upgraded three older single-node Service Domains to a multinode image version. You cannot create a multinode Service Domain from these nodes.
For example, you have upgraded two older single-node Service Domains to a multinode image version and you have a newly created multinode-compatible single node. You cannot add these together to form a new multinode Service Domain.
Create and deploy a Service Domain cluster that consists of three or more nodes.
Starting with new Service Domain version 2.3.0 deployments, high availability support for the Service Domain is now implemented through the Kubernetes API server (kube-apiserver). This support is specific to new multinode Service Domain 2.3.0 deployments.
To enable the HA kube-apiserver support, ensure that the VIP address is part of the same subnet as the Service Domain VMs and the VIP address is unique (that is, has not already been allocated to any VM). Otherwise, the Service Domain will not enable this feature.
Also ensure that the VIP address in this case is not part of any cluster IP address pool range that you have specified when you created a virtual network for guest VMs in the AHV cluster. That is, the VIP address must be outside this IP pool address range. Otherwise, creation of the Service Domain in this case will fail.
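For example (illustrative, with placeholder addresses): if the Service Domain VMs use 10.10.0.11 through 10.10.0.13 on subnet 10.10.0.0/24 and the AHV IP pool spans 10.10.0.50 through 10.10.0.100, a VIP of 10.10.0.20 satisfies both rules. Before creating the Service Domain, you can check that the VIP is unallocated; no replies suggests the address is free:
$ ping -c 3 10.10.0.20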
For example, you could set a secret variable key named SD_PASSWORD with a value of passwd1234. For an example of how to use existing environment variables for a Service Domain in application YAML, see Using Service Domain Environment Variables - Example. See also Configure Service Domain Environment Variables.
Add nodes to an existing multinode Service Domain.
Remove worker nodes from a multinode Service Domain. Any node added to an existing three-node Service Domain is considered a worker node.
You can now onboard a multinode Service Domain by using Nutanix Karbon as your infrastructure provider to create a Service Domain Kubernetes cluster.
For advanced Service Domain settings, the Nutanix public Github repository includes a README file describing how to use the kps command line and the required YAML configuration file for the cluster. This public Github repository at https://github.com/nutanix/karbon-platform-services/tree/master/cli also describes how to install the kps command line and includes sample YAML files for use with applications, data pipelines, data sources, and functions.
This information populates the kps command line options and parameters in the next step.
Your network bandwidth might affect how long it takes to completely download the latest Service Domain version. Nutanix recommends that you perform any upgrades during your scheduled maintenance window.
Upgrade your existing Service Domain VM by using the Upgrades page in the cloud management console. From Upgrades, you can see available updates that you can download and install on one or more Service Domains of your choosing.
Upgrading the Service Domain is a two-step process where you:
Link | Use Case |
---|---|
Service Domains | "1-click" download or upgrade for all upgrade-eligible Service Domains. |
Download and upgrade on all eligible | Use this workflow to download an available version to all Service Domains eligible to be upgraded. You can then decide when you want to upgrade the Service Domain to the downloaded version. See Upgrading All Service Domains. |
Download and upgrade on selected | Use this workflow to download an available version to one or more Service Domains that you select and are eligible to be upgraded. This option appears after you select one or more Service Domains. After downloading an available Service Domain version, upgrade one or more Service Domains when convenient. See Upgrading Selected Service Domains. |
Task History / View Recent History | See Checking Upgrade Task History. View Recent History appears in the Service Domains page list for each Service Domain and shows a status summary. |
A project is an infrastructure, apps, and data collection created by the infrastructure administrator for use by project users.
When an infrastructure administrator creates a project and then adds themselves as a user for that project, they are assigned the roles of project user and infrastructure administrator for that project. When an infrastructure administrator adds another infrastructure administrator as a user to a project, that administrator is assigned the project user role for that project.
A project can consist of:
When you add a Service Domain to a project, all resources such as data sources associated with the Service Domain are available to the users added to the project.
The Projects page lets an infrastructure administrator create a new project and lists projects that administrators can update and view.
For project users, the Projects page lists projects created and assigned by the infrastructure administrator, which project users can view and to which they can add apps and data. Project users can view and update any projects assigned by the administrator with applications, data pipelines, and so on. Project users cannot remove a project.
When you click a project name, the project Summary dashboard is displayed and shows resources in the project.
You can click any of the project resource menu links to edit or update existing resources, or to create and add resources to a project. For example, you can edit an existing data pipeline in the project or create a new one and assign it to the project. Similarly, click Kafka to show details for the Kafka data service associated with the project (see Viewing Kafka Status).
As an infrastructure administrator, create a project. To complete this task, log on to the cloud management console.
Update an existing project. To complete this task, log on to the cloud management console.
As an infrastructure administrator, delete a project. To complete this task, log on to the cloud management console.
The Karbon Platform Services cloud infrastructure provides services that are enabled by default. It also provides access to services that you can enable for your project.
The platform includes these ready-to-use services, which provide an advantage over self-managed services:
Enable or disable services associated with your project.
Kafka is available as a data service through your Service Domain.
The Kafka data service is available for use within a project's applications and data pipelines, running on a Service Domain hosted in your environment. The Kafka service offering from Karbon Platform Services provides the following advantages over a self-managed Kafka service:
Information about application requirements and a sample YAML application file.
Create an application for your project as described in Creating an Application and specify the application attributes and configuration in a YAML file.
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app-deployment
labels:
app: my-app
spec:
replicas: 1
selector:
matchLabels:
app: my-app
template:
metadata:
labels:
app: my-app
spec:
containers:
- name: my-app
image: some.container.registry.com/myapp:1.7.9
ports:
- containerPort: 80
env:
- name: KAFKA_ENDPOINT
value: {{.Services.Kafka.Endpoint}}
Field Name | Value or Subfield Name | Description |
---|---|---|
kind | Deployment | Specify the resource type. Here, use Deployment. |
metadata | name | Provide a name for your deployment. |
 | labels | Provide at least one label. Here, specify the application name as app: my-app. |
spec | | Define the Kafka service specification. |
 | replicas | Here, 1 to indicate a single Kafka cluster (single Service Domain instance or VM) to keep data synchronized. |
 | selector | Use matchLabels and specify the app name as in labels above. |
 | template | Specify the application name here (my-app), same as the metadata specifications above. |
 | spec | Here, define the specifications for the application using Kafka. |
 | containers | |
 | env | Leave these values as shown. |
Information about data pipeline function requirements.
See Functions and Data Pipelines.
You can specify a Kafka endpoint type in a data pipeline. A data pipeline consists of:
For a data pipeline with a Kafka topic endpoint:
In the cloud management console, you can view Kafka data service status when you use Kafka in an application or as a Kafka endpoint in a data pipeline as part of a project. This task assumes you are logged in to the cloud management console.
The Service Domain supports the Traefik open source router as the default Kubernetes Ingress controller. You can also choose the NGINX ingress controller instead.
An infrastructure admin can enable an Ingress controller for your project through Manage Services as described in Managing Project Services. You can only enable one Ingress controller per Service Domain.
When you include Ingress controller annotations as part of your application YAML file, Karbon Platform Services uses Traefik as the default on-demand controller.
If your deployment requires it, you can alternately use NGINX (ingress-nginx) as a Kubernetes Ingress controller instead of Traefik.
In your application YAML, specify two snippets:
To securely route application traffic with a Service Domain ingress controller, create YAML snippets to define and specify the ingress controller for each Service Domain.
You can only enable and use one Ingress controller per Service Domain.
Create an application for your project as described in Creating an Application to specify the application attributes and configuration in a YAML file. You can include these Service and Secret snippets, Service Domain ingress controller annotations, and certificate information in this app deployment YAML file.
apiVersion: v1
kind: Service
metadata:
name: whoami
annotations:
sherlock.nutanix.com/http-ingress-path: /notls
sherlock.nutanix.com/https-ingress-path: /tls
sherlock.nutanix.com/https-ingress-host: DNS_name
sherlock.nutanix.com/http-ingress-host: DNS_name
sherlock.nutanix.com/https-ingress-secret: whoami
spec:
ports:
- protocol: TCP
name: web
port: 80
selector:
app: whoami
Field Name | Value or Subfield Name | Description |
---|---|---|
kind | Service | Specify the Kubernetes resource type. Here, use Service to indicate that this snippet defines the ingress controller details. |
apiVersion | v1 | Here, the Kubernetes API version. |
metadata | name | Provide an app name to which this controller applies. |
 | annotations | These annotations define the ingress controller encryption type and paths for Karbon Platform Services. |
 | sherlock.nutanix.com/http-ingress-path: /notls | /notls specifies no Transport Layer Security encryption. |
 | sherlock.nutanix.com/https-ingress-path: /tls | /tls specifies Transport Layer Security encryption. |
 | sherlock.nutanix.com/http-ingress-host: DNS_name | Ingress service host path, where the service is bound to port 80. DNS_name is a DNS name you can give to your application. |
 | sherlock.nutanix.com/https-ingress-host: DNS_name | Ingress service host path, where the service is bound to port 443. DNS_name is a DNS name you can give to your application. |
 | sherlock.nutanix.com/https-ingress-secret: whoami | Links the authentication Secret information defined above to this controller. |
spec | | Define the transfer protocol, port type, and port for the application, and a selector to specify the application. |
Use a Secret snippet to specify the certificates used to secure app traffic.
apiVersion: v1
kind: Secret
metadata:
name: whoami
type: kubernetes.io/tls
data:
ca.crt: cert_auth_cert
tls.crt: tls_cert
tls.key: tls_key
Field Name | Value or Subfield Name | Description |
---|---|---|
apiVersion | v1 | Here, the TLS API version. |
kind | Secret | Specify the resource type. Here, use Secret to indicate that this snippet defines the authentication details. |
metadata | name | Provide an app name to which this certification applies. |
type | kubernetes.io/tls | Define the authentication type used to secure the app. Here, kubernetes.io/tls. |
data | ca.crt, tls.crt, tls.key | Add the keys for each certification type: certificate authority certificate (ca.crt), TLS certificate (tls.crt), and TLS key (tls.key). |
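The values in the data section must be base64-encoded. As an illustrative sketch (not part of the official procedure), you can generate a self-signed certificate and produce the encoded values with openssl and base64; for a self-signed certificate, ca.crt can be the same encoded certificate as tls.crt:
$ openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout tls.key -out tls.crt -subj "/CN=DNS_name"
$ base64 -w0 tls.crt
$ base64 -w0 tls.key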
In the cloud management console, you can view Ingress controller status for any controller used as part of a project. This task assumes you are logged in to the cloud management console.
Istio provides secure connection, traffic management, and telemetry.
In the application YAML snippet or file, define the VirtualService and DestinationRules objects. These objects specify traffic routing rules for the recommendation-service app host. If the traffic rules match, traffic flows to the named destination (or a subset/version of it) as defined here.
In this example, traffic is routed to the recommendation-service app host if it is sent from the Firefox browser. The specific policy version (subset) for each host helps you identify and manage routed data.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: recomm-svc
spec:
hosts:
- recommendation-service
http:
- match:
- headers:
user-agent:
regex: .*Firefox.*
route:
- destination:
host: recommendation-service
subset: v2
- route:
- destination:
host: recommendation-service
subset: v1
This DestinationRule YAML snippet defines a load-balancing traffic policy for the policy versions (subsets), where any healthy host can service the request.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: recomm-svc
spec:
host: recommendation-service
trafficPolicy:
loadBalancer:
simple: RANDOM
subsets:
- name: v1
labels:
version: v1
- name: v2
labels:
version: v2
In this YAML snippet, you can split traffic for each subset by specifying a weight of 30 in one case and 70 in the other. You can also weight them evenly by giving each a weight value of 50.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: recomm-svc
spec:
hosts:
- recommendation-service
http:
- route:
- destination:
host: recommendation-service
subset: v2
weight: 30
- destination:
host: recommendation-service
subset: v1
weight: 70
In the cloud management console, you can view Istio service mesh status associated with applications in a project. This task assumes you are logged in to the cloud management console.
The Prometheus service included with Karbon Platform Services enables you to monitor endpoints you define in your project's Kubernetes apps. Karbon Platform Services allows one instance of Prometheus per project.
Prometheus collects metrics from your app endpoints. The Prometheus Deployments dashboard displays Service Domains where the service is deployed, service status, endpoints, and alerts. See Viewing Prometheus Service Status.
You can then decide how to view the collected metrics, through graphs or other Prometheus-supported means. See Create Prometheus Graphs with Grafana - Example.
Setting | Default Value or Description |
---|---|
Frequency interval to collect and store metrics (also known as scrape and store) | Every 60 seconds |
Collection endpoint | /metrics |
Default collection app | collect-metrics |
Data storage retention time | 10 days |
This sample app YAML specifies an app named metricsmatter-sample-app and creates one instance of this containerized app (replicas: 1) from the managed Amazon Elastic Container Registry.
apiVersion: apps/v1
kind: Deployment
metadata:
name: metricsmatter-sample-deployment
spec:
replicas: 1
selector:
matchLabels:
app: metricsmatter-sample-app
template:
metadata:
name: metricsmatter-sample-app
labels:
app: metricsmatter-sample-app
spec:
containers:
- name: metricsmatter-sample-app
imagePullPolicy: Always
image: 1234567890.dkr.ecr.us-west-2.amazonaws.com/app-folder/metricmatter_sample_app:latest
Next, in the same application YAML file, create a Service snippet. Add the default collect-metrics app label to the Service object. When you add app: collect-metrics, Prometheus scrapes the default /metrics endpoint every 60 seconds, with metrics exposed on port 8010.
---
apiVersion: v1
kind: Service
metadata:
name: metricsmatter-sample-service
labels:
app: collect-metrics
spec:
selector:
app: metricsmatter-sample-app
ports:
- name: web
protocol: TCP
port: 8010
Add a ServiceMonitor snippet to the app YAML above to customize the endpoint to scrape and change the interval to collect and store metrics. Make sure you include the Deployment and Service snippets.
Here, change the endpoint to /othermetrics and the collection interval to 15 seconds (15s).
Prometheus discovers all ServiceMonitors in a given namespace (that is, each project app) where it is installed.
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: metricsmatter-sample-app
labels:
app: collect-metrics
spec:
selector:
matchLabels:
app: collect-metrics
endpoints:
- path: /othermetrics
interval: 15s
port: 8010
You can also use endpoint environment variables in an application template for the service and AlertManager.
{{.Services.Prometheus.Endpoint}} defines the service endpoint. {{.Services.AlertManager.Endpoint}} defines a custom Alert Manager endpoint.
Configure Service Domain Environment Variables describes how to use these environment variables.
The Prometheus Deployments dashboard displays Service Domains where the service is deployed, service status, Prometheus endpoints, and alerts.
This example shows how you can set up a Prometheus metrics dashboard with Grafana.
This topic provides examples to help you expose Prometheus endpoints to Grafana, an open-source analytics and monitoring visualization application. You can then view scraped Prometheus metrics graphically.
The first ConfigMap YAML snippet example uses the environment variable {{.Services.Prometheus.Endpoint}} to define the service endpoint. If this YAML snippet is part of an application template created by an infra admin, a project user can then specify these per-Service Domain variables in their application.
The second snippet provides configuration information for the Grafana server web page. The host name in this example is woodkraft2.ntnxdomain.com.
apiVersion: v1
kind: ConfigMap
metadata:
name: grafana-datasources
data:
prometheus.yaml: |-
{
"apiVersion": 1,
"datasources": [
{
"access":"proxy",
"editable": true,
"name": "prometheus",
"orgId": 1,
"type": "prometheus",
"url": "{{.Services.Prometheus.Endpoint}}",
"version": 1
}
]
}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: grafana-ini
data:
grafana.ini: |
[server]
domain = woodkraft2.ntnxdomain.com
root_url = %(protocol)s://%(domain)s:%(http_port)s/grafana
serve_from_sub_path = true
---
This YAML snippet provides a standard deployment specification for Grafana.
apiVersion: apps/v1
kind: Deployment
metadata:
name: grafana
spec:
replicas: 1
selector:
matchLabels:
app: grafana
template:
metadata:
name: grafana
labels:
app: grafana
spec:
containers:
- name: grafana
image: grafana/grafana:latest
ports:
- name: grafana
containerPort: 3000
resources:
limits:
memory: "2Gi"
cpu: "1000m"
requests:
memory: "1Gi"
cpu: "500m"
volumeMounts:
- mountPath: /var/lib/grafana
name: grafana-storage
- mountPath: /etc/grafana/provisioning/datasources
name: grafana-datasources
readOnly: false
- name: grafana-ini
mountPath: "/etc/grafana/grafana.ini"
subPath: grafana.ini
volumes:
- name: grafana-storage
emptyDir: {}
- name: grafana-datasources
configMap:
defaultMode: 420
name: grafana-datasources
- name: grafana-ini
configMap:
defaultMode: 420
name: grafana-ini
---
Define the Grafana Service object to use port 3000 and an Ingress controller to manage access to the service (through woodkraft2.ntnxdomain.com).
apiVersion: v1
kind: Service
metadata:
name: grafana
spec:
selector:
app: grafana
ports:
- port: 3000
targetPort: 3000
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: grafana
labels:
app: grafana
spec:
rules:
- host: woodkraft2.ntnxdomain.com
http:
paths:
- path: /grafana
backend:
serviceName: grafana
servicePort: 3000
You can create intelligent applications to run on the Service Domain infrastructure or the cloud where you have pushed collected data. You can implement application YAML files to use as a template, where you can customize the template by passing existing Categories associated with a Service Domain to it.
You need to create a project with at least one user to create an app.
You can undeploy and deploy any applications that are running on Service Domains or in the cloud. See Deploying and Undeploying a Kubernetes Application.
For Kubernetes apps running as privileged, you might have to specify the Kubernetes namespace where the application is deployed. You can do this by using the {{ .Namespace }} variable, which you can define in the app YAML template file.
In this example, the resource kind of ClusterRoleBinding specifies the {{ .Namespace }} variable as the namespace where the subject ServiceAccount is deployed. As all app resources are deployed in the project namespace, specify the project name as well (here, name: my-sa).
apiVersion: v1
kind: ServiceAccount
metadata:
name: my-sa
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: my-role-binding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: my-cluster-role
subjects:
- kind: ServiceAccount
name: my-sa
namespace: {{ .Namespace }}
Create a Kubernetes application that you can associate with a project.
Update an existing Kubernetes application.
Delete an existing Kubernetes application.
You can undeploy and deploy any applications that are running on Service Domains or in the cloud. You can choose the Service Domains where you want the app to deploy or undeploy. Select the table or tile view on this page by clicking one of the view icons.
Undeploying a Kubernetes app deletes all config objects directly created for the app, including PersistentVolumeClaim data. Your app can create a PersistentVolumeClaim indirectly through StatefulSets. The following points describe scenarios when data is deleted and when data is preserved after you undeploy an app.
The Data Pipelines page enables you to create and view data pipelines, and also see any alerts associated with existing pipelines.
A data pipeline is a path for data that includes:
It also enables you to process and transform captured data for further consumption or processing.
To create a data pipeline, you must have already created or defined at least one of the following:
Create a data pipeline, with a data source or real-time data source as input and infrastructure or external cloud as the output.
Update a data pipeline, including data source, function, and output destination. See also Naming Guidelines.
A function is code used to perform one or more tasks. Script languages include Python, Golang, and Node.js. A script can be as simple as text processing code or it could be advanced code implementing artificial intelligence, using popular machine learning frameworks like Tensorflow.
An infrastructure administrator or project user can create a function, and later can edit or clone it. You cannot edit a function that is used by an existing data pipeline. In this case, you can clone it to make an editable copy.
Edit an existing function. To complete this task, log on to the cloud management console.
Other than the name and description, you cannot edit a function that is in use by an existing data pipeline. In this case, you can clone a function to duplicate it. See Cloning a Function.
Clone an existing function. To complete this task, log on to the cloud management console.
You can create machine learning (ML) models to enable AI inferencing for your projects. The ML Model feature provides a common interface for functions (that is, scripts) or applications to use the ML model Tensorflow runtime environment on the Service Domain.
The Karbon Platform Services Release Notes list currently supported ML model types.
An infrastructure admin can enable the AI Inferencing service for your project through Manage Services as described in Managing Project Services.
You can add multiple models and model versions to a single ML Model instance that you create. In this scenario, multiple client projects can access any model configured in the single ML model instance.
How to delete an ML model.
The ML Models page in the Karbon Platform Services management console shows version and activity status for all models.
A runtime environment is a command execution environment to run applications written in a particular language or associated with a Docker registry or file.
Karbon Platform Services includes standard runtime environments including but not limited to the following. These runtimes are read-only and cannot be edited, updated, or deleted by users. They are available to all projects, functions, and associated container registries.
How to create a user-added runtime environment for use with your project.
https://index.docker.io/v1/ or registry-1.docker.io/distribution/registry:2.1
https://aws_account_id.dkr.ecr.region.amazonaws.com
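For example, to make a custom runtime image available in one of these registries, an illustrative Docker workflow (the registry URL, repository name, and tag are placeholders) is to authenticate, tag, and push:
$ aws ecr get-login-password --region region | docker login --username AWS \
    --password-stdin aws_account_id.dkr.ecr.region.amazonaws.com
$ docker tag my-runtime:latest aws_account_id.dkr.ecr.region.amazonaws.com/my-runtime:latest
$ docker push aws_account_id.dkr.ecr.region.amazonaws.com/my-runtime:latest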
How to edit a user-added runtime environment from the cloud management console. You cannot edit the included read-only runtime environments.
How to remove a user-added runtime environment from the cloud management console. To complete this task, log on to the cloud management console.
Logging provides a consolidated landing page enabling you to collect, forward, and manage logs from selected Service Domains.
From Logging or System Logs > Logging or the summary page for a specific project, you can:
You can also collect logs and stream them to a cloud profile by using the kps command line available from the Nutanix public Github channel https://github.com/nutanix/karbon-platform-services/tree/master/cli. Readme documentation available there describes how to install the command line and includes sample YAML files for use with applications, data pipelines, data sources, and functions.
Access the Audit Log dashboard to view the most recent operations performed by users.
Karbon Platform Services maintains an audit trail of any operations performed by users in the last 30 days and displays them in an Audit Trail dashboard in the cloud management console. To display the dashboard, log on to the console, open the navigation menu, and click Audit Trail.
Click Filters to display operations by Operation Type, User Name, Resource Name, Resource Type, Time Range, and other filter types so you can narrow your search. Filter Operation Type by CREATE, UPDATE, and DELETE actions.
Log Collector examines the selected Service Domains and collects logs and configuration information useful for troubleshooting issues and finding out details about any Service Domain.
Log Collector examines the selected project application and collects logs and configuration information useful for troubleshooting issues and finding out details about an app.
Create, edit, and delete log forwarding policies to help make collection more granular and then forward those Service Domain logs to the cloud.
Create a log collector for log forwarding by using the kps command line.
Nutanix has released the kps command line on its public Github channel. https://github.com/nutanix/karbon-platform-services/tree/master/cli describes how to install the command line and includes sample YAML files for use with applications, data pipelines, data sources, and functions.
Each sample YAML file defines a log collector. Log collectors can be:
See the most up-to-date sample YAML files and descriptions at https://github.com/nutanix/karbon-platform-services/tree/master/cli.
Create a log collector defined in a YAML file:
user@host$ kps create -f infra-logcollector-cloudwatch.yaml
This sample infrastructure log collector collects logs for a specific tenant, then forwards them to a specific cloud profile (for example: AWS CloudWatch ).
To enable AWS CloudWatch log streaming, you must specify awsRegion, cloudwatchStream, and cloudwatchGroup.
kind: logcollector
name: infra-log-name
type: infrastructure
destination: cloudwatch
cloudProfile: cloud-profile-name
awsRegion: us-west-2
cloudwatchGroup: cloudwatch-group-name
cloudwatchStream: cloudwatch-stream-name
filterSourceCode: ""
Field Name | Value or Subfield Name | Description |
---|---|---|
kind | logcollector | Specify the resource type |
name | infra-log-name | Specify the unique log collector name |
type | infrastructure | Log collector for infrastructure |
destination | cloudwatch | Cloud destination type |
cloudProfile | cloud-profile-name | Specify an existing Karbon Platform Services cloud profile |
awsRegion | For example, us-west-2 or monitoring.us-west-2.amazonaws.com | Valid AWS region name or CloudWatch endpoint fully qualified domain name |
cloudwatchGroup | cloudwatch-group-name | Log group name |
cloudwatchStream | cloudwatch-stream-name | Log stream name |
filterSourceCode | | Specify the log conversion code |
This sample project log collector collects logs for a specific tenant, then forwards them to a specific cloud profile (for example: AWS CloudWatch ).
kind: logcollector
name: project-log-name
type: project
project: project-name
destination: cloud-destination type
cloudProfile: cloud-profile-name
awsRegion: us-west-2
cloudwatchGroup: cloudwatch-group-name
cloudwatchStream: cloudwatch-stream-name
filterSourceCode: ""
Field Name | Value or Subfield Name | Description |
---|---|---|
kind | logcollector | Specify the resource type |
name | project-log-name | Specify the unique log collector name |
type | project | Log collector for a specific project |
project | project-name | Specify the project name |
destination | cloud-destination-type | Cloud destination type such as CloudWatch |
cloudProfile | cloud-profile-name | Specify an existing Karbon Platform Services cloud profile |
awsRegion | For example, us-west-2 or monitoring.us-west-2.amazonaws.com | Valid AWS region name or CloudWatch endpoint fully qualified domain name |
cloudwatchGroup | cloudwatch-group-name | Log group name |
cloudwatchStream | cloudwatch-stream-name | Log stream name |
filterSourceCode | | Specify the log conversion code |
Real-Time Log Monitoring built into Karbon Platform Services provides real-time log monitoring and lets you view application and data pipeline log messages securely in real time.
Viewing the most recent log messages as they occur helps you see and troubleshoot application or data pipeline operations. Messages stream securely over an encrypted channel and are viewable only by authenticated clients (such as an existing user logged on to the Karbon Platform Services cloud platform).
The cloud management console shows the most recent log messages, up to 2 MB. To get the full logs, collect and then download the log bundles by Running Log Collector - Service Domains.
View the most recent real-time logs for applications and data pipelines.
The Karbon Platform Services real-time log monitoring console is a terminal-style display that streams log entries as they occur.
After you do the steps in Displaying Real-Time Logs, a terminal-style window is displayed in one or more tabs. Each tab is streaming log messages and shows the most recent 2 MB of log messages from a container associated with your data pipeline function or application.
Your application or data pipeline function generates the log messages. That is, the console shows log messages that you have written into your application or function.
If your Karbon Platform Services Service Domain is connected and your application or function is not logging anything, the console might show a No Logs message. In this case, the message means that the application or function is idle and not generating any log messages.
You might see one or more error messages in the following cases. As a result, Real-Time Log Monitoring cannot retrieve any logs.
API keys simplify authentication when you use the Karbon Platform Services API by enabling you to manage your keys from the Karbon Platform Services management console. This topic also describes API key guidelines.
As a user (infrastructure or project), you can manage up to two API keys through the Karbon Platform Services management console. After logging on to the management console, click your user name in the management console, then click Manage API Keys to create, disable, or delete these keys.
Read more about the Karbon Platform Services API at nutanix.dev. For Karbon Platform Services Developers describes related information and links to resources for Karbon Platform Services developers.
Example API request using an API key.
After you create an API key, use it with your Karbon Platform Services API HTTPS Authorization requests. In the request, specify an Authorization header including Bearer and the key you generated and copied from the Karbon Platform Services management console.
For example, here is a Node.js code snippet:
var http = require("https");

var options = {
  "method": "GET",
  "hostname": "karbon.nutanix.com",
  "port": null,
  "path": "/v1.0/applications",
  "headers": {
    "authorization": "Bearer API_key"
  }
};

// Send the request and print the response body.
var req = http.request(options, function (res) {
  var chunks = [];
  res.on("data", function (chunk) { chunks.push(chunk); });
  res.on("end", function () { console.log(Buffer.concat(chunks).toString()); });
});
req.end();
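Equivalently, you can exercise the same endpoint with curl (an illustrative request; API_key is the key you generated):
$ curl -H "Authorization: Bearer API_key" https://karbon.nutanix.com/v1.0/applications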
Create one or more API keys through the Karbon Platform Services management console.
Karbon Platform Services provides limited secure shell (SSH) access to your cloud-connected Service Domain to manage Kubernetes pods.
The Karbon Platform Services cloud management console provides limited secure shell (SSH) access to your cloud-connected Service Domain to manage Kubernetes pods. SSH Service Domain access enables you to run Kubernetes kubectl commands to help you with application development, debugging, and pod troubleshooting.
As Karbon Platform Services is secure by design, dynamically generated public/private key pairs with a default expiration of 30 minutes secure your SSH connection. When you start an SSH session from the cloud management console, you automatically log on as user kubeuser.
Infrastructure administrators have SSH access to Service Domains. Project users do not have access.
Access a Service Domain through SSH to manage Kubernetes pods with kubectl CLI commands. This feature is disabled by default. To enable this feature, contact Nutanix Support.
Use kubectl commands to manage Kubernetes pods on the Service Domain.
kubeuser@host$ kubectl get pods   # list pods in your namespace
kubeuser@host$ kubectl get services   # list services
kubeuser@host$ kubectl logs pod_name   # view logs for a pod
kubeuser@host$ kubectl exec pod_name command_name   # run a command in a pod
kubeuser@host$ kubectl exec -it pod_name --container container_name -- /bin/sh   # open an interactive shell in a container
The Alerts page and the Alerts Dashboard panel show any alerts triggered by Karbon Platform Services depending on your role.
To see alert details:
Click Filters to sort the alerts by:
An Alert link is available on each Apps & Data and Infrastructure page.
Information and links to resources for Karbon Platform Services developers.
This section contains information about Karbon Platform Services development.
The Karbon Platform Services public Github repository https://github.com/nutanix/karbon-platform-services includes sample application YAML files, instructions describing external client access to services, Karbon Platform Services kps CLI samples, and so on.
Nutanix has released the kps command line on its public Github repository. The repository page at https://github.com/nutanix/karbon-platform-services/tree/master/cli describes how to install the command line and includes sample YAML files for use with applications, data pipelines, data sources, and functions.
Karbon Platform Services supports the Traefik open source router as the default Kubernetes Ingress controller and NGINX (ingress-nginx) as a Kubernetes Ingress controller. For full details, see https://github.com/nutanix/karbon-platform-services/tree/master/services/ingress.
Kafka is available as a data service through your Service Domain. Clients can manage, publish, and subscribe to topics using the native Kafka protocol. Data pipelines can use Kafka as a destination. Applications can also use a Kafka client of their choice to access the Kafka data service.
For full details, see https://github.com/nutanix/karbon-platform-services/tree/master/services/kafka. See also Kafka as a Service.
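To make this concrete, the following minimal sketch publishes one message to a topic from an application container. It assumes the kafka-python client package, a KAFKA_ENDPOINT environment variable populated from the {{.Services.Kafka.Endpoint}} template variable in the app YAML, and a placeholder topic name; these specifics are illustrative, not part of the Kafka service documentation itself.

# Minimal sketch: publish to the Service Domain Kafka data service.
# Assumes the kafka-python package; the endpoint fallback and topic name are placeholders.
import os
from kafka import KafkaProducer

endpoint = os.environ.get("KAFKA_ENDPOINT", "localhost:9092")  # hypothetical fallback
producer = KafkaProducer(bootstrap_servers=endpoint)
producer.send("sensor-data", b"hello from a Karbon Platform Services app")  # placeholder topic
producer.flush()  # block until the message is delivered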
Enable a container application to run with elevated privileges.
For information about installing the kps command line, see For Karbon Platform Services Developers.
Karbon Platform Services enables you to develop an application which requires elevated privileges to run successfully. By using the kps command line, you can set your Service Domain to enable an application running in a container to run in privileged mode.
Configure your Service Domain to enable a container application to run with elevated privileges.
user@host$ kps config create-context context_name --email user_email_address --password password   # create a context and log on
user@host$ kps config get-contexts   # verify the new context
user@host$ kps get svcdomain -o yaml   # list Service Domains and their names
user@host$ kps update svcdomain svc_domain_name --set-privileged   # enable privileged mode
Successfully updated Service Domain: svc_domain_name
The command sets privileged to true.
user@host$ kps get svcdomain svc_domain_name -o yaml
kind: edge
name: svc_domain_name
connected: true
.
.
.
profile:
privileged: true
enableSSH: true
effectiveProfile:
privileged: true
enableSSH: true
In the effectiveProfile section of the output, privileged set to true indicates that Nutanix Support has enabled this feature. If the setting is false, contact Nutanix Support to enable this feature. In this example, Nutanix has also enabled SSH access to this Service Domain (see Secure Shell (SSH) Access to Service Domains in the Karbon Platform Services Administration Guide).
After elevating privilege as described in Setting Privileged Mode, elevate the application privilege. This sample enables USB device access for an application running in a container on an elevated Service Domain.
Add a tag similar to the following in the Deployment section in your application YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
name: usb
annotations:
sherlock.nutanix.com/privileged: "true"
The following complete example defines the installation and keyboard-reading scripts in a ConfigMap and mounts them into the privileged container:
apiVersion: v1
kind: ConfigMap
metadata:
name: usb-scripts
data:
entrypoint.sh: |-
apk add python3
apk add py3-pip
apk add libusb
pip3 install pyusb
echo Read from USB keyboard
python3 read-usb-keyboard.py
read-usb-keyboard.py: |-
import usb.core
import usb.util
import time
USB_IF = 0 # Interface
USB_TIMEOUT = 10000 # Timeout in MS
USB_VENDOR = 0x627
USB_PRODUCT = 0x1
# Find keyboard
dev = usb.core.find(idVendor=USB_VENDOR, idProduct=USB_PRODUCT)
endpoint = dev[0][(0,0)][0]
try:
dev.detach_kernel_driver(USB_IF)
except Exception as err:
print(err)
usb.util.claim_interface(dev, USB_IF)
while True:
try:
control = dev.read(endpoint.bEndpointAddress, endpoint.wMaxPacketSize, USB_TIMEOUT)
print(control)
except Exception as err:
print(err)
time.sleep(0.01)
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: usb
annotations:
sherlock.nutanix.com/privileged: "true"
spec:
replicas: 1
selector:
matchLabels:
app: usb
template:
metadata:
labels:
app: usb
spec:
terminationGracePeriodSeconds: 0
containers:
- name: alpine
image: alpine
volumeMounts:
- name: scripts
mountPath: /scripts
command:
- sh
- -c
- cd /scripts && ./entrypoint.sh
volumes:
- name: scripts
configMap:
name: usb-scripts
defaultMode: 0766
Define environment variables for an individual Service Domain. After defining them, any Kubernetes app that specifies that Service Domain can access them as part of a container spec in the app YAML.
As an infrastructure administrator, you can set environment variables and associated values for each Service Domain, which are available for use in Kubernetes apps. For example:
As a project user, you can then specify these per-Service Domain variables set by the infra admin in your app. If you do not include the variable name in your app YAML file but you pass it as a variable to run in your app, Karbon Platform Services can inject this variable value.
How to set environment variables for a Service Domain.
user@host$ kps config create-context context_name --email user_email_address --password password   # create a context and log on
user@host$ kps config get-contexts   # verify the new context
user@host$ kps get svcdomain -o yaml   # list Service Domains and their names
For a Service Domain named my-svc-domain, for example, set the Service Domain environment variable. In this example, set a secret variable named SD_PASSWORD with a value of passwd1234.
user@host$ kps update svcdomain my-svc-domain --set-env '{"SD_PASSWORD":"passwd1234"}'
user@host$ kps get svcdomain my-svc-domain -o yaml
kind: edge
name: my-svc-domain
connected: true
.
.
.
env: '{"SD_PASSWORD": "passwd1234"}'
To change a variable, run the kps update svcdomain my-svc-domain --set-env '{"variable_name": "variable_value"}' command.
To clear all environment variables from a Service Domain, or to clear a specific variable, run one of the following commands.
user@host$ kps update svcdomain svc_domain_name --unset-env   # clear all variables
user@host$ kps update svcdomain svc_domain_name --unset-env '{"variable_name":"variable_value"}'   # clear a specific variable
Example: how to use existing environment variables for a Service Domain in application YAML.
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app-deployment
labels:
app: my-app
spec:
replicas: 1
selector:
matchLabels:
app: my-app
template:
metadata:
labels:
app: my-app
spec:
containers:
- name: my-app
image: some.container.registry.com/myapp:1.7.9
ports:
- containerPort: 80
env:
- name: KAFKA_ENDPOINT
value: some.kafka.endpoint
- name: KAFKA_KEY
value: placeholder
command:
- sh
- -c
- "exec node index.js $(KAFKA_KEY)"
Logically grouped Service Domain, data sources, and other items. Applying a category to an entity applies any values and attributes associated with the category to the entity.
Built-in Karbon Platform Services platform feature to publish data to a public cloud like Amazon Web Services or Google Cloud Platform. Requires a customer-owned secured public cloud account and configuration in the Karbon Platform Services management console.
Cloud provider service account (Amazon Web Services, Google Cloud Platform, and so on) where acquired data is transmitted for further processing.
Credentials and location of the Docker container registry hosted on a cloud provider service account. Can also be an existing cloud profile.
Path for data that includes input, processing, and output blocks. Enables you to process and transform captured data for further consumption or processing.
Data service such as Kafka as a Service or Real-Time Stream Processing as a Service.
A collection of sensors, gateways, or other input devices to associate with a node or Service Domain (previously known as an edge). Enables you to manage and monitor sensor integration and connectivity.
Minimally consists of a node (also known as an edge device) and a data source: any location (hospital, parking lot, retail store, oil rig, factory floor, and so on) where sensors or other input devices are installed and collecting data. Typical sensors measure (temperature, pressure, audio, and so on) or stream (for example, an IP-connected video camera).
Code used to perform one or more tasks. A script can be as simple as text processing code or it could be advanced code implementing artificial intelligence, using popular machine learning frameworks like Tensorflow.
User who creates and administers most aspects of Karbon Platform Services. The infrastructure administrator provisions physical resources, Service Domains, data sources and public cloud profiles. This administrator allocates these resources by creating projects and assigning project users to them.
A collection of infrastructure (Service Domain, data source, project users) plus code and data (Kubernetes apps, data pipelines, functions, run-time environments), created by the infrastructure administrator for use by project users.
User who views and uses projects created by the infrastructure administrator. This user can view everything that an infrastructure administrator assigns to a project. Project users can utilize physical resources, as well as deploy and monitor data pipelines and applications. This user has project-specific CRUD permissions: the project user can create, read, update, and delete assigned applications, scripts, data pipelines, and other project users.
A run-time environment is a command execution environment to run applications written in a particular language or associated with a Docker registry or file.
Intelligent Platform as a Service (PaaS) providing the Karbon Platform Services Service Domain infrastructure (consisting of a full software stack combined with a hardware device). It enables customers to deploy intelligent applications (powered by AI/artificial intelligence) to process and transform data ingested by sensors. This data can be published selectively to public clouds.
Browser-based console where you can manage the Karbon Platform Services platform and related infrastructure, depending on your role (infrastructure administrator or project user).
Software as a Service (SaaS)/Platform as a Service (PaaS) based management platform and cloud IoT services. Includes the Karbon Platform Services management console.
Karbon Platform Services is a Kubernetes-based multicloud platform as a service that enables rapid development and deployment of microservice-based applications. These applications can range from simple stateful containerized applications to complex web-scale applications across any cloud.
In its simplest form, Karbon Platform Services consists of a Service Domain encapsulating a project, application, and services infrastructure, and other supporting resources. It also incorporates project and administrator user access and cloud and data pipelines to help converge edge and cloud data.
With Karbon Platform Services, you can:
This data can be stored at the Service Domain or published to the cloud. You can then create intelligent applications using data connectors and machine learning modules to consume the collected data. These applications can run on the Service Domain or the cloud where you have pushed collected data.
Nutanix provides the Service Domain initially as a VM appliance hosted in an AOS AHV or ESXi cluster. You manage infrastructure, resources, and Karbon Platform Services capabilities in a console accessible through a web browser.
As part of initial and ongoing configuration, you can define two user types: an infrastructure administrator and a project user. The cloud management console and user experience help create a more intuitive experience for infrastructure administrators and project users.
A project user can view and use projects created by the infrastructure administrator. This user can view everything that an infrastructure administrator assigns to a project. Project users can utilize physical resources, as well as deploy and monitor data pipelines and Kubernetes applications.
Karbon Platform Services allows and defines two user types: an infrastructure administrator and a project user. An infrastructure administrator can create both user types.
The project user has project-specific create/read/update/delete (CRUD) permissions: the project user can create, read, update, and delete the following and associate it with an existing project:
When an infrastructure administrator creates a project and then adds themselves as a user for that project, they are assigned the roles of project user and infrastructure administrator for that project.
When an infrastructure administrator adds another infrastructure administrator as a user to a project, that administrator is assigned the project user role for that project.
The web browser-based Karbon Platform Services cloud management console enables you to manage infrastructure and related projects, with specific management capability dependent on your role (infrastructure administrator or project user).
You can log on with your My Nutanix or local user credentials.
The default view for a project user is the Dashboard.
After you log on to the cloud management console, you are presented with the main Dashboard page as a role-specific landing page. You can also show this information at any time by clicking Dashboard under the main menu.
Each Karbon Platform Services element (Service Domain, function, data pipeline, and so on) includes a dashboard page that includes information about that element. It might also include information about elements associated with that element.
The element dashboard view also enables you to manage that element. For example, click Projects and select a project. Click Kubernetes Apps and click an application in the list. The application dashboard is displayed along with an Edit button.
The Karbon Platform Services management console includes a Quick Start menu next to your user name. Depending on your role (infrastructure administrator or project user), you can quickly create infrastructure or apps and data. Scroll down to see items you can add for use with projects.
The Quick Start Menu lists the common onboarding tasks for the project user. It includes links to project resource pages. You can also go directly to any project resource from the Apps & Data menu item.
As the project user, you can update a project by creating the following items.
If any Getting Started item shows Pending, the infrastructure administrator has not added you to that entity (like a project or application) or you need to create an entity (like an application).
To get started after logging on to the cloud management console, see Projects.
A project is an infrastructure, apps, and data collection created by the infrastructure administrator for use by project users.
When an infrastructure administrator creates a project and then adds themselves as a user for that project, they are assigned the roles of project user and infrastructure administrator for that project. When an infrastructure administrator adds another infrastructure administrator as a user to a project, that administrator is assigned the project user role for that project.
A project can consist of:
When you add a Service Domain to a project, all resources such as data sources associated with the Service Domain are available to the users added to the project.
The Projects page lets an infrastructure administrator create a new project and lists projects that administrators can update and view.
For project users, the Projects page lists projects created and assigned by the infrastructure administrator, where they can view projects and add apps and data. Project users can view and update any projects assigned by the administrator with applications, data pipelines, and so on. Project users cannot remove a project.
When you click a project name, the project Summary dashboard is displayed and shows resources in the project.
You can click any of the project resource menu links to edit or update existing resources, or create and add resources to a project. For example, you can edit an existing data pipeline in the project or create a new one and assign it to the project. Similarly, click Kafka to show details for the Kafka data service associated with the project (see Viewing Kafka Status).
The Karbon Platform Services cloud infrastructure provides services that are enabled by default. It also provides access to services that you can enable for your project.
The platform includes these ready-to-use services, which provide an advantage over self-managed services:
Kafka is available as a data service through your Service Domain.
The Kafka data service is available for use within a project's applications and data pipelines, running on a Service Domain hosted in your environment. The Kafka service offering from Karbon Platform Services provides the following advantages over a self-managed Kafka service:
Information about application requirements and sample YAML application file
Create an application for your project as described in Creating an Application and specify the application attributes and configuration in a YAML file.
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app-deployment
labels:
app: my-app
spec:
replicas: 1
selector:
matchLabels:
app: my-app
template:
metadata:
labels:
app: my-app
spec:
containers:
- name: my-app
image: some.container.registry.com/myapp:1.7.9
ports:
- containerPort: 80
env:
- name: KAFKA_ENDPOINT
value: {{.Services.Kafka.Endpoint}}
Field Name | Value or Subfield Name | Description
---|---|---
kind | Deployment | Specify the resource type. Here, use Deployment.
metadata | name | Provide a name for your deployment.
 | labels | Provide at least one label. Here, specify the application name as app: my-app.
spec | | Define the Kafka service specification.
replicas | | Here, 1 to indicate a single Kafka cluster (single Service Domain instance or VM) to keep data synchronized.
selector | | Use matchLabels and specify the app name as in labels above.
template | | Specify the application name here (my-app), same as the metadata specifications above.
spec | | Here, define the specifications for the application using Kafka.
containers | | Define the container name, image, and ports.
env | | Leave these values as shown.
Information about data pipeline function requirements.
See Functions and Data Pipelines.
You can specify a Kafka endpoint type in a data pipeline. A data pipeline consists of:
For a data pipeline with a Kafka topic endpoint:
In the cloud management console, you can view Kafka data service status when you use Kafka in an application or as a Kafka endpoint in a data pipeline as part of a project. This task assumes you are logged in to the cloud management console.
The Service Domain supports the Traefik open source router as the default Kubernetes Ingress controller. You can also choose the NGINX ingress controller instead.
An infrastructure admin can enable an Ingress controller for your project through Manage Services as described in Managing Project Services. You can only enable one Ingress controller per Service Domain.
When you include Ingress controller annotations as part of your application YAML file, Karbon Platform Services uses Traefik as the default on-demand controller.
If your deployment requires it, you can alternately use NGINX (ingress-nginx) as a Kubernetes Ingress controller instead of Traefik.
In your application YAML, specify two snippets: a Service snippet that defines the ingress controller annotations and a Secret snippet that provides the certificate information.
To securely route application traffic with a Service Domain ingress controller, create YAML snippets to define and specify the ingress controller for each Service Domain.
You can only enable and use one Ingress controller per Service Domain.
Create an application for your project as described in Creating an Application to specify the application attributes and configuration in a YAML file. You can include these Service and Secret snippets, with the Service Domain ingress controller annotations and certificate information, in this app deployment YAML file.
apiVersion: v1
kind: Service
metadata:
name: whoami
annotations:
sherlock.nutanix.com/http-ingress-path: /notls
sherlock.nutanix.com/https-ingress-path: /tls
sherlock.nutanix.com/https-ingress-host: DNS_name
sherlock.nutanix.com/http-ingress-host: DNS_name
sherlock.nutanix.com/https-ingress-secret: whoami
spec:
ports:
- protocol: TCP
name: web
port: 80
selector:
app: whoami
Field Name | Value or Subfield Name | Description
---|---|---
kind | Service | Specify the Kubernetes service. Here, use Service to indicate that this snippet defines the ingress controller details.
apiVersion | v1 | Here, the Kubernetes API version.
metadata | name | Provide an app name to which this controller applies.
 | annotations | These annotations define the ingress controller encryption type and paths for Karbon Platform Services.
 | sherlock.nutanix.com/http-ingress-path: /notls | /notls specifies no Transport Layer Security encryption.
 | sherlock.nutanix.com/https-ingress-path: /tls | /tls specifies Transport Layer Security encryption.
 | sherlock.nutanix.com/http-ingress-host: DNS_name | Ingress service host path, where the service is bound to port 80. DNS_name is a DNS name you can give to your application.
 | sherlock.nutanix.com/https-ingress-host: DNS_name | Ingress service host path, where the service is bound to port 443. DNS_name is a DNS name you can give to your application.
 | sherlock.nutanix.com/https-ingress-secret: whoami | Links the authentication Secret information defined in the Secret snippet to this controller.
spec | | Define the transfer protocol, port type, and port for the application.
selector | | A selector to specify the application.
Use a Secret snippet to specify the certificates used to secure app traffic.
apiVersion: v1
kind: Secret
metadata:
name: whoami
type: kubernetes.io/tls
data:
ca.crt: cert_auth_cert
tls.crt: tls_cert
tls.key: tls_key
Field Name | Value or Subfield Name | Description
---|---|---
apiVersion | v1 | Here, the TLS API version.
kind | Secret | Specify the resource type. Here, use Secret to indicate that this snippet defines the authentication details.
metadata | name | Provide an app name to which this certification applies.
type | kubernetes.io/tls | Define the authentication type used to secure the app. Here, kubernetes.io/tls.
data | ca.crt, tls.crt, tls.key | Add the keys for each certification type: certificate authority certificate (ca.crt), TLS certificate (tls.crt), and TLS key (tls.key).
In the cloud management console, you can view Ingress controller status for any controller used as part of a project. This task assumes you are logged in to the cloud management console.
Istio provides secure connection, traffic management, and telemetry.
In the application YAML snippet or file, define the VirtualService and DestinationRule objects. These objects specify traffic routing rules for the recommendation-service app host. If the traffic rules match, traffic flows to the named destination (or a subset/version of it) as defined here. In this example, traffic is routed to the recommendation-service app host if it is sent from the Firefox browser. The specific policy version (subset) for each host helps you identify and manage routed data.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: recomm-svc
spec:
hosts:
- recommendation-service
http:
- match:
- headers:
user-agent:
regex: .*Firefox.*
route:
- destination:
host: recommendation-service
subset: v2
- route:
- destination:
host: recommendation-service
subset: v1
This DestinationRule YAML snippet defines a load-balancing traffic policy for the policy versions (subsets), where any healthy host can service the request.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: recomm-svc
spec:
host: recommendation-service
trafficPolicy:
loadBalancer:
simple: RANDOM
subsets:
- name: v1
labels:
version: v1
- name: v2
labels:
version: v2
In this YAML snippet, you can split traffic across the subsets by specifying a weight of 30 in one case and 70 in the other. You can also weight them evenly by giving each a weight value of 50.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: recomm-svc
spec:
hosts:
- recommendation-service
http:
- route:
- destination:
host: recommendation-service
subset: v2
weight: 30
- destination:
host: recommendation-service
subset: v1
weight: 70
In the cloud management console, you can view Istio service mesh status associated with applications in a project. This task assumes you are logged in to the cloud management console.
The Prometheus service included with Karbon Platform Services enables you to monitor endpoints you define in your project's Kubernetes apps. Karbon Platform Services allows one instance of Prometheus per project.
Prometheus collects metrics from your app endpoints. The Prometheus Deployments dashboard displays Service Domains where the service is deployed, service status, endpoints, and alerts. See Viewing Prometheus Service Status.
You can then decide how to view the collected metrics, through graphs or other Prometheus-supported means. See Create Prometheus Graphs with Grafana - Example.
Setting | Default Value or Description
---|---
Frequency interval to collect and store metrics (also known as scrape and store) | Every 60 seconds
Collection endpoint | /metrics
Default collection app | collect-metrics
Data storage retention time | 10 days
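To show what an app exposes for collection, here is a minimal sketch of a service that serves metrics on the default /metrics path. It assumes the prometheus_client Python package; the port number and metric name are arbitrary choices for the example.

# Minimal sketch: expose a counter on /metrics for Prometheus to scrape.
# Assumes the prometheus_client package; port 8000 and the metric name are arbitrary.
import time
from prometheus_client import Counter, start_http_server

REQUESTS = Counter("myapp_requests_total", "Total requests handled by my-app")

start_http_server(8000)  # serves http://<pod-ip>:8000/metrics
while True:
    REQUESTS.inc()
    time.sleep(60)  # the default scrape interval is every 60 seconds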
The Prometheus Deployments dashboard displays Service Domains where the service is deployed, service status, Prometheus endpoints, and alerts.
This example shows how you can set up a Prometheus metrics dashboard with Grafana.
This topic provides examples to help you expose Prometheus endpoints to Grafana, an open-source analytics and monitoring visualization application. You can then view scraped Prometheus metrics graphically.
The first ConfigMap YAML snippet example uses the environment variable {{.Services.Prometheus.Endpoint}} to define the service endpoint. If this YAML snippet is part of an application template created by an infra admin, a project user can then specify these per-Service Domain variables in their application. The second snippet provides configuration information for the Grafana server web page. The host name in this example is woodkraft2.ntnxdomain.com.
apiVersion: v1
kind: ConfigMap
metadata:
name: grafana-datasources
data:
prometheus.yaml: |-
{
"apiVersion": 1,
"datasources": [
{
"access":"proxy",
"editable": true,
"name": "prometheus",
"orgId": 1,
"type": "prometheus",
"url": "{{.Services.Prometheus.Endpoint}}",
"version": 1
}
]
}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: grafana-ini
data:
grafana.ini: |
[server]
domain = woodkraft2.ntnxdomain.com
root_url = %(protocol)s://%(domain)s:%(http_port)s/grafana
serve_from_sub_path = true
---
This YAML snippet provides a standard deployment specification for Grafana.
apiVersion: apps/v1
kind: Deployment
metadata:
name: grafana
spec:
replicas: 1
selector:
matchLabels:
app: grafana
template:
metadata:
name: grafana
labels:
app: grafana
spec:
containers:
- name: grafana
image: grafana/grafana:latest
ports:
- name: grafana
containerPort: 3000
resources:
limits:
memory: "2Gi"
cpu: "1000m"
requests:
memory: "1Gi"
cpu: "500m"
volumeMounts:
- mountPath: /var/lib/grafana
name: grafana-storage
- mountPath: /etc/grafana/provisioning/datasources
name: grafana-datasources
readOnly: false
- name: grafana-ini
mountPath: "/etc/grafana/grafana.ini"
subPath: grafana.ini
volumes:
- name: grafana-storage
emptyDir: {}
- name: grafana-datasources
configMap:
defaultMode: 420
name: grafana-datasources
- name: grafana-ini
configMap:
defaultMode: 420
name: grafana-ini
---
Define the Grafana Service object to use port 3000 and an Ingress controller to manage access to the service (through woodkraft2.ntnxdomain.com).
apiVersion: v1
kind: Service
metadata:
name: grafana
spec:
selector:
app: grafana
ports:
- port: 3000
targetPort: 3000
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: grafana
labels:
app: grafana
spec:
rules:
- host: woodkraft2.ntnxdomain.com
http:
paths:
- path: /grafana
backend:
serviceName: grafana
servicePort: 3000
You can create intelligent applications to run on the Service Domain infrastructure or the cloud where you have pushed collected data. You can implement application YAML files to use as a template, where you can customize the template by passing existing Categories associated with a Service Domain to it.
You need to create a project with at least one user to create an app.
You can undeploy and deploy any applications that are running on Service Domains or in the cloud. See Deploying and Undeploying a Kubernetes Application.
For Kubernetes apps running as privileged, you might have to specify the Kubernetes namespace where the application is deployed. You can do this by using the {{ .Namespace }} variable you can define in the app YAML template file. In this example, the resource kind of ClusterRoleBinding specifies the {{ .Namespace }} variable as the namespace where the subject ServiceAccount is deployed. As all app resources are deployed in the project namespace, specify the project name as well (here, name: my-sa).
apiVersion: v1
kind: ServiceAccount
metadata:
name: my-sa
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: my-role-binding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: my-cluster-role
subjects:
- kind: ServiceAccount
name: my-sa
namespace: {{ .Namespace }}
Create a Kubernetes application that you can associate with a project.
Update an existing Kubernetes application.
Delete an existing Kubernetes application.
You can undeploy and deploy any applications that are running on Service Domains or in the cloud. You can choose the Service Domains where you want the app to deploy or undeploy. Select the table or tile view on this page by clicking one of the view icons.
Undeploying a Kubernetes app deletes all config objects directly created for the app, including PersistentVolumeClaim data. Your app can create a PersistentVolumeClaim indirectly through StatefulSets. The following points describe scenarios when data is deleted and when data is preserved after you undeploy an app.
The Data Pipelines page enables you to create and view data pipelines, and also see any alerts associated with existing pipelines.
A data pipeline is a path for data that includes:
It also enables you to process and transform captured data for further consumption or processing.
To create a data pipeline, you must have already created or defined at least one of the following:
After you create one or more data pipelines, the Data Pipelines > Visualization page shows data pipelines and the relationship among data pipeline components.
You can view data pipelines associated with a Service Domain by clicking the filter icon under each title (Data Sources, Data Pipelines to Service Domain, Data Pipelines on Cloud) and selecting one or more Service Domains in the drop-down list.
Create a data pipeline, with a data source or real-time data source as input and infrastructure or external cloud as the output.
Update a data pipeline, including data source, function, and output destination. See also Naming Guidelines.
A function is code used to perform one or more tasks. Script languages include Python, Golang, and Node.js. A script can be as simple as text processing code or it could be advanced code implementing artificial intelligence, using popular machine learning frameworks like Tensorflow.
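For orientation, here is a minimal sketch of what a simple Python transform function could look like. The main(ctx, msg) entry point and the ctx.send(...) call are assumed conventions for this illustration, not the documented interface; the actual signature comes from the runtime environment you select.

# Hypothetical data pipeline function: uppercase incoming text payloads.
# The main(ctx, msg) signature and ctx.send(...) call are assumptions for
# this sketch; consult your runtime environment for the actual interface.
import logging

def main(ctx, msg):
    payload = msg.decode("utf-8", errors="replace")
    logging.info("received %d bytes", len(msg))
    return ctx.send(payload.upper().encode("utf-8"))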
An infrastructure administrator or project user can create a function, and later can edit or clone it. You cannot edit a function that is used by an existing data pipeline. In this case, you can clone it to make an editable copy.
Edit an existing function. To complete this task, log on to the cloud management console.
Other than the name and description, you cannot edit a function that is in use by an existing data pipeline. In this case, you can clone a function to duplicate it. See Cloning a Function.
Clone an existing function. To complete this task, log on to the cloud management console.
You can create machine learning (ML) models to enable AI inferencing for your projects. The ML Model feature provides a common interface for functions (that is, scripts) or applications to use the ML model Tensorflow runtime environment on the Service Domain.
The Karbon Platform Services Release Notes list currently supported ML model types.
An infrastructure admin can enable the AI Inferencing service for your project through Manage Services as described in Managing Project Services.
You can add multiple models and model versions to a single ML Model instance that you create. In this scenario, multiple client projects can access any model configured in the single ML model instance.
How to delete an ML model.
The ML Models page in the Karbon Platform Services management console shows version and activity status for all models.
A runtime environment is a command execution environment to run applications written in a particular language or associated with a Docker registry or file.
Karbon Platform Services includes standard runtime environments including but not limited to the following. These runtimes are read-only and cannot be edited, updated, or deleted by users. They are available to all projects, functions, and associated container registries.
How to create a user-added runtime environment for use with your project.
For example:
- https://index.docker.io/v1/ or registry-1.docker.io/distribution/registry:2.1
- https://aws_account_id.dkr.ecr.region.amazonaws.com
How to edit a user-added runtime environment from the cloud management console. You cannot edit the included read-only runtime environments.
How to remove a user-added runtime environment from the cloud management console. To complete this task, log on to the cloud management console.
Logging provides a consolidated landing page enabling you to collect, forward, and manage logs from selected Service Domains.
From Logging or System Logs > Logging or the summary page for a specific project, you can:
You can also collect logs and stream them to a cloud profile by using the kps command line available from the Nutanix public Github channel https://github.com/nutanix/karbon-platform-services/tree/master/cli. Readme documentation available there describes how to install the command line and includes sample YAML files for use with applications, data pipelines, data sources, and functions.
Access the Audit Log dashboard to view the most recent operations performed by users.
Karbon Platform Services maintains an audit trail of any operations performed by users in the last 30 days and displays them in an Audit Trail dashboard in the cloud management console. To display the dashboard, log on to the console, open the navigation menu, and click Audit Trail.
Click Filters to display operations by Operation Type, User Name, Resource Name, Resource Type, Time Range, and other filter types so you can narrow your search. Filter Operation Type by CREATE, UPDATE, and DELETE actions.
Log Collector examines the selected project application and collects logs and configuration information useful for troubleshooting issues and finding out details about an app.
Create, edit, and delete log forwarding policies to help make collection more granular and then forward those Service Domain logs to the cloud (for example, to an AWS CloudWatch endpoint such as monitoring.us-west-2.amazonaws.com).
Create a log collector for log forwarding by using the kps command line.
Nutanix has released the kps command line on its public Github channel. The repository page at https://github.com/nutanix/karbon-platform-services/tree/master/cli describes how to install the command line and includes sample YAML files for use with applications, data pipelines, data sources, and functions.
Each sample YAML file defines a log collector. Log collectors can be:
See the most up-to-date sample YAML files and descriptions at https://github.com/nutanix/karbon-platform-services/tree/master/cli.
Create a log collector defined in a YAML file:
user@host$ kps create -f infra-logcollector-cloudwatch.yaml
This sample infrastructure log collector collects logs for a specific tenant, then forwards them to a specific cloud profile (for example: AWS CloudWatch ).
To enable AWS CloudWatch log streaming, you must specify awsRegion, cloudwatchStream, and cloudwatchGroup.
kind: logcollector
name: infra-log-name
type: infrastructure
destination: cloudwatch
cloudProfile: cloud-profile-name
awsRegion: us-west-2
cloudwatchGroup: cloudwatch-group-name
cloudwatchStream: cloudwatch-stream-name
filterSourceCode: ""
Field Name | Value or Subfield Name | Description
---|---|---
kind | logcollector | Specify the resource type.
name | infra-log-name | Specify the unique log collector name.
type | infrastructure | Log collector for infrastructure.
destination | cloudwatch | Cloud destination type.
cloudProfile | cloud-profile-name | Specify an existing Karbon Platform Services cloud profile.
awsRegion | us-west-2 or monitoring.us-west-2.amazonaws.com, for example | Valid AWS region name or CloudWatch endpoint fully qualified domain name.
cloudwatchGroup | cloudwatch-group-name | Log group name.
cloudwatchStream | cloudwatch-stream-name | Log stream name.
filterSourceCode | | Specify the log conversion code.
This sample project log collector collects logs for a specific tenant, then forwards them to a specific cloud profile (for example: AWS CloudWatch ).
kind: logcollector
name: project-log-name
type: project
project: project-name
destination: cloud-destination type
cloudProfile: cloud-profile-name
awsRegion: us-west-2
cloudwatchGroup: cloudwatch-group-name
cloudwatchStream: cloudwatch-stream-name
filterSourceCode: ""
Field Name | Value or Subfield Name | Description
---|---|---
kind | logcollector | Specify the resource type.
name | project-log-name | Specify the unique log collector name.
type | project | Log collector for a specific project.
project | project-name | Specify the project name.
destination | cloud-destination-type | Cloud destination type such as CloudWatch.
cloudProfile | cloud-profile-name | Specify an existing Karbon Platform Services cloud profile.
awsRegion | us-west-2 or monitoring.us-west-2.amazonaws.com, for example | Valid AWS region name or CloudWatch endpoint fully qualified domain name.
cloudwatchGroup | cloudwatch-group-name | Log group name.
cloudwatchStream | cloudwatch-stream-name | Log stream name.
filterSourceCode | | Specify the log conversion code.
Copyright 2022 Nutanix, Inc. Nutanix, the Nutanix logo and all Nutanix product and service names mentioned herein are registered trademarks or trademarks of Nutanix, Inc. in the United States and other countries. All other brand names used herein are for identification purposes only and may be the trademarks of their respective holder(s) and no claim of rights is made therein.