Cloud Manager collects log information for both MongoDB processes and its agents. For MongoDB processes, you can access both real-time logs and on-disk logs for mongod and mongos processes.
The MongoDB Agent issues the getLog command with every monitoring ping. This command collects log entries from the RAM cache of each MongoDB process.
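For reference, the command document the agent issues takes roughly the following form; this is a minimal sketch run against the admin database, where "global" returns the most recent entries from the in-memory log:
{ "getLog" : "global" }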
Cloud Manager enables real-time log collection by default. You can disable log collection for all MongoDB deployments in a Cloud Manager project or for individual MongoDB deployments. If you disable log collection, Cloud Manager continues to display previously collected log entries.
The four buttons are listed in the following order, left to right: Shards, Configs, Mongos, and BIs.
Process | Displays |
---|---|
Shards | mongod processes that host your data. |
Configs | mongod processes that run as config servers to store a sharded cluster’s metadata. |
Mongos | mongos processes that route data in a sharded cluster. |
BIs | BI processes that access data in a sharded cluster. |
The tab displays log information. If the tab is not displayed, see Enable or Disable Log Collection for a Deployment to enable log collection.
If you turn off log collection, existing log entries remain in the Logs tab, but Cloud Manager does not collect new entries.
Cloud Manager collects on-disk logs even if the MongoDB instance is not running. The MongoDB Agent collects the logs from the location you specified in the MongoDB systemLog.path configuration option. The MongoDB on-disk logs are a subset of the real-time logs and therefore less verbose.
Note
This option isn't available for deployed MongoDB processes if the systemLog.destination property is set to syslog.
You can configure log rotation for the on-disk logs. Cloud Manager rotates logs by default.
This procedure rotates both system and audit logs for Cloud Manager.
Cloud Manager can rotate and compress logs for clusters that the MongoDB Agent manages. If the MongoDB Agent only monitors a cluster, it ignores that cluster’s logs.
Important
If you’re running MongoDB Enterprise version 5.0 or later and MongoDB Agent 11.11.0.7355 or later, you can:
If you’re running earlier versions of MongoDB Enterprise or the MongoDB Agent, Cloud Manager:
MongoDB Community users can rotate, compress, and delete the server logs only.
Note
When using this feature, disable any platform-based log-rotation services like logrotate. If the MongoDB Agent only monitors the cluster, that cluster may use platform-based services.
Toggle System Log Rotation to ON to rotate server logs.
MongoDB Enterprise users running MongoDB Enterprise version 5.0 or later and MongoDB Agent 11.11.0.7355 and later can also toggle Audit Log Rotation to ON to rotate audit logs and configure audit log rotation separately.
If you’re running earlier versions of MongoDB Enterprise or the MongoDB Agent, setting System Log Rotation to ON also rotates audit logs.
Set log rotation to OFF if you don’t want Cloud Manager to rotate its logs. Log rotation is OFF by default.
After you enable log rotation, Cloud Manager displays additional log rotation settings.
Cloud Manager rotates the logs on your MongoDB hosts per the following settings:
Field | Necessity | Action | Default |
---|---|---|---|
Size Threshold (MB) | Required | Cloud Manager rotates log files that exceed this maximum log file size. | 1000 |
Time Threshold (Hours) | Required | Cloud Manager rotates logs that exceed this duration. | 24 |
Max Uncompressed Files | Optional | Log files can remain uncompressed until they exceed this number of files. Cloud Manager compresses the oldest log files first. If you leave this setting empty, Cloud Manager uses the default of 5. | 5 |
Max Percent of Disk | Optional | Log files can take up to this percent of disk space on your MongoDB host's log volume. Cloud Manager deletes the oldest log files once they exceed this disk threshold. If you leave this setting empty, Cloud Manager uses the default of 2%. | 2% |
Total Number of Files | Optional | Total number of log files. If a number is not specified, the total number of log files defaults to 0 and is determined by other Rotate Logs settings. | 0 |
When you are done, click Save to review your changes.
Otherwise, click Cancel and continue making changes.
Cloud Manager collects logs for all your MongoDB Agents.
The page displays logs for the type of agent selected in the View drop-down list. The page filters logs according to any filters selected through the gear icon.
To display logs for a different type of agent, use the View drop-down list.
To display logs for specific hosts or MongoDB processes, click the gear icon and make your selections.
To clear filters, click the gear icon and click Remove Filters .
To download the selected logs, click the gear icon and click Download as CSV File .
Note
To view logs for a specific agent, you can alternatively click the Agents tab’s All Agents list and then click view logs for the agent.
If you use Automation to manage your cluster, follow this procedure to configure rotation of the Agent log files.
Note
If you haven’t enabled Automation, see the following documentation for information about how to manually configure logging settings in the agent configuration files:
Click the pencil icon to edit the Monitoring Agent or Backup Agent log settings:
Name | Type | Description |
---|---|---|
Linux Log File Path | string | Conditional: Logs on a Linux host. The path to which the agent writes its logs on a Linux host. The suggested value is /var/log/mongodb-mms-automation/monitoring-agent.log. |
Windows Log File Path | string | Conditional: Logs on a Windows host. The path to which the agent writes its logs on a Windows host. The suggested value is %SystemDrive%\MMSAutomation\log\mongodb-mms-automation\monitoring-agent.log. |
Rotate Logs | Toggle | A toggle to select if the logs should be rotated. |
Size Threshold (MB) | integer | The size at which the logs rotate automatically. The default value is 1000. |
Time Threshold (Hours) | integer | The duration after which the logs rotate automatically. The default value is 24. |
Max Uncompressed Files | integer | Optional. The greatest number of log files, including the current log file, that should stay uncompressed. The suggested value is 5. |
Max Percent of Disk | integer | Optional. The greatest percentage of disk space on your MongoDB hosts that the logs should consume. The suggested value is 2%. |
Total Number of Files | integer | Optional. The total number of log files. If a number is not specified, the total number of log files defaults to 0 and is determined by other Rotate Logs settings. |
When you are done, click Save .
Otherwise, click Cancel and continue making changes.
Cloud Manager provides a new organizations and projects hierarchy to help you manage your Cloud Manager deployments. Groups are now known as projects. You can create many projects in an organization.
In the organizations and projects hierarchy, an organization can contain many projects (previously referred to as groups). Under this structure, you can:
Groups are now projects. Previously, users managed deployments by groups, where each group was managed separately even if a user belonged to multiple groups.
If you have existing groups, organizations have been automatically created for your groups (now projects), and your groups have been placed under these organizations.
If your groups share the same billing settings, they have been placed in the same organization.
Deployments are now associated with projects. As before, deployments must have unique names within projects. See Projects and Edit Project Settings .
You can create teams of users and then assign teams of users to projects. See Cloud Manager Access .
You can view and accept invitations to organizations and projects. See Invitations to Organizations and Projects .
This guide shows you how to configure federated authentication using Okta as your IdP .
After integrating Okta and Cloud Manager, you can use your company’s credentials to log in to Cloud Manager and other MongoDB cloud services.
Note
If you are using Okta’s built-in MongoDB Cloud app, you can use Okta’s documentation.
If you are creating your own SAML app, use the procedures described here.
To use Okta as an IdP for Cloud Manager, you must have:
Throughout the following procedure, it is helpful to have one browser tab open to your Federation Management Console and one tab open to your Okta account.
When you download the signature certificate from Okta, save it with a .cer extension instead of .cert.
In the FMC dashboard, fill in the data fields with the following values:
Field | Value |
---|---|
Configuration Name | A descriptive name of your choosing. |
Issuer URI and Single Sign-On URL | Click the Fill With Placeholder Values button to the right of the text fields. You will get the real values from Okta in a later step. |
Identity Provider Signature Certificate | Click the Choose File button to upload the signature certificate. You can either: |
Request Binding | HTTP POST |
Response Signature Algorithm | SHA-256 |
Click the Next button.
In this step, copy values from the Cloud Manager FMC to the Okta Create SAML Integration page.
Okta Data Field | Value |
---|---|
Single sign on URL | Use the Assertion Consumer Service URL from the Cloud Manager FMC. Checkboxes: |
Audience URI (SP Entity ID) | Use the Audience URI from the Cloud Manager FMC. |
Default RelayState | Optionally, add a RelayState URL to your IdP to send users to a URL you choose and avoid unnecessary redirects after login. You can use: |
Name ID format | Unspecified |
Application username | |
Update application username on | Create and update |
Click the Show Advanced Settings link in the Okta configuration page and ensure that the following values are set:
Okta Data Field | Value |
---|---|
Response | Signed |
Assertion Signature | Signed |
Signature Algorithm | RSA-SHA256 |
Digest Algorithm | SHA256 |
Assertion Encryption | Unencrypted |
Leave the remaining Advanced Settings fields in their default state.
Scroll down to the Attribute Statements (Optional) section and create three attributes with the following values:
Name | Name Format | Value |
---|---|---|
email | Unspecified | user.email |
firstName | Unspecified | user.firstName |
lastName | Unspecified | user.lastName |
Important
The values in the Name column are case-sensitive. Enter them exactly as shown.
Note
These values may be different if Okta is connected to an Active Directory. For the appropriate values, use the Active Directory fields that contain a user’s first name, last name, and full email address.
Click the Next button in the Okta configuration.
Select the radio button marked I’m an Okta customer adding an internal app .
Click the Finish button.
On the Okta application page, click the View Setup Instructions button in the middle of the page.
Note
The Okta setup instructions appear in a new browser tab.
In the Cloud Manager FMC , click the Finish button to return to the Identity Providers page. Click the Modify button for your newly created IdP .
Fill in the following text fields:
FMC Data Field | Value |
---|---|
Issuer URI | Use the Identity Provider Issuer value from the Okta Setup Instructions page. |
Single Sign-on URL | Use the Identity Provider Single Sign-On URL value from the Okta Setup Instructions page. |
Close the Okta setup instructions browser tab.
Click the Next button on the Cloud Manager FMC page.
Click the Finish button on the FMC Edit Identity Provider page.
Mapping your domain to the IdP lets Cloud Manager know that users from your domain should be directed to the Login URL for your identity provider configuration.
When users visit the Cloud Manager login page, they enter their email address. If the email domain is associated with an IdP, they are sent to the Login URL for that IdP.
Important
You can map a single domain to multiple identity providers. If you do, users who log in using the MongoDB Cloud console are automatically redirected to the first matching IdP mapped to the domain.
To log in using an alternative identity provider, users must either:
Use the Federation Management Console to map your domain to the IdP :
Click Add a Domain .
On the Domains screen, click Add Domain .
Enter the following information for your domain mapping:
Field | Description |
---|---|
Display Name | Name to easily identify the domain. |
Domain Name | Domain name to map. |
Click Next .
Note
You can choose the verification method once. It cannot be modified. To select a different verification method, delete and recreate the domain mapping.
Select the appropriate tab based on whether you are verifying your domain by uploading an HTML file or creating a DNS TXT record:
Upload an HTML file containing a verification key to verify that you own your domain. Download the mongodb-site-verification.html file that Cloud Manager provides and upload it to your web site so that it is accessible at <https://host.domain>/mongodb-site-verification.html.
Create a DNS TXT record with your domain provider to verify that you own your domain. Each DNS record associates a specific Cloud Manager organization with a specific domain.
Click DNS Record .
Click Next .
Copy the provided TXT record. The TXT record has the following form:
mongodb-site-verification=<32-character string>
Log in to your domain name provider (such as GoDaddy.com or networksolutions.com).
Add the TXT record that Cloud Manager provides to your domain.
Return to Cloud Manager and click Finish .
The Domains screen displays both unverified and verified domains you’ve mapped to your IdP . To verify your domain, click the target domain’s Verify button. Cloud Manager shows whether the verification succeeded in a banner at the top of the screen.
After successfully verifying your domain, use the Federation Management Console to associate the domain with Okta:
Important
Before you begin testing, copy and save the Bypass SAML Mode URL for your IdP . Use this URL to bypass federated authentication in the event that you are locked out of your Cloud Manager organization.
While testing, keep your session logged in to the Federation Management Console to further ensure against lockouts.
To learn more about Bypass SAML Mode , see Bypass SAML Mode .
Use the Federation Management Console to test the integration between your domain and Okta:
Example
If your verified domain is mongodb.com, enter alice@mongodb.com.
If you mapped your domain correctly, you’re redirected to your IdP to authenticate. If authenticating with your IdP succeeds, you’re redirected back to Cloud Manager.
Note
You can bypass the Cloud Manager log in page by navigating directly to your IdP ’s Login URL . The Login URL takes you directly to your IdP to authenticate.
Use the Federation Management Console to assign your domain’s users access to specific Cloud Manager organizations:
Click View Organizations .
Cloud Manager displays all organizations where you are an Organization Owner. Organizations that are not already connected to the Federation Application have a Connect button in the Actions column.
Click the desired organization’s Connect button.
From the Organizations screen in the management console:
Click the Name of the organization you want to map to an IdP .
On the Identity Provider screen, click Apply Identity Provider .
Cloud Manager directs you to the Identity Providers screen which shows all IdPs you have linked to Cloud Manager.
For the IdP you want to apply to the organization, click Modify .
At the bottom of the Edit Identity Provider form, select the organizations to which this IdP applies.
Click Next .
Click Finish .
You can configure the following advanced options for federated authentication for greater control over your federated users and authentication flow:
Note
The following advanced options for federated authentication require you to map an organization .
All users you assigned to the Okta application can log in to Cloud Manager using their Okta credentials on the Login URL . Users have access to the organizations you mapped to your IdP .
Important
You can map a single domain to multiple identity providers. If you do, users who log in using the MongoDB Cloud console are automatically redirected to the first matching IdP mapped to the domain.
To log in using an alternative identity provider, users must either:
If you selected a default organization role, new users who log in to Cloud Manager using the Login URL have the role you specified.
The Automation uses an automation configuration to determine the desired state of a MongoDB deployment and to effect changes as needed. If you modify the deployment through the Cloud Manager web interface, you never need to manipulate this configuration.
If you are using the Automation without Cloud Manager, you can construct and distribute the configuration manually.
Optional fields are marked as such. A field that takes a <number> as its value can take integers and floating point numbers.
This lists the version of the automation configuration.
"version" : "<integer>"
Name | Type | Necessity | Description |
---|---|---|---|
version | integer | Required | Revision of this automation configuration file. |
Cloud Manager performs automatic version downloads and runs startup scripts in the directory set in options.downloadBase.
"options" : {
"downloadBase" : "<string>",
}
Name | Type | Necessity | Description |
---|---|---|---|
options | object | Required | Path for automatic downloads of new versions. |
options.downloadBase | string | Required | Directory on Linux and UNIX platforms for automatic version downloads and startup scripts. |
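For example, a minimal options object might look like the following; the directory shown is illustrative, not a value this page mandates:
"options" : {
  "downloadBase" : "/var/lib/mongodb-mms-automation"
}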
The mongoDbVersions[n] array defines specification objects for the MongoDB instances found in the processes array. Each MongoDB instance in the processes array must have a specification object in this array.
"mongoDbVersions[n]" : [
{
"name" : "<string>",
"builds" : [
{
"platform" : "<string>",
"url" : "<string>",
"gitVersion" : "<string>",
"modules" : [ "<string>", ... ],
"architecture" : "<string>",
"bits" : "<integer>",
"win2008plus" : "<Boolean>",
"winVCRedistUrl" : "<string>",
"winVCRedistOptions" : [ "<string>", ... ],
"winVCRedistDll" : "<string>",
"winVCRedistVersion" : "<string>"
},
...
],
},
...
]
Name | Type | Necessity | Description |
---|---|---|---|
mongoDbVersions[n] | array of objects | Required | Specification objects for the MongoDB instances found in the processes array. Each MongoDB instance in processes must have a specification object in mongoDbVersions[n] . |
mongoDbVersions[n].name | string | Required | Name of the specification object. The specification object is attached to a MongoDB instance through the instance’s processes.version parameter in this configuration. |
mongoDbVersions[n].builds[k] | array of objects | Required | Builds available for this MongoDB instance. |
mongoDbVersions[n].builds[k].platform | string | Required | Platform for this MongoDB instance. |
mongoDbVersions[n].builds[k].url | string | Required | URL from which to download MongoDB for this instance. |
mongoDbVersions[n].builds[k].gitVersion | string | Required | Commit identifier that identifies the state of the code used to build the MongoDB process. The MongoDB buildInfo command returns the gitVersion identifier. |
mongoDbVersions[n].builds[k].modules | array | Required | List of modules for this version. Corresponds to the modules parameter that the buildInfo command returns. |
mongoDbVersions[n].builds[k].architecture | string | Required | Processor’s architecture. Cloud Manager accepts amd64 or ppc64le . |
mongoDbVersions[n].builds[k].bits | integer | Deprecated | Processor’s bus width. Don’t remove or make modifications to this parameter. |
mongoDbVersions[n].builds[k].win2008plus | Boolean | Optional | Set to true if this is a Windows build that requires either Windows 7 or later or Windows Server 2008 R2 or later. |
mongoDbVersions[n].builds[k].winVCRedistUrl | string | Optional | URL from which the required version of the Microsoft Visual C++ redistributable can be downloaded. |
mongoDbVersions[n].builds[k].winVCRedistOptions | array of strings | Optional | String values that list the command-line options to be specified when running the Microsoft Visual C++ redistributable installer. Each command-line option is a separate string in the array. |
mongoDbVersions[n].builds[k].winVCRedistDll | string | Optional | Name of the Microsoft Visual C++ runtime DLL file that the agent checks to determine if a new version of the Microsoft Visual C++ redistributable is needed. |
mongoDbVersions[n].builds[k].winVCRedistVersion | string | Optional | Minimum version of the Microsoft Visual C++ runtime DLL that must be present to skip over the installation of the Microsoft Visual C++ redistributable. |
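As an illustration, a specification object for a single Linux build might look like the following sketch; the version name and download URL are placeholders, and the gitVersion would come from the buildInfo command:
"mongoDbVersions" : [
  {
    "name" : "6.0.5",
    "builds" : [
      {
        "platform" : "linux",
        "url" : "https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-rhel80-6.0.5.tgz",
        "gitVersion" : "<gitVersion reported by buildInfo>",
        "modules" : [ ],
        "architecture" : "amd64",
        "bits" : 64
      }
    ]
  }
]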
agentVersion specifies the version of the MongoDB Agent.
Note
While you can update the MongoDB Agent version through this configuration property, you should use the Update Agent Versions endpoint to ensure your versions are up to date.
"agentVersion" : {
"name" : "<string>",
"directoryUrl" : "<string>"
}
Name | Type | Necessity | Description |
---|---|---|---|
agentVersion | object | Optional | Version of the MongoDB Agent to run. If the running version does not match this setting, the MongoDB Agent downloads the specified version, shuts itself down, and starts the new version. |
agentVersion.name | string | Optional | Desired version of the MongoDB Agent. |
agentVersion.directoryUrl | string | Optional | URL from which to download the MongoDB Agent. |
The monitoringVersions array specifies the version of the Monitoring Agent. Cloud Manager has made this parameter obsolete. To update the monitoring log settings, use the Update Monitoring Configuration Settings endpoint.
"monitoringVersions" : [
{
"name" : "<string>",
"hostname" : "<string>",
"urls" : {
"<platform1>" : {
"<build1>" : "<string>",
...,
"default" : "<string>"
},
...
},
"baseUrl" : "<string>",
"logPath" : "<string>",
"logRotate" : {
"sizeThresholdMB" : <number>,
"timeThresholdHrs" : <integer>,
"numUncompressed": <integer>,
"percentOfDiskspace" : <number>,
"numTotal" : <integer>
}
},
...
]
Name | Type | Necessity | Description |
---|---|---|---|
monitoringVersions | array of objects | Optional | Objects that define version information for each Monitoring Agent. |
monitoringVersions.name | string | Required | Version of the Monitoring Agent. See also MongoDB Compatibility Matrix. Important: This property is read-only. Any modifications made to this property are not reflected when updating the Monitoring Agent through the API. To update the Monitoring Agent version, use this endpoint. |
monitoringVersions.hostname | string | Required | FQDN of the host that runs the Monitoring Agent. If the Monitoring Agent is not running on the host, Cloud Manager installs the agent from the location specified in monitoringVersions.urls . |
monitoringVersions.urls | object | Required | Platform- and build-specific URL s from which to download the Monitoring Agent. |
monitoringVersions.urls.<platform> | object | Required | Label that identifies an operating system and its version. The field contains an object with key-value pairs, where each key is either the name of a build or default and each value is a URL for downloading the Monitoring Agent. The object must include the default key set to the default download URL for the platform. |
monitoringVersions.baseUrl | string | Required | Base URL used for the mmsBaseUrl setting. |
monitoringVersions.logPath | string | Optional | Directory where the agent stores its logs. The default is to store logs in /dev/null . |
monitoringVersions.logRotate | object | Optional | Enables log rotation for the MongoDB logs for a process. |
monitoringVersions.logRotate.sizeThresholdMB | number | Required | Maximum size in MB for an individual log file before rotation. |
monitoringVersions.logRotate.timeThresholdHrs | integer | Required | Maximum time in hours for an individual log file before rotation. |
monitoringVersions.logRotate.numUncompressed | integer | Optional | Maximum number of total log files to leave uncompressed, including the current log file. The default is 5 . In earlier versions of Cloud Manager, this field was named maxUncompressed . The earlier name is still recognized, though the new version is preferred. |
monitoringVersions.logRotate.percentOfDiskspace | number | Optional | Maximum percentage of total disk space all log files should take up before deletion. The default is .02 . |
monitoringVersions.logRotate.numTotal | integer | Optional | Total number of log files. If a number is not specified, the total number of log files defaults to 0 and is determined by other monitoringVersions.logRotate settings. |
The backupVersions array specifies the version of the Backup Agent. Cloud Manager has made this parameter obsolete. To update the backup log settings, use the Update Backup Configuration Settings endpoint.
"backupVersions[n]" : [
{
"name" : "<string>",
"hostname" : "<string>",
"urls" : {
"<platform1>" : {
"<build1>" : "<string>",
...,
"default" : "<string>"
},
...
},
"baseUrl" : "<string>",
"logPath" : "<string>",
"logRotate" : {
"sizeThresholdMB" : "<number>",
"timeThresholdHrs" : "<integer>",
"numUncompressed": "<integer>",
"percentOfDiskspace" : "<number>",
"numTotal" : "<integer>"
}
},
...
]
Name | Type | Necessity | Description |
---|---|---|---|
backupVersions[n] | array of objects | Optional | Objects that define version information for each Backup Agent. |
backupVersions[n].name | string | Required | Version of the Backup Agent. See also MongoDB Compatibility Matrix. Important: This property is read-only. Any modifications made to this property are not reflected when updating the Backup Agent through the API. To update the Backup Agent version, see this endpoint. |
backupVersions[n].hostname | string | Required | FQDN of the host that runs the Backup Agent. If the Backup Agent is not running on the host, Cloud Manager installs the agent from the location specified in backupVersions[n].urls . |
backupVersions[n].urls | object | Required | Platform- and build-specific URL s from which to download the Backup Agent. |
backupVersions[n].urls.<platform> | object | Required | Label that identifies an operating system and its version. The field contains an object with key-value pairs, where each key is either the name of a build or default and each value is a URL for downloading the Backup Agent. The object must include the default key set to the default download URL for the platform. |
backupVersions[n].baseUrl | string | Required | Base URL used for the mothership and https settings in the Custom Settings. For example, for "baseUrl" : "https://cloud.mongodb.com", the backup configuration fields would have these values: mothership=api-backup.mongodb.com and https=true. |
backupVersions[n].logPath | string | Optional | Directory where the agent stores its logs. The default is to store logs in /dev/null . |
backupVersions[n].logRotate | object | Optional | Enables log rotation for the MongoDB logs for a process. |
backupVersions[n].logRotate.sizeThresholdMB | number | Required | Maximum size in MB for an individual log file before rotation. |
backupVersions[n].logRotate.timeThresholdHrs | integer | Required | Maximum time in hours for an individual log file before rotation. |
backupVersions[n].logRotate.numUncompressed | integer | Optional | Maximum number of total log files to leave uncompressed, including the current log file. The default is 5 . |
backupVersions[n].logRotate.percentOfDiskspace | number | Optional | Maximum percentage of total disk space all log files should take up before deletion. The default is .02 . |
backupVersions[n].logRotate.numTotal | integer | Optional | If a number is not specified, the total number of log files defaults to 0 and is determined by other backupVersion.logRotate settings. |
The processes array determines the configuration of your MongoDB instances. Using this array, you can:
"processes": [{
"<args>": {},
"alias": "<string>",
"authSchemaVersion": "<integer>",
"backupRestoreUrl": "<string>",
"cluster": "<string>",
"defaultRWConcern": {
"defaultReadConcern": {
"level": "<string>"
},
"defaultWriteConcern": {
"j": "<boolean>",
"w": "<string>",
"wtimeout": "<integer>"
}
},
"disabled": "<Boolean>",
"featureCompatibilityVersion": "<string>",
"hostname": "<string>",
"lastCompact" : "<dateInIso8601Format>",
"lastRestart" : "<dateInIso8601Format>",
"lastResync" : "<dateInIso8601Format>",
"lastKmipMasterKeyRotation" : "<dateInIso8601Format>",
"logRotate": {
"sizeThresholdMB": "<number>",
"timeThresholdHrs": "<integer>",
"numUncompressed": "<integer>",
"percentOfDiskspace": "<number>",
"numTotal": "<integer>"
},
"manualMode": "<Boolean>",
"name": "<string>",
"numCores": "<integer>",
"processType": "<string>",
"version": "<string>"
}]
Name | Type | Necessity | Description |
---|---|---|---|
processes | array | Required | Contains objects that define the mongos and mongod instances that Cloud Manager monitors. Each object defines a different instance. |
processes[n].args2_6 | object | Required | MongoDB configuration object for MongoDB versions 2.6 and later. See also Supported configuration options. |
processes[n].alias | string | Optional | Hostname alias (often a DNS CNAME) for the host on which the process runs. If an alias is specified, the MongoDB Agent prefers this alias over the hostname specified in processes.hostname when connecting to the host. You can also specify this alias in replicaSets.host and sharding.configServer . |
processes[n].authSchemaVersion | integer | Required | Schema version of the user credentials for MongoDB database users. This should match all other elements of the processes array that belong to the same cluster. See also Upgrade to SCRAM-SHA-1 in the MongoDB 3.0 release notes. |
processes[n].backupRestoreUrl | string | Optional | Delivery URL for the restore. Cloud Manager sets this when creating a restore. See also Automate Backup Restoration through the API. |
processes[n].cluster | string | Conditional | Name of the sharded cluster. Set this value to the same value as the sharding.name parameter in the sharding array for the mongos. |
defaultRWConcern.defaultReadConcern.level | string | Optional | Consistency and isolation properties set for the data read from replica sets and replica set shards. MongoDB Atlas accepts the following values: |
defaultRWConcern.defaultWriteConcern.j | boolean | Optional | Flag that indicates whether the write acknowledgement must be written to the on-disk journal. |
defaultRWConcern.defaultWriteConcern.w | string | Optional | Desired number of mongod instances that must acknowledge a write operation in replica sets and replica set shards. MongoDB Atlas accepts the following values: |
defaultRWConcern.defaultWriteConcern.wtimeout | number | Optional | Desired time limit for the write concern expressed in milliseconds. Set this value when you set defaultRWConcern.defaultWriteConcern.w to a value greater than 1 . |
processes[n].disabled | Boolean | Optional | Flag that indicates if this process should be shut down. Set to true to shut down the process. |
processes[n].featureCompatibilityVersion | string | Required | Version of MongoDB with which this process has feature compatibility. Changing this value can enable or disable certain features that persist data incompatible with MongoDB versions earlier or later than the featureCompatibilityVersion you choose. See also setFeatureCompatibilityVersion. |
processes[n].hostname | string | Required | Name of the host that serves this process. This defaults to localhost . |
processes[n].lastCompact | string | Optional | Timestamp in ISO 8601 date and time format in UTC when Cloud Manager last reclaimed free space on a cluster's disks. During certain operations, MongoDB might move or delete data but it doesn't free the currently unused space. Cloud Manager reclaims the disk space in a rolling fashion across members of the replica set or shards. To reclaim this space: To remove any ambiguity as to when you intend to reclaim the space on the cluster's disks, specify a time zone with your ISO 8601 timestamp. For example, to set processes.lastCompact to 28 January 2021 at 2:43:52 PM US Central Standard Time, use 2021-01-28T14:43:52-06:00. |
processes[n].lastRestart | string | Optional | Timestamp in ISO 8601 date and time format in UTC when Cloud Manager last restarted this process. If you set this parameter to the current timestamp, Cloud Manager forces a restart of this process after you upload this configuration. If you set this parameter for multiple processes in the same cluster, the Cloud Manager restarts the selected processes in a rolling fashion across members of the replica set or shards. |
processes[n].lastResync | string | Optional | Timestamp in ISO 8601 date and time format in UTC of the last initial sync process that Cloud Manager performed on the node. To trigger the init sync process on the node immediately, set this value to the current time as an ISO 8601 timestamp. Warning: Use this parameter with caution. During initial sync, Automation removes the entire contents of the node's dbPath directory. If you set this parameter: See also Initial Sync. |
processes[n].lastKmipMasterKeyRotation | string | Optional | Timestamp in ISO 8601 date and time format in UTC when Cloud Manager last rotated the master KMIP key. If you set this parameter to the current timestamp, Cloud Manager rotates the key after you upload this configuration. |
processes[n].logRotate | object | Optional | MongoDB configuration object for rotating the MongoDB logs of a process. |
processes[n].logRotate. numTotal | integer | Optional | Total number of log files that Cloud Manager retains. If you don’t set this value, the total number of log files defaults to 0 . Cloud Manager bases rotation on your other processes.logRotate settings. |
processes[n].logRotate. numUncompressed | integer | Optional | Maximum number of total log files to leave uncompressed, including the current log file. The default is 5 . |
processes[n].logRotate.percentOfDiskspace | number | Optional | Maximum percentage of total disk space that Cloud Manager can use to store the log files, expressed as a decimal. If this limit is exceeded, Cloud Manager deletes compressed log files until it meets this limit. Cloud Manager deletes the oldest log files first. The default is 0.02. |
processes[n].logRotate. sizeThresholdMB | number | Required | Maximum size in MB for an individual log file before Cloud Manager rotates it. Cloud Manager rotates the log file immediately if it meets the value given in either this sizeThresholdMB or the processes.logRotate.timeThresholdHrs limit. |
processes[n].logRotate.timeThresholdHrs | integer | Required | Maximum duration in hours for an individual log file before the next rotation. The time is measured since the last rotation. Cloud Manager rotates the log file once the file meets either this timeThresholdHrs or the processes.logRotate.sizeThresholdMB limit. |
processes[n].manualMode | Boolean | Optional | Flag that indicates whether the MongoDB Agent automates this process. |
processes[n].name | string | Required | Unique name to identify the instance. |
processes[n].numCores | integer | Optional | Number of cores that Cloud Manager should bind to this process. The MongoDB Agent distributes processes across the cores as evenly as possible. |
processes[n].processType | string | Required | Type of MongoDB process being run. Cloud Manager accepts mongod or mongos for this parameter. |
processes[n].version | string | Required | Name of the mongoDbVersions specification used with this instance. |
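To tie these fields together, a minimal standalone mongod entry might look like the following sketch; the hostname, paths, and version name are placeholders, and the version value must match a name defined in mongoDbVersions:
"processes" : [
  {
    "name" : "myStandalone",
    "processType" : "mongod",
    "version" : "6.0.5",
    "hostname" : "host0.example.com",
    "authSchemaVersion" : 5,
    "featureCompatibilityVersion" : "6.0",
    "args2_6" : {
      "net" : { "port" : 27017 },
      "storage" : { "dbPath" : "/data/mongodb" },
      "systemLog" : {
        "destination" : "file",
        "path" : "/data/mongodb/mongodb.log"
      }
    },
    "logRotate" : {
      "sizeThresholdMB" : 1000,
      "timeThresholdHrs" : 24
    }
  }
]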
clusterWideConfigurations specifies the parameters to set across a replica set or sharded cluster without requiring a rolling restart .
"clusterWideConfigurations" : {
"<replicaSetID/clusterName>": {
"changeStreamOptions": {
"preAndPostImages": {
"expireAfterSeconds": <integer>
}
}
}
}
Name | Type | Necessity | Description |
---|---|---|---|
replicaSetID/clusterName | object | Optional | The change stream options to apply to the replica set or sharded cluster. MongoDB Agent only checks if this configuration is in a valid JSON format but doesn’t check the values for correctness. |
changeStreamOptions.preAndPostImages.expireAfterSeconds | number | Required | Retention policy of change stream pre- and post-images in seconds. If you omit the value, the cluster retains the pre- and post-images until it removes the corresponding change stream events from the oplog. If you remove this value, the MongoDB Agent only removes this parameter from its automation configuration, but not from the server. See also changeStreamOptions. |
The replicaSets array defines each replica set’s configuration. This field is required for deployments with replica sets.
"replicaSets":
[
{
"_id": "<string>",
"protocolVersion": "<string>",
"members":
[
{
"_id": "<integer>",
"host": "<string>",
"arbiterOnly": "<boolean>",
"buildIndexes": "<boolean>",
"hidden": "<boolean>",
"priority": "<number>",
"tags": "<object>",
"secondaryDelaySecs": "<integer>",
"votes": "<number>"
},{
"_id": "<integer>",
"host": "<string>",
"arbiterOnly": "<boolean>",
"buildIndexes": "<boolean>",
"hidden": "<boolean>",
"priority": "<number>",
"tags": "<object>",
"secondaryDelaySecs": "<integer>",
"votes": "<number>"
},{
"_id": "<integer>",
"host": "<string>",
"arbiterOnly": "<boolean>",
"buildIndexes": "<boolean>",
"hidden": "<boolean>",
"priority": "<number>",
"tags": "<object>",
"secondaryDelaySecs": "<integer>",
"votes": "<number>"
}
],
"force":
{
"currentVersion": "<integer>"
}
}
]
Name | Type | Necessity | Description |
---|---|---|---|
replicaSets | array | Optional | Configuration of each replica set. The MongoDB Agent uses the values in this array to create valid replica set configuration documents. The agent regularly checks that replica sets are configured correctly. If a problem occurs, the agent reconfigures the replica set according to its configuration document. The array can contain the following top-level fields from a replica set configuration document: _id, version, and members. See also replSetGetConfig. |
replicaSets[n]._id | string | Required | The name of the replica set. |
replicaSets[n].protocolVersion | string | Optional | Protocol version of the replica set. |
replicaSets[n].members | array | Optional | Objects that define each member of the replica set. The members.host field must specify the host's name as listed in processes.name. The MongoDB Agent expands the host field to create a valid replica set configuration. See also replSetGetConfig. |
replicaSets[n].members[m]._id | integer | Optional | Any positive integer that indicates the member of the replica set. |
replicaSets[n].members[m].host | string | Optional | Hostname, and port number when applicable, that serves this replica set member. |
replicaSets[n].members[m].arbiterOnly | boolean | Optional | Flag that indicates whether this replica set member acts as an arbiter. |
replicaSets[n].members[m].buildIndexes | boolean | Optional | Flag that indicates whether the mongod process builds indexes on this replica set member. |
replicaSets[n].members[m].hidden | boolean | Optional | Flag that indicates whether the replica set hides this member from client applications. |
replicaSets[n].members[m].priority | number | Optional | Relative eligibility for Cloud Manager to select this replica set member as a primary. Larger numbers increase eligibility. This value can be between 0 and 1000, inclusive, for data-bearing nodes. Arbiters can have values of 0 or 1. |
replicaSets[n].members[m].tags | object | Optional | List of user-defined labels and their values applied to this replica set member. |
replicaSets[n].members[m].secondaryDelaySecs | integer | Optional | Amount of time in seconds that this replica set member should lag behind the primary. |
replicaSets[n].members[m].votes | number | Optional | Quantity of votes this replica set member can cast for a replica set election. All data bearing nodes can have 0 or 1 votes. Arbiters always have 1 vote. |
replicaSets[n].force | object | Optional | Instructions to the MongoDB Agent to force a replica set to use the Configuration Version specified in replicaSets.force.currentVersion. With this object, the MongoDB Agent can force a replica set to accept a new configuration to recover from a state in which a minority of its members are available. |
replicaSets[n].force.currentVersion | integer | Optional | Configuration Version that the MongoDB Agent forces the replica set to use. Set to -1 to force a replica set to accept a new configuration. Warning: Forcing a replica set reconfiguration might lead to a rollback of majority-committed writes. Proceed with caution. Contact MongoDB Support if you have questions about the potential impacts of this operation. |
The sharding array defines the configuration of each sharded cluster. This parameter is required for deployments with sharded clusters.
"sharding" : [
{
"managedSharding" : <boolean>,
"name" : "<string>",
"configServerReplica" : "<string>",
"collections" : [
{
"_id" : "<string>",
"key" : [
[ "shard key" ],
[ "shard key" ],
...
],
"unique" : <boolean>
},
...
],
"shards" : [
{
"_id" : "<string>",
"rs" : "<string>",
"tags" : [ "<string>", ... ]
},
...
],
"tags" : [
{
"ns" : "<string>",
"min" : [
{
"parameter" : "<string>",
"parameterType" : "<string>",
"value" : "<string>"
}
],
"max" : [
{
"parameter" : "<string>",
"parameterType" : "<string>",
"value" : "<string>"
}
],
"tag" : "<string>"
},
...
]
},
...
]
Name | Type | Necessity | Description |
---|---|---|---|
sharding | array of objects | Optional | Objects that define the configuration of each sharded cluster . Each object in the array contains the specifications for one cluster. The MongoDB Agent regularly checks each cluster’s state against the specifications. If the specification and cluster don’t match, the agent will change the configuration of the cluster, which might cause the balancer to migrate chunks. |
sharding.managedSharding | boolean | Conditional | Flag that indicates whether Cloud Manager Automation manages all sharded collections and tags in the deployment |
sharding.name | string | Conditional | Name of the cluster. This must correspond with the value in processes.cluster for a mongos. |
sharding.configServerReplica | string | Conditional | Name of the config server replica set. You can add this array parameter if your config server runs as a replica set. If you run legacy mirrored config servers that don't run as a replica set, use sharding.configServer. |
sharding.configServer | array of strings | Conditional | Names of the config server hosts. The host names match the names used in each host's processes.name parameter. If your sharded cluster runs MongoDB 3.4 or later, use sharding.configServerReplica. Important: MongoDB 3.4 removes support for mirrored config servers. |
sharding.collections | array of objects | Conditional | Objects that define the sharded collections and their shard keys . |
sharding.collections._id | string | Conditional | namespace of the sharded collection. The namespace is the combination of the database name and the name of the collection. For example, testdb.testcoll . |
sharding.collections.key | array of arrays | Conditional | Collection's shard keys. It contains: |
sharding.collections.unique | boolean | Conditional | Flag that indicates whether MongoDB enforces uniqueness for the shard key. |
sharding.shards | array of objects | Conditional | Cluster’s shards . |
sharding.shards._id | string | Conditional | Name of the shard. |
sharding.shards.rs | string | Conditional | Name of the shard’s replica set. This is specified in the replicaSets._id parameter. |
sharding.shards.tags | array of strings | Conditional | Zones assigned to this shard. You can add this array parameter if you use zoned sharding. |
sharding.tags | array of objects | Conditional | Definition of zones for zoned sharding. Each object in this array defines a zone and configures the shard key range for that zone. |
sharding.tags.ns | string | Conditional | Namespace of the collection that uses zoned sharding. The namespace combines the database name and the name of the collection. Example: testdb.testcoll |
sharding.tags.min | array | Conditional | Minimum value of the shard key range. Specify the field name, field type, and value in a document of the following form: { "field" : <string>, "fieldType" : <string>, "value" : <string> }. To use a compound shard key, specify each field in a separate document, as shown in the example after this table. For more information on shard keys, see Shard Keys in the MongoDB manual. |
sharding.tags.max | array | Conditional | Maximum value of the shard key range. Specify the field name, field type, and value in a document of the following form: { "field" : <string>, "fieldType" : <string>, "value" : <string> }. To use a compound shard key, specify each field in a separate document, as shown in the example after this table. For more information on shard keys, see Shard Keys in the MongoDB manual. |
sharding.tags.tag | string | Conditional | Name of the zone associated with the shard key range specified by sharding.tags.min and sharding.tags.max . |
Example
The sharding.tags Array with Compound Shard Key
The following example configuration defines a compound shard key range with a min value of { a : 1, b : ab } and a max value of { a : 100, b : fg } . The example defines the range on the testdb.test1 collection and assigns it to zone zone1 .
"tags" : [
{
"ns" : "testdb.test1",
"min" : [
{
"parameter" : "a",
"parameterType" : "integer",
"value" : "1"
},
{
"parameter" : "b",
"parameterType" : "string",
"value" : "ab"
}
],
"max" : [
{
"parameter" : "a",
"parameterType" : "integer",
"value" : "100"
},
{
"parameter" : "b",
"parameterType" : "string",
"value" : "fg"
}
],
"tag" : "zone1"
}
]
The balancer object is optional and defines balancer settings for each cluster.
"balancer": {
"<clusterName1>": {},
"<clusterName2>": {},
...
}
Name | Type | Necessity | Description |
---|---|---|---|
balancer | object | Optional | Parameters named according to clusters, each parameter containing an object with the desired balancer settings for the cluster. The object uses the stopped and activeWindow parameters, as described in the procedure to schedule the balancing window in this tutorial in the MongoDB manual. |
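For example, to keep the balancer for a cluster named myCluster (an illustrative name) enabled but restrict balancing to a nightly window, the object might look like this sketch, using the stopped and activeWindow fields described in the MongoDB manual:
"balancer" : {
  "myCluster" : {
    "stopped" : false,
    "activeWindow" : {
      "start" : "23:00",
      "stop" : "6:00"
    }
  }
}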
Cloud Manager doesn’t require the auth object. This object defines authentication-related settings.
{
"auth": {
"authoritativeSet": "<boolean>",
"autoUser": "<string>",
"autoPwd": "<string>",
"disabled": "<boolean>",
"deploymentAuthMechanisms": ["<string>", "<string>"],
"autoAuthMechanisms": ["<string>"],
"key": "<string>",
"keyfile": "<string>",
"newAutoPwd": "<string>",
"newKey": "<string>",
"usersDeleted": [{
"user": "<string>",
"dbs": ["<string>", "<string>"]
}],
"usersWanted": [{
"authenticationRestrictions": [{
"clientSource": ["(IP | CIDR range)", "(IP | CIDR range)"],
"serverAddress": ["(IP | CIDR range)", "(IP | CIDR range)"]
}],
"db": "<string>",
"initPwd": "<string>",
"otherDBRoles": {
"<string>": ["<string>", "<string>"]
},
"roles": [{
"db": "<string>",
"role": "<string>"
}],
"pwd": "<string>",
"user": "<string>"
}]
}
}
Name | Type | Necessity | Description |
---|---|---|---|
auth | object | Optional | Defines authentication-related settings. Note: If you omit this parameter, skip the rest of this section. |
auth.authoritativeSet | boolean | Conditional | Sets whether or not Cloud Manager enforces a consistent set of managed MongoDB users and roles in all managed deployments in the project. auth.authoritativeSet defaults to false. Required if "auth" : true. |
auth.autoUser | string | Conditional | Username that the Automation uses when connecting to an instance. Required if "auth" : true. |
auth.autoPwd | string | Conditional | Password that the Automation uses when connecting to an instance. Required if "auth" : true. |
auth.disabled | boolean | Optional | Flag indicating if auth is disabled. If not specified, disabled defaults to false. |
auth.deploymentAuthMechanisms | array of strings | Conditional | Lists the supported authentication mechanisms for the processes in the deployment. Required if "auth" : true. Specify: |
auth.autoAuthMechanisms | array of strings | Conditional | Sets the authentication mechanism used by the Automation. If not specified, disabled defaults to false. Required if "auth" : true. Note: This parameter contains more than one element only when it's configured for both SCRAM-SHA-1 and SCRAM-SHA-256. Specify: |
auth.key | string | Conditional | Contents of the key file that Cloud Manager uses to authenticate to the MongoDB processes. Required if "auth" : true and "auth.disabled" : false. Note: If you change the auth.key value, you must change the auth.keyfile value. |
auth.keyfile | string | Conditional | Path and name of the key file that Cloud Manager uses to authenticate to the MongoDB processes. Required if "auth" : true and "auth.disabled" : false. Note: If you change the auth.keyfile value, you must change the auth.key value. |
auth.newAutoPwd | string | Optional | New password that the Automation uses when connecting to an instance. To rotate passwords without losing the connection: Note: You can set this option only when you include SCRAM-SHA-1 or SCRAM-SHA-256 as one of the authentication mechanisms for the Automation in auth.autoAuthMechanisms. |
auth.newKey | string | Optional | Contents of a new key file that you want Cloud Manager to use to authenticate to the MongoDB processes. When you set this option, Cloud Manager rotates the key that the application uses to authenticate to the MongoDB processes in your deployment. When all MongoDB Agents use the new key, Cloud Manager replaces the value of auth.key with the new key that you provided in auth.newKey and removes auth.newKey from the automation configuration. |
auth.usersDeleted | array of objects | Optional | Objects that define the authenticated users to be deleted from specified databases or from all databases. This array must contain auth.usersDeleted.user and auth.usersDeleted.dbs. |
auth.usersDeleted[n].user | string | Optional | Username of the user that Cloud Manager should delete. |
auth.usersDeleted[n].dbs | array of strings | Optional | List of the names of the databases from which Cloud Manager should delete the authenticated user. |
auth.usersWanted | array of objects | Optional | Contains objects that define authenticated users to add to specified databases. Each object must have the auth.usersWanted[n].db, auth.usersWanted[n].user, and auth.usersWanted[n].roles parameters, and then have exactly one of the following parameters: auth.usersWanted[n].pwd, auth.usersWanted[n].initPwd, or auth.usersWanted[n].userSource. |
auth.usersWanted[n].db | string | Conditional | Database to which to add the user. |
auth.usersWanted[n].user | string | Conditional | Name of the user that Cloud Manager should add. |
auth.usersWanted[n].roles | array | Conditional | List of the roles to be assigned to the user from the user's database, which is specified in auth.usersWanted[n].db. |
auth.usersWanted[n].pwd | string | Conditional | 32-character hex SCRAM-SHA-1 hash of the password currently assigned to the user. Cloud Manager doesn't use this parameter to set or change a password. Required if: |
auth.usersWanted[n].initPwd | string | Conditional | Cleartext password that you want to assign to the user. Required if: |
auth.usersWanted[n].userSource | string | Deprecated | No longer supported. |
auth.usersWanted[n].otherDBRoles | object | Optional | If you assign the user's database "auth.usersWanted[n].db" : "admin", then you can use this object to assign the user roles from other databases as well. The object contains key-value pairs where the key is the name of the database and the value is an array of string values that list the roles to be assigned from that database. |
auth.usersWanted[n].authenticationRestrictions | array of documents | Optional | Authentication restrictions that the host enforces on the user. Warning: If a user inherits multiple roles with incompatible authentication restrictions, that user becomes unusable. For example, if a user inherits one role that restricts clientSource to one address and another role that restricts it to a different address, the host cannot authenticate the user. For more information about authentication in MongoDB, see Authentication. |
auth.usersWanted[n].authenticationRestrictions[k].clientSource | array of strings | Conditional | If present when authenticating a user, the host verifies that the given list contains the client's IP address or a CIDR range that includes it. If the client's IP address is not present, the host does not authenticate the user. |
auth.usersWanted[n].authenticationRestrictions[k].serverAddress | array of strings | Conditional | Comma-separated array of IP addresses to which the client can connect. If present, the host verifies that Cloud Manager accepted the client's connection from an IP address in the given array. If the connection was accepted from an unrecognized IP address, the host doesn't authenticate the user. |
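As a sketch, an auth object that enables SCRAM-SHA-256 and defines one managed user might look like the following; the usernames, passwords, key contents, and key file path are placeholders:
"auth" : {
  "disabled" : false,
  "authoritativeSet" : false,
  "autoUser" : "mms-automation",
  "autoPwd" : "<password>",
  "deploymentAuthMechanisms" : [ "SCRAM-SHA-256" ],
  "autoAuthMechanisms" : [ "SCRAM-SHA-256" ],
  "key" : "<key file contents>",
  "keyfile" : "/var/lib/mongodb-mms-automation/keyfile",
  "usersWanted" : [
    {
      "db" : "admin",
      "user" : "appAdmin",
      "initPwd" : "<cleartext password>",
      "roles" : [
        { "db" : "admin", "role" : "readWriteAnyDatabase" }
      ]
    }
  ]
}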
The ssl object enables TLS for encrypting connections. This object is optional.
"ssl" : {
"CAFilePath" : "<string>"
}
Name | Type | Necessity | Description |
---|---|---|---|
ssl | object | Optional | Enables TLS for encrypting connections. To use TLS, choose a package that supports TLS. All platforms that support MongoDB Enterprise also support TLS. |
ssl.clientCertificateMode | string | Conditional | Indicates whether connections to Cloud Manager require a TLS certificate. The values are OPTIONAL and REQUIRE. |
ssl.CAFilePath | string | Conditional | Absolute file path to the certificate used to authenticate through TLS on a Linux or UNIX host. Cloud Manager requires either ssl.CAFilePath or ssl.CAFilePathWindows if: |
ssl.CAFilePathWindows | string | Conditional | Absolute file path to the certificate used to authenticate through TLS on a Windows host. Cloud Manager requires either ssl.CAFilePath or ssl.CAFilePathWindows if: |
ssl.autoPEMKeyFilePath | string | Conditional | Absolute file path to the client private key (PEM) file that authenticates the TLS connection on a Linux or UNIX host. Cloud Manager requires either ssl.autoPEMKeyFilePath or ssl.autoPEMKeyFilePathWindows if you're using TLS or X.509 authentication. |
ssl.autoPEMKeyFilePathWindows | string | Conditional | Absolute file path to the client private key (PEM) file that authenticates the TLS connection on a Windows host. Cloud Manager requires either ssl.autoPEMKeyFilePath or ssl.autoPEMKeyFilePathWindows if you're using TLS or X.509 authentication. |
ssl.autoPEMKeyFilePwd | string | Conditional | Password for the private key (PEM) file specified in ssl.autoPEMKeyFilePath or ssl.autoPEMKeyFilePathWindows. Cloud Manager requires this password if the PEM file is encrypted. |
The roles array is optional and describes user-defined roles.
"roles" : [
{
"role" : "<string>",
"db" : "<string>",
"privileges" : [
{
"resource" : { ... },
"actions" : [ "<string>", ... ]
},
...
],
"roles" : [
{
"role" : "<string>",
"db" : "<string>"
}
],
"authenticationRestrictions" : [
{
"clientSource": [("<IP>" | "<CIDR range>"), ...],
"serverAddress": [("<IP>" | "<CIDR range>"), ...]
}, ...
]
},
...
]
Name | Type | Necessity | Description |
---|---|---|---|
roles | array of objects | Optional | Roles and privileges that MongoDB has assigned to a cluster’s user-defined roles. Each object describes a different user-defined role. Objects in this array contain the same fields as documents in the system roles collection, except for the _id field. |
roles[n].role | string | Conditional | Name of the user-defined role. |
roles[n].db | string | Conditional | Database to which the user-defined role belongs. |
roles[n].privileges | array of documents | Conditional | Privileges this role can perform. |
roles[n].privileges[k].resource | document | Conditional | Specifies the resources upon which the privilege actions apply. |
roles[n].privileges[k].actions | array of strings | Conditional |
Actions permitted on the resource. See also Privilege Actions |
roles[n].roles | array of documents | Conditional | Roles from which this role inherits privileges. |
roles[n].authenticationRestrictions | array of documents | Optional |
Authentication restrictions that the MongoDB server enforces on this role. Warning
If a user inherits multiple roles with incompatible authentication
restrictions, that user becomes unusable. For example, if a user
inherits one role that restricts clientSource to one range and another
role that restricts clientSource to a different, non-overlapping range,
the server cannot authenticate the user.
For more information about authentication in MongoDB, see Authentication. |
roles[n].authenticationRestrictions[k].clientSource | array of strings | Conditional | If present, when authenticating a user, the MongoDB server verifies that the client’s IP address is either in the given list or belongs to a CIDR range in the list. If the client’s IP address is not present, the MongoDB server does not authenticate the user. |
roles[n].authenticationRestrictions[k].serverAddress | array of strings | Conditional | Comma-separated array of IP addresses to which the client can connect. If present, the MongoDB server verifies that it accepted the client’s connection from an IP address in the given array. If the MongoDB server accepts a connection from an unrecognized IP address, the MongoDB server does not authenticate the user. |
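As an illustration, a single user-defined role entry that grants find access on one database, inherits the built-in read role, and restricts authentication to a private subnet could look like the following sketch. The role, database, and network values are hypothetical placeholders.
"roles" : [
  {
    "role" : "appReadOnly",
    "db" : "admin",
    "privileges" : [
      {
        "resource" : { "db" : "app", "collection" : "" },
        "actions" : [ "find" ]
      }
    ],
    "roles" : [
      { "role" : "read", "db" : "app" }
    ],
    "authenticationRestrictions" : [
      {
        "clientSource" : [ "10.0.0.0/8" ]
      }
    ]
  }
]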
The kerberos object is optional and defines the Kerberos service name used in authentication.
"kerberos": {
"serviceName": "<string>"
}
Name | Type | Necessity | Description |
---|---|---|---|
kerberos | object | Optional | Key-value pair that defines the Kerberos service name that agents use to authenticate via Kerberos. |
kerberos.serviceName | string | Required |
Label that sets:
|
The indexConfigs array is optional and defines indexes to be built for specific replica sets.
"indexConfigs": [{
"key": [
["<string>", "<value>"]
],
"rsName": "<string>",
"dbName": "<string>",
"collectionName": "<string>",
"collation": {
"locale": "<string>",
"caseLevel": <boolean>,
"caseFirst": "<string>",
"strength": <number>,
"numericOrdering": <boolean>,
"alternate": "<string>",
"maxVariable": "<string>",
"normalization": <boolean>,
"backwards": <boolean>
},
"options": {
"<key>": "<value>"
}
}]
Name | Type | Necessity | Description | |||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|
indexConfigs | array of objects | Optional | Specific indexes to be built for specific replica sets. | |||||||||
indexConfigs.key | array of arrays | Required | Keys in the index. This “array of arrays” contains a single array if the index has just one key. | |||||||||
indexConfigs.rsName | string | Required | Replica set on which MongoDB builds the index. | |||||||||
indexConfigs.dbName | string | Required | Database that MongoDB indexes. | |||||||||
indexConfigs.collectionName | string | Required | Collection that MongoDB indexes. | |||||||||
indexConfigs.collation | object | Optional |
Language-specific rules to use when sorting and matching strings if the index uses collation. If you include the indexConfigs.collation object, you must include the indexConfigs.collation.locale parameter. All other parameters are optional. If you don’t include the indexConfigs.collation object, the index can’t include collation. |
|||||||||
indexConfigs.collation.locale | string | Required |
Locale that the ICU defines. The MongoDB Server Manual lists the supported locales in its Collation Locales and Default Parameters section. To specify simple binary comparison, set this value to simple . |
|||||||||
indexConfigs.collation.caseLevel | boolean | Optional |
Flag that indicates how the index uses case comparison. If you set this parameter to true , the index uses case comparison. This parameter applies only if you set indexConfigs.collation.strength to 1 or 2 . See also Collation |
|||||||||
indexConfigs.collation.caseFirst | string | Optional |
Sort order of case differences during tertiary level comparisons. The MongoDB Server Manual lists the possible values in its Collation section. |
|||||||||
indexConfigs.collation.strength | number | Optional |
Level of comparison to perform. Corresponds to ICU Comparison Levels. The MongoDB Server Manual lists the possible values in its Collation section. |
|||||||||
indexConfigs.collation.numericOrdering | boolean | Optional |
Flag that indicates how to compare numeric strings.
The default is false . See also Collation |
|||||||||
indexConfigs.collation.alternate | string | Optional |
Setting that determines how collation should consider whitespace and punctuation as base characters during comparisons. The MongoDB Server Manual lists the possible values in its Collation section. |
|||||||||
indexConfigs.collation.maxVariable | string | Optional |
Characters the index can ignore. This parameter applies only if indexConfigs.collation.alternate is set to shifted . The MongoDB Server Manual lists the possible values in its Collation section. |
|||||||||
indexConfigs.collation.normalization | boolean | Optional |
Flag that indicates if the text should be normalized. If you set this parameter to true , collation:
The default is false . See also Collation |
|||||||||
indexConfigs.collation.backwards | boolean | Optional |
Flag that indicates how the index should handle diacritic strings. If you set this parameter to true , strings with diacritics sort from the back to the front of the string. The default is false . See also Collation |
|||||||||
indexConfigs.options | document | Required | Index options that the MongoDB Go Driver supports. |
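For example, an indexConfigs entry that builds a compound, case-insensitive index on one replica set might look like the following sketch. The replica set, database, collection, field, and index names are placeholders, not values from this document.
"indexConfigs" : [
  {
    "key" : [
      [ "lastName", "1" ],
      [ "firstName", "1" ]
    ],
    "rsName" : "rs0",
    "dbName" : "app",
    "collectionName" : "customers",
    "collation" : {
      "locale" : "en",
      "strength" : 2
    },
    "options" : {
      "name" : "lastName_firstName_ci"
    }
  }
]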
Note
Groups and projects are synonymous terms. Your {PROJECT-ID} is the same as your project ID. For existing groups, your group/project ID remains the same. This page uses the more familiar term group when referring to descriptions. The endpoints remain as stated in the document.
If you encounter an error when issuing a request to the Cloud Manager Administration API, Cloud Manager returns one of the following error codes:
Error | HTTP Code | Description |
---|---|---|
|
402 | Group has an unpaid invoice that is more than 30 days old. |
|
400 |
Acknowledgement comment too long. It must not exceed
<number>
characters.
|
|
409 |
The address
<address>
is already on the whitelist.
|
|
404 |
No alert configuration with ID
<ID>
exists in group
<group>
.
|
|
404 |
No alert with ID
<ID>
exists in group
<group>
.
|
|
401 | API Keys cannot create groups . |
|
401 | API Keys cannot create organizations . |
|
400 | No API Key with ID {API-KEY-ID} exists. |
|
400 | API Key whitelists are only accessible by the API Key itself or by a user administrator. |
|
404 | The specified IP address does not exist in the corresponding API Key whitelist. |
|
400 |
The attribute
<attribute>
cannot be negative or zero.
|
|
400 |
The attribute
<attribute>
cannot be negative.
|
|
400 |
The attribute
<attribute>
is read-only and cannot be
changed by the user.
|
|
400 |
Authentication mechanism
<mechanism>
requires SSL.
|
|
404 |
No automation configuration exists for group
<group>
.
|
|
404 |
No backup configuration exists for cluster
<cluster>
in
group
<group>
.
|
|
400 |
User
<username>
is not in group
<group>
.
|
|
400 |
No user with username
<username>
exists.
|
|
400 | Should not specify both the IP address and the CIDR block. |
|
400 |
The specified username
<username>
is not allowed.
|
|
400 |
The specified address cannot be added to whitelists.
Cloud Manager does not allow certain IP addresses to be
whitelisted, such as
0.0.0.0/32
.
|
|
403 | Adding a global role is not supported. |
|
403 | Current user is not authorized to change group name. |
|
409 | Cannot close account while the group has active backups; please terminate all backups. |
|
402 | Cannot close account because there are failed invoices. |
|
403 | Cannot individually delete a snapshot that is part of a cluster snapshot. |
|
403 | Cannot remove the last owner from the group. If you are trying to close the group by removing all users, please delete the group instead. |
|
403 | Cannot demote the last owner of the organization. |
|
403 | Cannot demote the last owner of the group. |
|
400 | Cannot distribute subnets. There must be at least one subnet available. |
|
403 |
Cannot download a log collection request job in the
EXPIRED
state.
|
|
403 |
Cannot download a log collection request job in the
IN_PROGRESS
state.
|
|
403 | Cannot extend duration of logs that have already expired. |
|
409 | Cannot get backup configuration without cluster being monitored. |
|
500 |
Cannot get volume size limits for volume type
<type>
.
|
|
403 |
Cannot modify host
<host>
because it is managed by
Automation.
|
|
409 |
Cannot modify backup configuration for individual shard; use
cluster ID
<ID>
for entire cluster.
|
|
400 |
Cannot remove caller’s IP address
<address>
from
whitelist.
|
|
409 | Username and password cannot be manually set for a managed cluster. |
|
400 | Cluster checkpoint interval can only be set for sharded clusters, not replica sets. |
|
400 |
Username and password fields are only supported for
authentication mechanism
MONGODB_CR
or
PLAIN
.
|
|
400 |
Cannot change password unless authentication mechanism is
MONGODB_CR
or
PLAIN
.
|
|
400 | Setting the point in time window is not allowed. |
|
400 | Setting the reference point time of day is not allowed. |
|
409 |
Cannot start backup unless the cluster is in the
INACTIVE
or
STOPPED
state.
|
|
402 | Cannot start backup without providing billing information. |
|
409 | Cannot start restore job for deleted cluster snapshot. |
|
409 | Cannot start restore job for deleted snapshot. |
|
409 | Cannot start restore job for incomplete cluster snapshot. |
|
409 | Cannot stop backup unless the cluster is in the STARTED state. |
|
409 | Cannot terminate backup unless the cluster is in the STOPPED state. |
|
404 |
No checkpoint with ID
<ID>
exists for cluster
<cluster>
.
|
|
404 |
No cluster with ID
<ID>
exists in group
<group>
.
|
|
404 |
No restore job with ID
<ID>
exists for config server
<config
server>
.
|
|
404 |
No snapshot with ID
<ID>
exists for config server
<config
server>
.
|
|
400 |
Metric
<metric>
requires a database name to be provided.
|
|
404 |
No database with name
<name>
exists on host
<host>
.
|
|
400 | The limit check failed while trying to add the requested resource. Please try again. |
|
400 |
Failed to send an invitation to
<username>
to join
<group>
.
|
|
400 |
Metric
<metric>
requires a device name to be provided.
|
|
404 |
No device with name
<name>
exists on host
<host>
.
|
|
400 | The domain name for the machine is too long. Try shortening the hostname prefix. |
|
400 | Two or more of the IP addresses being added to the whitelist are the same. |
|
400 | Email and/or SMS must be enabled for group notifications. |
|
400 | Email and/or SMS must be enabled for user notifications. |
|
400 | Expiration date for log collection request job must be in the future. |
|
400 | Expiration date for log collection request job can only be as far as 6 months in the future. |
|
402 | Cannot close account due to a charge failure. |
|
403 | Feature not supported by current account level. |
|
400 | Timestamp must be whole number of seconds. |
|
400 |
The specified event type
<type>
can only be used for
global alerts.
|
|
409 |
A group with name
<name>
already exists.
|
|
404 |
No group with
API
Key
<key>
exists.
|
|
400 |
The specified group ID
<ID>
does not match the URL.
|
|
404 |
No group with name
<name>
exists.
|
|
404 |
No group with ID
<ID>
exists.
|
|
404 |
No last ping exists for host
<host>
in group
<group>
.
|
|
404 |
No host with ID
<ID>
exists in group
<group>
.
|
|
404 |
No host with hostname and port
<name:port>
exists in
group
<group>
.
|
|
400 | Instance must be created with exactly one SSH-enabled security group. |
|
400 |
An invalid agent type name
<name>
was specified.
|
|
404 |
An invalid alert configuration ID
<ID>
was specified.
|
|
404 |
An invalid alert ID
<ID>
was specified.
|
|
400 |
An invalid alert status
<status>
was specified.
|
|
400 |
Invalid attribute
<attribute>
specified.
|
|
400 |
Invalid authentication mechanism
<mechanism>
.
|
|
400 |
An invalid authentication type name
<name>
was specified.
|
|
404 |
An invalid checkpoint ID
<ID>
was specified.
|
|
400 | Cluster checkpoint interval must be 15, 30, or 60 minutes. |
|
404 |
An invalid cluster ID
<ID>
was specified.
|
|
400 | Daily snapshot retention period must be between 1 and 365 days. |
|
400 |
An invalid directory name
<name>
was specified.
|
|
400 | An invalid email address was specified. |
|
400 |
An invalid enumeration value
<value>
was specified.
|
|
400 |
Event type
<type>
not supported for alerts.
|
|
400 | Backup configuration cannot specify both included namespaces and excluded namespaces. |
|
400 |
An invalid granularity
<granularity>
was specified.
|
|
404 |
An invalid group ID
<ID>
was specified.
|
|
400 | Group name cannot contain “10gen-” or “-10gen”. |
|
400 |
An invalid group name
<name>
was specified.
|
|
400 |
A group tag must be a string (alphanumeric, periods,
underscores, and dashes) of length
<MAX_TAG_LENGTH>
characters or less.
|
|
400 |
Invalid host port
<number>
.
|
|
400 |
Invalid hostname prefix
<prefix>
. It must contain only
alphanumeric characters and hyphens, may not begin or end
with a hyphen (“-“), and must not be more than 63 characters
long.
|
|
400 |
Invalid hostname
<name>
.
|
|
400 |
Invalid instance count
<number>
. It must be between
<number>
and
<number>
.
|
|
400 |
Invalid instance type
<type>
. It must be one of the
listed instance types returned in the machine configuration
options.
|
|
400 |
The IOPS value
<number>
is not valid. The maximum ratio
between the IOPS value and the volume size is 30 : 1.
|
|
400 |
The IOPS value
<number>
is not valid. It must be between
the minimum and maximum values returned in the machine
configuration options.
|
|
404 |
An invalid restore job ID
<ID>
was specified.
|
|
400 |
Received JSON for the
<attribute>
attribute does not
match expected format.
|
|
400 | Received JSON does not match expected format. |
|
404 |
An invalid key ID
<ID>
was specified.
|
|
400 | Log request size must be a positive number. |
|
404 |
An invalid machine ID
<ID>
was specified.
|
|
400 | The specified machine image is invalid. |
|
404 |
An invalid metric name
<name>
was specified.
|
|
400 |
The username
<username>
is not a valid MongoDB login.
|
|
400 | Monthly snapshot retention period must be between 1 and 36 months. |
|
400 |
An invalid mount location
<location>
was specified. The
mount location must be equal to or a parent of
<location>
.
|
|
400 |
Operator
<operator>
is not compatible with event type
<type>
.
|
|
400 | An invalid period was specified. |
|
400 |
Invalid parameter combination specified for provider
<provider>
.
|
|
400 |
Invalid query parameter
<parameter>
specified.
|
|
400 | Snapshot schedule reference hour must be between 0 and 23, inclusive. |
|
400 | Snapshot schedule reference minute must be between 0 and 59, inclusive. |
|
400 | Snapshot schedule timezone offset must conform to ISO-8601 time offset format, such as “+0000”. |
|
400 |
No region
<region>
exists for provider
<provider>
.
|
|
400 |
Role
<role>
is invalid for group
<group>
.
|
|
400 |
Invalid root volume size
<number>
. It must be between the
minimum and maximum values returned in the machine
configuration options.
|
|
400 |
Security group
<group>
is invalid. It must be one of the
security groups returned in the machine configuration
options.
|
|
404 |
An invalid snapshot ID
<ID>
was specified.
|
|
400 | Snapshot interval must be 6, 8, 12, or 24 hours. |
|
400 | Snapshot retention period must be between 1 and 5 days. |
|
400 | An invalid SSH key was specified. |
|
404 |
An invalid user ID
<ID>
was specified.
|
|
400 | The specified username is not a valid email address. |
|
400 |
No user
<username>
exists.
|
|
400 |
Invalid volume name
<name>
. It must be one of the listed
volume names returned in the machine configuration options.
|
|
400 |
Invalid or unavailable VPC
<VPC>
or subnet
<subnet>
.
|
|
400 | Weekly snapshot retention period must be between 1 and 52 weeks. |
|
404 |
An invalid maintenance window ID
<ID>
was specified.
|
|
400 |
No zone
<zone>
exists for region
<region>
.
|
|
403 |
IP address
<address>
is not allowed to access this
resource.
|
|
404 |
No last ping exists for group
<group>
.
|
|
409 | Cannot set HTTP link expiration time after snapshot deletion time. |
404 | No job with the given ID exists in this group. | |
|
400 |
No machine configuration parameters exist for provider
<provider>
.
|
|
404 |
No maintenance window with ID
<ID>
exists in group
<group>
.
|
|
400 | Maintenance window configurations must specify a start date before their end date. |
|
400 |
Maximum number of users per group (
<number>
) in
<ID>
exceeded while trying to add users.
|
|
400 |
Maximum number of users per organization (
<number>
) in
<ID>
exceeded while trying to add users.
|
|
400 |
Maximum number of teams per group (
<number>
) in
<ID>
exceeded while trying to add teams.
|
|
400 | Maximum number of Cloud Manager users per team exceeded while trying to add users. Teams are limited to 250 users. |
|
400 | Maximum number of teams per organization exceeded while trying to add team. Organizations are limited to 250 teams. |
|
400 | The metric threshold should only be specified for host metric alerts. |
|
404 | No alert configuration ID was found. |
|
400 |
The required attribute
<attribute>
was not specified.
|
|
400 |
The attributes
<attribute>
and
<attribute>
must be
specified for authentication type
<type>
.
|
|
400 |
Authentication mechanism
<mechanism>
requires username
and password.
|
|
400 | Maintenance window configurations must specify at least one alert type. |
|
400 | Maintenance window configurations must specify an end date. |
|
400 | Maintenance window configurations must specify a start date. |
|
400 | A metric threshold must be specified for host metric alerts. |
|
400 | At least one notification must be specified for an alert configuration. |
|
400 |
Either the
<attribute>
attribute or the
<attribute>
attribute must be specified.
|
|
400 |
Either the
<attribute>
attribute, the
<attribute>
attribute, or the
<attribute>
attribute must be
specified.
|
|
400 |
The required attribute
<attribute>
was incorrectly
specified or omitted.
|
|
400 | Username cannot be changed without specifying password. |
|
400 |
The required query parameter
<parameter>
was not
specified.
|
|
400 | Group notifications cannot specify an empty list of roles. |
|
409 | Changing the storage engine will require a resync, so a sync source must be provided. |
|
400 | A threshold must be specified for member health alerts. |
|
409 | Multiple groups exist with the specified name. |
|
400 |
Either the
<parameter>
query parameter or the
<parameter>
query parameter but not both should be
specified.
|
|
409 | A suitable checkpoint could not be found for the specified point-in-time restore. |
|
401 | No current user. |
|
403 | The API is not supported for the Free Tier of Cloud Manager. |
|
409 |
No group SSH key exists for group
<group>
.
|
|
402 |
No payment information was found for group
<group>
.
|
|
400 |
Could not retrieve availability zones from
<account>
account.
|
|
400 |
Could not retrieve available instance types from
<account>
account.
|
|
400 |
Could not retrieve security groups from
<account>
account.
|
|
404 |
No SSH keys found in group
<group>
.
|
|
400 | The specified metric requires a nonzero delay for all notifications. |
|
404 |
Host
<host>
is not an SCCC config server.
|
|
404 |
Metric
<metric>
is neither a database nor a disk metric.
|
|
401 | The currently logged in user does not have the global user administrator role. |
|
401 |
The currently logged in user does not have the user
administrator role in group
<group>
.
|
|
401 | The current user is not in the group, or the group does not exist. |
|
401 |
The currently logged in user does not have the administrator
role in organization
<organization>
.
|
|
400 | Only sharded clusters and replica sets can be patched. |
|
401 |
The currently logged in user does not have the user
administrator role for any group, team, or organization
containing user
<username>
.
|
|
400 | Notifications must have an interval of at least 5 minutes. |
|
400 | At least one notification is a type that is only available for global alert configurations. |
|
400 |
A log collection request job can only be restarted if it is
in the
FAILED
state.
|
|
404 |
No organization with ID
<ID>
exists.
|
|
401 |
Account failed to authenticate with
<credentials>
.
|
|
404 |
No provider configuration with ID
<ID>
exists for
provider
<provider>
.
|
|
404 |
No provider configuration exists for provider
<provider>
.
|
|
404 |
No provider
<provider>
exists.
|
|
404 |
Provider
<provider>
not currently supported.
|
|
404 |
No provision machine job with ID
<ID>
exists in group
<group>
.
|
|
409 |
Provisioned machine with ID
<ID>
could not terminate
because a MongoDB process, Monitoring, or Backup
is currently running on the machine.
|
|
404 |
No provisioned machine with ID
<ID>
exists in group
<group>
.
|
|
500 | Unable to retrieve configuration options from the provider. |
|
429 |
Resource
<resource>
is limited to
<number>
requests
every
<number>
minutes.
|
|
400 |
Rate limit of
<number>
invitations per
<number>
minutes exceeded.
|
|
404 |
Cannot find resource
<resource>
.
|
|
404 |
No restore job with ID
<ID>
exists in group
<group>
.
|
|
404 |
No restore job with ID
<ID>
exists for cluster
<cluster>
.
|
|
400 |
Group-specific role
<role>
requires a group ID.
|
|
400 |
Global role
<role>
cannot be specified with a group ID.
|
|
400 |
Role
<role>
cannot be specified with an organization ID.
|
|
400 |
Role
<role>
requires an organization ID.
|
|
403 | Roles specified for user. |
|
404 |
No snapshot with ID
<ID>
exists for cluster
<cluster>
.
|
|
409 |
An SSH key with the name
<name>
already exists.
|
|
404 |
No SSH key with name
<name>
exists.
|
|
404 |
No SSH key with ID
<ID>
exists.
|
|
400 | A threshold should only be present for member health alerts. |
|
400 | At most one group notification can be specified for an alert configuration. |
|
400 |
Groups are limited to
<MAX_TAGS_PER_GROUP>
tags.
|
|
400 |
Mode
TOTAL
is no longer supported.
|
|
500 | Unexpected error. |
|
400 | Threshold units cannot be converted to metric units. |
|
Automation agent version is less than the accepted minimum version. | |
|
400 | The specified delivery method is not supported. |
|
403 | Operation not supported for current configuration. |
|
403 | Operation not supported for current plan. |
|
400 |
Notification type
<type>
is unsupported.
|
|
403 |
Setting the backup state to
<state>
is not supported.
|
|
409 | Cluster checkpoint interval not supported by the Backup version; please upgrade . |
|
409 | Excluded namespaces are not supported by this Backup version; please upgrade. |
|
409 | Included namespaces are not supported by this Backup version; please upgrade . |
|
409 |
A user with username
<username>
already exists.
|
|
404 |
No user with ID
<ID>
exists.
|
|
404 |
User
<username>
is not in group
<group>
.
|
|
401 | Current user is not authorized to perform this action. |
|
404 |
No user with username
<username>
exists.
|
|
400 |
Volume encryption is not available on instances of type
<type>
.
|
|
400 |
Volume optimization is not available on instances of type
<type>
.
|
|
400 | The specified password is not strong enough. |
|
400 | Webhook URL must be set in the group before adding webhook notifications. |
|
401 |
Cannot access whitelist for user
<username>
, which is not
currently logged in.
|
|
404 |
IP address
<address>
not on whitelist for user
<username>
.
|
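For reference, the API returns these errors in the response body as a JSON document. The sketch below shows the typical shape of that envelope; the field values are placeholders, and the exact error code string depends on the error returned.
{
  "detail" : "No group with ID <ID> exists.",
  "error" : 404,
  "errorCode" : "<ERROR_CODE>",
  "parameters" : [ "<ID>" ],
  "reason" : "Not Found"
}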
Cloud Manager provides a wizard for adding your existing MongoDB deployments to monitoring and management. The wizard prompts you to:
Install Automation if it doesn’t already exist
Identify the sharded cluster , the replica set , or the standalone to add. You can choose to add the deployment to Monitoring or to both Monitoring and Automation .
If you are adding a deployment that you intend to live migrate to Atlas, you need to add the deployment (and its credentials) only for Monitoring .
Deployments must have unique names within a project.
Important
Replica set, sharded cluster, and shard names within the same project must be unique. If deployment names are not unique, backup snapshots will break.
Automation doesn’t support all MongoDB options. To review which options are supported, see MongoDB Settings that Automation Supports .
If you enable TLS , the FQDN for the host serving a MongoDB process must match the SAN for the TLS certificate on that host.
Caution
To prevent man-in-the-middle attacks, keep the scope of TLS certificates as narrow as possible. Although you can use one TLS certificate with many SANs , or a wildcard TLS certificate on each host, you should not. To learn more, see RFC 2818, section 3.1 .
Set up a preferred hostname if you:
To learn more, see the Preferred Hostnames setting in Project Settings .
If you are adding an existing MongoDB process that runs as a Windows Service to Automation, Automation:
If the Cloud Manager project has MongoDB authentication settings enabled for its deployments, the MongoDB deployment you import must support the project’s authentication mechanism.
We recommend that you import to a new destination project that has no running processes and doesn’t have authentication enabled.
If the source cluster uses authentication, and the destination Cloud Manager project doesn’t have any existing managed processes, Cloud Manager enables authentication in the destination project, imports the existing keyfile from the source cluster, and uses it to authenticate the user that conducts the import process.
If the source cluster and the destination Cloud Manager project both use authentication, and the project has processes, Cloud Manager attempts to use existing authentication settings in the destination project during the import process. For the import process to succeed, authentication credentials on the source cluster and the Cloud Manager destination project must be the same.
To ensure that import is successful, before you start the import process, add the Cloud Manager destination project’s credentials on the source cluster. To learn more, see Rotate Keys for Replica Set or Rotate Keys for Sharded Clusters.
If your MongoDB deployment requires authentication, when you add the deployment to Cloud Manager for monitoring, you must provide the necessary credentials .
To add the deployment to Automation, you must add the mms-automation user to the database processes you imported and add the user’s credentials to Cloud Manager. To learn more, see Add Credentials for Automation .
Adding a MongoDB deployment to automation may affect the security settings of the Cloud Manager project and the MongoDB deployment.
Automation enables the Project Security Setting . If the MongoDB deployment requires authentication but the Cloud Manager project doesn’t have authentication settings enabled, when you add the MongoDB deployment to automation, Cloud Manager updates the project’s security settings to the security settings of the newly imported deployment.
The import process only updates the Cloud Manager project’s security setting if the project’s security setting is currently disabled. The import process doesn’t disable the project’s security setting or change its enabled authentication mechanism.
Automation Imports MongoDB Users and Roles . The following statements apply to situations where a MongoDB deployment requires authentication or the Cloud Manager project has authentication settings enabled.
If the MongoDB deployment contains users or user-defined roles, you can choose to import these users and roles for Cloud Manager to manage. The imported users and roles are Synced to all managed deployments in the Cloud Manager project.
If you select Yes , Cloud Manager deletes from the MongoDB deployments those users and roles that are not imported.
If you select No , Cloud Manager stops managing non-imported users and roles in the project. These users and roles remain in the MongoDB deployment. To manage these users and roles, you must connect directly to the MongoDB deployment.
If you don’t want the Cloud Manager project to manage specific users and roles, use the Authentication & Users and Authentication & Roles pages to remove these users and roles during import before you confirm and deploy the changes. To learn more, see Manage or Unmanage MongoDB Users .
If the imported MongoDB deployment already has mms-backup-agent and mms-monitoring-agent users in its admin database, the import process overrides these users’ roles with the roles for the mms-backup-agent and mms-monitoring-agent users as set in the Cloud Manager project.
Automation Applies to All Deployments in the Project . The project’s updated security settings, including all users and roles managed by the Cloud Manager project, apply to all deployments in the project, including the imported MongoDB deployment.
Cloud Manager restarts all deployments in the project with the new setting, including the imported MongoDB deployment. After import, all deployments in the project use the Cloud Manager automation keyfile upon restart.
The deployment that you import must use the same keyfile as the existing processes in the destination project or the import process may not proceed. To learn more, see Authentication Credentials on Source and Destination Clusters .
If the existing deployments in the project require a different security profile from the imported process, create a new project into which you can import the source MongoDB deployment.
The following examples apply to situations where the MongoDB deployment requires authentication or the Cloud Manager project has authentication settings enabled.
If you import the MongoDB users and custom roles, once the Cloud Manager project begins to manage the MongoDB deployment, the following happens, regardless of the Enforce Consistent Set value:
Synced set to Yes .
If you don’t import the MongoDB users and custom roles, once the Cloud Manager project begins to manage the MongoDB deployment, the following happens:
If Enforce Consistent Set is set to Yes :
Synced set to Yes .
If Enforce Consistent Set is set to No :
Synced set to Yes .
If mongod is enabled as a service on the deployment, a race condition might result where systemd starts mongod on reboot rather than the Automation. To prevent this issue, ensure that the mongod service is disabled before you add your deployment to Automation:
sudo systemctl is-enabled mongod.service
sudo systemctl disable mongod.service
If the Cloud Manager project doesn’t have authentication settings enabled but the MongoDB process requires authentication, add the MongoDB Agent user for the Cloud Manager project with the appropriate roles. The import process displays the required roles for the user. The added user becomes the project’s MongoDB Agent user.
If the Cloud Manager project has authentication settings enabled, add the Cloud Manager project’s MongoDB Agent user to the MongoDB process.
To find the MongoDB Agent user, click Deployments , then Security , then Users .
To find the password for the Cloud Manager project’s MongoDB Agent user, use one of the following methods:
Follow the steps in the Add MongoDB Processes procedure to launch the wizard in the UI. When you reach the modal that says Do you want to add automation to this deployment? :
Use the Automation Configuration Resource endpoint:
curl --user "{username}:{apiKey}" --digest \
--header "Accept: application/json" \
--include \
--request GET "<host>/api/public/v1.0/groups/<Group-ID>/automationConfig"
Open the mmsConfigBackup file in your preferred text editor and find the autoPwd value.
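Within that file (or in the automationConfig response above), the agent credentials typically appear under the auth object. The fragment below is a sketch based on the standard automation configuration field names; the password value is a placeholder.
"auth" : {
  "autoUser" : "mms-automation",
  "autoPwd" : "<password>"
}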
Example
If the Cloud Manager project has the Username/Password mechanism selected for its authentication settings, add the project’s Cloud Manager MongoDB Agent user mms-automation to the admin database in the MongoDB deployment that you want to import.
db.getSiblingDB("admin").createUser(
  {
    user: "mms-automation",
    pwd: <password>,
    roles: [
      'clusterAdmin',
      'dbAdminAnyDatabase',
      'readWriteAnyDatabase',
      'userAdminAnyDatabase',
      'restore',
      'backup'
    ]
  }
)
The import process requires that the authentication credentials and keyfiles are the same on the source and destination clusters. To learn more, see Authentication Credentials on Source and Destination Clusters .
Important
If you are adding a sharded cluster, you must create this user through the mongos and on every shard. That is, create the user both as a cluster-wide user through the mongos and as a shard-local user on each shard.
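As a quick sanity check after creating the user, you can confirm that it exists on each member you connect to (the mongos and each shard). The following mongo shell one-liner is an optional verification sketch, not part of the required procedure.
db.getSiblingDB("admin").getUser("mms-automation")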
To add existing MongoDB processes to Cloud Manager:
After you add an existing MongoDB process to Cloud Manager, you might have to add authentication credentials for the new deployment if authentication is enabled for the project into which you imported it. See Authentication Use Cases to learn in which situations you must add Automation, Monitoring, or Backup credentials for your new deployment.
If you are adding a deployment that you intend to live migrate to Atlas, you need to add the deployment (and its credentials) only for Monitoring .
Select the authentication mechanism that you want to use:
To add credentials for a deployment that will use Automation but didn’t use it before you imported it to Cloud Manager:
The MongoDB Agent user performs automation tasks for your MongoDB databases. Make sure this MongoDB user has the proper privileges .
Setting | Value |
---|---|
MongoDB Agent Username | Enter the MongoDB Agent username. |
MongoDB Agent Password | Enter the password for MongoDB Agent Username. |
Setting | Value |
---|---|
MongoDB Agent LDAP Username | Enter the LDAP username. |
MongoDB Agent LDAP Password | Enter the password for MongoDB Agent’s LDAP Username. |
MongoDB Agent LDAP Group DN |
Enter the Distinguished Name for the MongoDB Agent’s LDAP Group. Note Provide the MongoDB Agent’s LDAP Group DN only if you use LDAP Authorization. Each MongoDB Agent should have and use its own LDAP Group DN . |
The required values depend upon whether you are connecting to a Linux-served KDC or Windows Active Directory Server.
Setting | Value |
---|---|
MongoDB Agent Kerberos Principal | Kerberos Principal. |
MongoDB Agent Keytab Path | Absolute file path to the MongoDB Agent’s keytab. |
MongoDB Agent LDAP Group DN |
Enter the Distinguished Name for the MongoDB Agent’s LDAP Group. The LDAP Group DN is then created as a role in MongoDB to grant the MongoDB Agent the appropriate privileges. Note You only need to provide the LDAP Group DN if you use LDAP Authorization. |
Setting | Value |
---|---|
MongoDB Agent Username | Active Directory user name. |
MongoDB Agent Password | Active Directory password. |
Domain | NetBIOS name of a domain in Active Directory Domain Services. Must be in all capital letters. |
Setting | Value |
---|---|
MongoDB Agent Username | Enter the LDAP v3 distinguished name derived from the MongoDB Agent’s PEM Key file. |
TLS/SSL CA File Path | The path on disk that contains the trusted certificate authority (CA) certificates in PEM format. These certificates verify the server certificate returned from any MongoDB instances running with TLS / SSL . You must enter at least one TLS / SSL CA file path. |
MongoDB Agent PEM Key file |
If your MongoDB deployment requires client certificates, on
the line for the appropriate operating system, provide the
path and
.pem
filename for the client certificate used by
the MongoDB Agent’s
PEM
Key file on the server.
You must enter a value for at least one MongoDB Agent PEM
Key File.
|
MongoDB Agent PEM Key Password | Provide the password to the PEM Key file if it was encrypted. |
MongoDB Agent LDAP Group DN |
Enter the Distinguished Name for the MongoDB Agent’s LDAP Group. Note You only need to provide MongoDB Agent’s LDAP Group DN if you use LDAP Authorization. |
To add credentials for a deployment that will not use Automation but will use Monitoring:
Setting | Value |
---|---|
Monitoring Username | Enter the Monitoring username. |
Monitoring Password | Enter the password for Monitoring Username. |
Setting | Value |
---|---|
Monitoring LDAP Username | Enter the LDAP username. |
Monitoring LDAP Password | Enter the password for Monitoring’s LDAP Username. |
Monitoring LDAP Group DN |
Enter the Distinguished Name for the Monitoring’s LDAP Group. Note Provide the Monitoring’s LDAP Group DN only if you use LDAP Authorization. Each Monitoring should have and use its own LDAP Group DN . |
The required values depend upon whether you are connecting to a Linux-served KDC or Windows Active Directory Server.
Setting | Value |
---|---|
Monitoring Kerberos Principal | Kerberos Principal. |
Monitoring Keytab Path | Absolute file path to the Monitoring’s keytab. |
Monitoring LDAP Group DN |
Enter the Distinguished Name for the Monitoring’s LDAP Group. The LDAP Group DN is then created as a role in MongoDB to grant the Monitoring the appropriate privileges. Note You only need to provide the LDAP Group DN if you use LDAP Authorization. |
Setting | Value |
---|---|
Monitoring Username | Active Directory user name. |
Monitoring Password | Active Directory password. |
Domain | NetBIOS name of a domain in Active Directory Domain Services. Must be in all capital letters. |
Setting | Value |
---|---|
Monitoring Username | Enter the LDAP v3 distinguished name derived from the Monitoring’s PEM Key file. |
Monitoring PEM Key file | Provide the path and filename for the Monitoring’s PEM Key file on the server on the line for the appropriate operating system. |
Monitoring PEM Key Password | Provide the password to the PEM Key file if it was encrypted. |
Monitoring LDAP Group DN |
Enter the Distinguished Name for the Monitoring’s LDAP Group. Note You only need to provide the Monitoring’s LDAP Group DN if you use LDAP Authorization. |
To add credentials for a deployment that will not use Automation but will use Backup:
Setting | Value |
---|---|
Backup Username | Enter the Backup username. |
Backup Password | Enter the password for Backup Username. |
Setting | Value |
---|---|
Backup LDAP Username | Enter the LDAP username. |
Backup LDAP Password | Enter the password for Backup’s LDAP Username. |
Backup LDAP Group DN |
Enter the Distinguished Name for the Backup’s LDAP Group. Note Provide the Backup’s LDAP Group DN only if you use LDAP Authorization. Each Backup should have and use its own LDAP Group DN . |
The required values depend upon whether you are connecting to a Linux-served KDC or Windows Active Directory Server.
Setting | Value |
---|---|
Backup Kerberos Principal | Kerberos Principal. |
Backup Keytab Path | Absolute file path to the Backup’s keytab. |
Backup LDAP Group DN |
Enter the Distinguished Name for the Backup’s LDAP Group. The LDAP Group DN is then created as a role in MongoDB to grant the Backup the appropriate privileges. Note You only need to provide the LDAP Group DN if you use LDAP Authorization. |
Setting | Value |
---|---|
Backup Username | Active Directory user name. |
Backup Password | Active Directory password. |
Domain | NetBIOS name of a domain in Active Directory Domain Services. Must be in all capital letters. |
Setting | Value |
---|---|
Backup Username | Enter the LDAP v3 distinguished name derived from the Backup’s PEM Key file. |
Backup PEM Key file | Provide the path and filename for the Backup’s PEM Key file on the server on the line for the appropriate operating system. |
Backup PEM Key Password | Provide the password to the PEM Key file if it was encrypted. |
Backup LDAP Group DN |
Enter the Distinguished Name for the Backup’s LDAP Group. Note You only need to provide Backup’s LDAP Group DN if you use LDAP Authorization. |