Create a Cloud Storage Sync Job
You can create Cloud Storage sync jobs by navigating to Integration > Object.
Before you can start using Cloud Storage Sync, you must set up the source directory. See Set Up Cloud Storage to learn more.
Note: You can schedule the sync job to run at periodic intervals. Any file in the local directory that is added or modified between runs is transferred to the cloud at the next run, and any file that failed to transfer during the last run is transferred again at the next run.
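To make the note concrete, the following Python sketch shows one way to identify files that would be picked up at the next run. It is purely illustrative; the directory path and 15-minute window are example values, and this is not the actual Litmus Edge scheduler logic.

```python
import time
from pathlib import Path

def files_to_sync(source_dir: str, last_run: float) -> list[Path]:
    """Return files added or modified since the last successful run.

    Illustrative only: the directory walk and timestamp comparison are
    assumptions for this sketch, not Litmus Edge's actual scheduler logic.
    """
    return [
        p for p in Path(source_dir).rglob("*")
        if p.is_file() and p.stat().st_mtime > last_run
    ]

# Example: files changed in the last 15 minutes would go out at the next run.
print(files_to_sync("/var/lib/customer/ftp-data/s3-sync", time.time() - 15 * 60))
```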
To create a cloud storage sync job:
- Navigate to Integration > Object.
- Click Add sync job. The Add Cloud Sync Job dialog box displays.
- Enter a name for the job.
- Select a provider for the job.
- Configure the parameters for the selected provider. The available parameters depend on the provider.
For Amazon S3:
- Region: Enter the region name. See Amazon Simple Storage Service endpoints and quotas to learn more.
- Secret access key: Enter the appropriate secret access key. See AWS security credentials to learn more.
- Source: Enter the file path to the local directory. For example: /var/lib/customer/ftp-data/s3-sync. Make sure you have started the Litmus Edge FTP server and created the necessary folders for the file path. See Set Up Cloud Storage to learn more.
- Destination: Enter the remote S3 bucket name.
- Transfer mode: Select a transfer mode: Copy or Move.
- Run every: Configure the time interval for the sync job in minutes or hours.
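If you want to verify outside Litmus Edge that files are arriving in the bucket, a minimal boto3 sketch might look like the following. The region, keys, and bucket name are placeholders for whatever you entered in the dialog.

```python
import boto3  # pip install boto3

# Placeholder values; use the same region, keys, and bucket that you
# entered in the sync job dialog.
s3 = boto3.client(
    "s3",
    region_name="us-east-1",          # Region
    aws_access_key_id="AKIA...",      # Access key
    aws_secret_access_key="...",      # Secret access key
)

# List what the sync job has transferred so far (hypothetical bucket name).
response = s3.list_objects_v2(Bucket="my-litmus-sync-bucket")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["LastModified"])
```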
For Azure Blob Storage:
- Access Key: Enter the storage account access key.
- Access Name: Enter the storage account name.
- Source: Enter the file path to the local directory. For example: /var/lib/customer/ftp-data/azure-sync. Make sure you have started the Litmus Edge FTP server and created the necessary folders for the file path. See Set Up Cloud Storage to learn more.
- Destination: Enter the Azure Blob Storage container name.
- Transfer mode: Select a transfer mode: Copy or Move.
- Run every: Configure the time interval for the sync job in minutes or hours.
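A comparable check for Azure Blob Storage uses the azure-storage-blob package. The account name, access key, and container name below are placeholders for your own values.

```python
from azure.storage.blob import BlobServiceClient  # pip install azure-storage-blob

# Placeholder values; substitute the account name, access key, and container
# you configured in the sync job.
account_name = "mystorageaccount"
access_key = "..."

service = BlobServiceClient(
    account_url=f"https://{account_name}.blob.core.windows.net",
    credential=access_key,
)

# List what the sync job has transferred to the destination container.
container = service.get_container_client("litmus-sync")
for blob in container.list_blobs():
    print(blob.name, blob.last_modified)
```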
For Google Cloud Storage (GCP):
- JSON Key File: Paste or upload the JSON key file.
- Source: Enter the file path to the local directory. For example: /var/lib/customer/ftp-data/gcp-sync. Make sure you have started the Litmus Edge FTP server and created the necessary folders for the file path. See Set Up Cloud Storage to learn more.
- Destination: Enter the remote GCP bucket name.
- Transfer mode: Select a transfer mode: Copy or Move.
- Run every: Configure the time interval for the sync job in minutes or hours.
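For GCP, a similar sketch uses the google-cloud-storage package. The key file path and bucket name are placeholders.

```python
from google.cloud import storage  # pip install google-cloud-storage

# Placeholder path; use the JSON key file you provided in the dialog.
client = storage.Client.from_service_account_json("/path/to/key.json")

# List what the sync job has transferred to the destination bucket
# (hypothetical bucket name).
bucket = client.bucket("my-litmus-sync-bucket")
for blob in bucket.list_blobs():
    print(blob.name, blob.updated)
```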
For Databricks:
- Workspace URL: Enter the Databricks workspace URL. See Get identifiers for workspace objects to learn more.
- Authentication Option: Select one of the following options.
  - Personal Access Token: If you select this option, configure the following:
    - Access Token: Paste the personal access token. See Databricks personal access token authentication to learn more.
  - OAuth M2M: If you select this option, configure the following:
    - Client Id: Enter the unique identifier for your application, used to authenticate with Databricks.
    - Client Secret: Enter the secret key associated with the Client Id.
- Source: Enter the file path to the local directory. For example: /var/lib/customer/ftp-data/databricks-sync. Make sure you have started the Litmus Edge FTP server and created the necessary folders for the file path. See Set Up Cloud Storage to learn more.
- Destination: Enter the file path to the Databricks directory.
- Transfer mode: Select a transfer mode: Copy or Move.
- Run every: Configure the time interval for the sync job in minutes or hours.
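For Databricks, the databricks-sdk package can list the destination directory using either authentication option. The workspace URL, token, client credentials, and path below are placeholders, and the sketch assumes the destination is a DBFS path.

```python
from databricks.sdk import WorkspaceClient  # pip install databricks-sdk

# Option 1: personal access token (placeholder values).
w = WorkspaceClient(host="https://<workspace>.cloud.databricks.com", token="dapi...")

# Option 2: OAuth M2M with a service principal (use instead of option 1).
# w = WorkspaceClient(
#     host="https://<workspace>.cloud.databricks.com",
#     client_id="...",
#     client_secret="...",
# )

# List what the sync job has transferred to the destination directory
# (placeholder DBFS path).
for entry in w.dbfs.list("/litmus-sync"):
    print(entry.path, entry.file_size)
```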
When done configuring the provider, click Save.
Once the job is created, you can enable or disable it. To manage the job, you must first disable it.