
Schedule a Backup with an Amazon AWS S3 Destination Using the OpsCenter API

The following is an example of using the OpsCenter 6.0 API, via curl, to create an Amazon S3 destination in OpsCenter and then schedule a backup job that uses that destination.
Note that for any of these curl commands that return a JSON response, you may wish to pipe the result through the following Python command to pretty-print the output. For example:

curl ... | python -m json.tool

Note also that you may need to change the OpsCenter hostname (I'm using localhost) and cluster name (mine is named Test_Cluster) to match your environment.
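
If you run more than a couple of these commands, it can be convenient to keep the hostname and cluster name in shell variables and substitute them into each URL. A small convenience sketch (the names OPSC and CLUSTER are just this example's choices, not anything OpsCenter requires):

# Adjust these two values for your environment.
OPSC='http://localhost:8888'
CLUSTER='Test_Cluster'

# Subsequent calls can then be written like this:
curl -sS "$OPSC/$CLUSTER/backups/destinations" | python -m json.tool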

S3 (or Local File System) Destination

The first step is to create your Amazon S3 destination.
List your destinations (see documentation):

curl -sS 'http://localhost:8888/Test_Cluster/backups/destinations'

The response may list many destinations, or only list the built-in on-server destination:

{
    "OPSC_ON_SERVER": {
        "path": "",
        "provider": "server"
    }
}

Create an S3 destination (see documentation):

curl -sS -X POST 'http://localhost:8888/Test_Cluster/backups/destinations' -d '{"provider":"s3","path":"TODO your S3 bucket name goes here","access_key":"TODO","access_secret":"TODO"}'

Alternatively, OpsCenter supports a destination on the local file system (LFS). It uses the same endpoint with slightly different parameters, e.g.:

curl -sS -X POST 'http://localhost:8888/Test_Cluster/backups/destinations' -d '{"provider":"local","path":"TODO fully-qualified filesystem directory path name (FQPN)"}' 

The JSON response looks something like this:

{
    "destination": [
        {
            "0e5e80ac744a48b196ec79d7fcbf8df0": {
                "access_key": "...omitted...",
                "access_secret": "...omitted...",
                "delete_this": false,
                "path": "...omitted...",
                "provider": "s3",
                "server_side_encryption": false,
                "throttle_bytes_per_second": "0"
            }
        }
    ],
    "request_id": "e30fd9e7-d850-4cfc-9da4-7d23c5ffabae"
}

Two things to note: the destination section contains the unique destination id (0e5e80ac744a48b196ec79d7fcbf8df0 in this case), and the request id (e30fd9e7-d850-4cfc-9da4-7d23c5ffabae) can be used to check whether the destination creation succeeded.
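
If you want to pick those two ids out of the response without copying them by hand, something like the following works (a sketch that assumes the response has the shape shown above and uses small Python one-liners to parse the JSON; the shell variable names are arbitrary):

# Create the destination and keep the raw JSON response.
RESPONSE=$(curl -sS -X POST 'http://localhost:8888/Test_Cluster/backups/destinations' -d '{"provider":"s3","path":"TODO your S3 bucket name goes here","access_key":"TODO","access_secret":"TODO"}')

# Pull out the request id and the generated destination id.
REQUEST_ID=$(echo "$RESPONSE" | python -c 'import sys,json; print(json.load(sys.stdin)["request_id"])')
DEST_ID=$(echo "$RESPONSE" | python -c 'import sys,json; print(list(json.load(sys.stdin)["destination"][0].keys())[0])')

echo "request id: $REQUEST_ID, destination id: $DEST_ID"
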
Check the status of the destination creation request (see documentation):

curl -sS 'http://localhost:8888/request/e30fd9e7-d850-4cfc-9da4-7d23c5ffabae/status' 

The response will indicate success or failure of the destination creation on each of your agents.

{
    "cluster_id": null,
    "details": {
        "message": "",
        "subrequests": {
            "127.0.0.1": {
                "cluster_id": null,
                "details": "",
                "finished": 1472249753,
                "id": "ff56d742-e03b-42fd-867e-f4fd7304be60",
                "started": 1472249753,
                "state": "success"
            },
            "127.0.0.2": {
                "cluster_id": null,
                "details": "",
                "finished": 1472249753,
                "id": "e5c1196e-c266-479a-91dd-2ce589b4faac",
                "started": 1472249753,
                "state": "success"
            },
            "127.0.0.3": {
                "cluster_id": null,
                "details": "",
                "finished": 1472249753,
                "id": "1448327c-1144-4290-ac0a-a2be5cd90a43",
                "started": 1472249753,
                "state": "success"
            }
        }
    },
    "finished": 1472249753,
    "id": "e30fd9e7-d850-4cfc-9da4-7d23c5ffabae",
    "started": 1472249753,
    "state": "success"
}

Assuming the destination was created successfully, you can now schedule a backup job. Please note that the destination creation must have succeeded on all agents for backups to work.
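
One way to verify that programmatically is to check the state of every subrequest in the status response. A sketch (it assumes the status JSON has the shape shown above):

# Fetch the request status and confirm every agent's subrequest reports "success".
curl -sS 'http://localhost:8888/request/e30fd9e7-d850-4cfc-9da4-7d23c5ffabae/status' | python -c 'import sys,json; d=json.load(sys.stdin); states=[s["state"] for s in d["details"]["subrequests"].values()]; print("all agents succeeded" if all(s == "success" for s in states) else "not all agents succeeded: %s" % states)'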

Backup Job Schedule

Schedule a recurring backup job which backs up to the S3 destination (see documentation):

curl -sSX POST 'http://localhost:8888/Test_Cluster/job-schedules' --data '{"first_run_date":"2016-09-27","first_run_time":"15:45:00","timezone":"GMT","interval":5,"interval_unit":"minutes","job_params":{"type":"backup","keyspaces":["foobar"],"cleanup_age":30,"cleanup_age_unit":"days","destinations":{"0e5e80ac744a48b196ec79d7fcbf8df0":{}}}}'
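
The POST returns the id of the newly created job schedule. If you want to capture that id in a shell variable for follow-up calls, a sketch along these lines should work (it assumes the id comes back as a quoted JSON string, hence the tr to strip the quotes; the variable name SCHEDULE_ID is arbitrary):

# Create the schedule and keep the returned id, stripping surrounding quotes.
SCHEDULE_ID=$(curl -sSX POST 'http://localhost:8888/Test_Cluster/job-schedules' --data '{"first_run_date":"2016-09-27","first_run_time":"15:45:00","timezone":"GMT","interval":5,"interval_unit":"minutes","job_params":{"type":"backup","keyspaces":["foobar"],"cleanup_age":30,"cleanup_age_unit":"days","destinations":{"0e5e80ac744a48b196ec79d7fcbf8df0":{}}}}' | tr -d '"')

echo "created job schedule $SCHEDULE_ID"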

In this example the id of the newly created job schedule is 0b4d19a0-4a6c-4a2a-8d48-1aa8eb9383f6. You can view the details of that scheduled job (see documentation):

curl -sS 'http://localhost:8888/Test_Cluster/job-schedules/0b4d19a0-4a6c-4a2a-8d48-1aa8eb9383f6'

which in this example looks like this:

{
    "first_run_date": "2016-09-27",
    "first_run_time": "15:45:00",
    "id": "0b4d19a0-4a6c-4a2a-8d48-1aa8eb9383f6",
    "interval": 5,
    "interval_unit": "minutes",
    "job_params": {
        "cleanup_age": 30,
        "cleanup_age_unit": "days",
        "destinations": {
            "0e5e80ac744a48b196ec79d7fcbf8df0": {}
        },
        "keyspaces": [
            "foobar"
        ],
        "post_snapshot_script": null,
        "pre_snapshot_script": null,
        "type": "backup"
    },
    "last_run": "",
    "next_run": "2016-09-27 15:45:00 GMT",
    "timezone": "GMT"
}

The API provides full CRUD control of destinations and scheduled jobs. For example, you may wish to delete old or unused destinations, or edit existing scheduled backup jobs to use different or additional destination(s).
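
For illustration only, and assuming the endpoints follow the same URL pattern used above (check the OpsCenter API documentation for your version before relying on the exact paths), deleting a destination or a scheduled job looks something like this:

# Delete a backup destination by its id (verify the path against the documentation).
curl -sS -X DELETE 'http://localhost:8888/Test_Cluster/backups/destinations/0e5e80ac744a48b196ec79d7fcbf8df0'

# Delete a scheduled backup job by its id.
curl -sS -X DELETE 'http://localhost:8888/Test_Cluster/job-schedules/0b4d19a0-4a6c-4a2a-8d48-1aa8eb9383f6'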
