Project backup

Creating a backup of all relevant data involves exporting all important information, then restoring it into a new project.

A project backup and restore tool for OpenShift Origin is currently being developed by Red Hat. See the following bug for more information:

Back up a project

Procedure

  1. List all the relevant data to back up:

    $ oc get all
    NAME         TYPE      FROM      LATEST
    bc/ruby-ex   Source    Git       1
    
    NAME               TYPE      FROM          STATUS     STARTED         DURATION
    builds/ruby-ex-1   Source    Git@c457001   Complete   2 minutes ago   35s
    
    NAME                 DOCKER REPO                                     TAGS      UPDATED
    is/guestbook         10.111.255.221:5000/myproject/guestbook         latest    2 minutes ago
    is/hello-openshift   10.111.255.221:5000/myproject/hello-openshift   latest    2 minutes ago
    is/ruby-22-centos7   10.111.255.221:5000/myproject/ruby-22-centos7   latest    2 minutes ago
    is/ruby-ex           10.111.255.221:5000/myproject/ruby-ex           latest    2 minutes ago
    
    NAME                 REVISION   DESIRED   CURRENT   TRIGGERED BY
    dc/guestbook         1          1         1         config,image(guestbook:latest)
    dc/hello-openshift   1          1         1         config,image(hello-openshift:latest)
    dc/ruby-ex           1          1         1         config,image(ruby-ex:latest)
    
    NAME                   DESIRED   CURRENT   READY     AGE
    rc/guestbook-1         1         1         1         2m
    rc/hello-openshift-1   1         1         1         2m
    rc/ruby-ex-1           1         1         1         2m
    
    NAME                  CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
    svc/guestbook         10.111.105.84    <none>        3000/TCP            2m
    svc/hello-openshift   10.111.230.24    <none>        8080/TCP,8888/TCP   2m
    svc/ruby-ex           10.111.232.117   <none>        8080/TCP            2m
    
    NAME                         READY     STATUS      RESTARTS   AGE
    po/guestbook-1-c010g         1/1       Running     0          2m
    po/hello-openshift-1-4zw2q   1/1       Running     0          2m
    po/ruby-ex-1-build           0/1       Completed   0          2m
    po/ruby-ex-1-rxc74           1/1       Running     0          2m
  2. Export the project objects to a project.yaml file in YAML format:

    $ oc export all -o yaml > project.yaml

    Or, in JSON:

    $ oc export all -o json > project.json
  3. This creates a YAML or JSON file with the project content. However, it does not export all object types, such as role bindings, secrets, service accounts, and persistent volume claims. To export these, run:

    $ for object in rolebindings serviceaccounts secrets imagestreamtags podpreset cms egressnetworkpolicies rolebindingrestrictions limitranges resourcequotas pvcs templates cronjobs statefulsets hpas deployments replicasets poddisruptionbudget endpoints
    do
      oc export $object -o yaml > $object.yaml
    done
  4. Some exported objects rely on specific metadata or references to unique IDs in the project, which limits the usability of the recreated objects.

    When using imagestreams, the image parameter of a deploymentconfig can point to a specific sha checksum of an image in the internal registry that would not exist in a restored environment. For instance, running the sample "ruby-ex" as oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git creates an imagestream ruby-ex using the internal registry to host the image:

    $ oc get dc ruby-ex -o jsonpath="{.spec.template.spec.containers[].image}"
    10.111.255.221:5000/myproject/ruby-ex@sha256:880c720b23c8d15a53b01db52f7abdcbb2280e03f686a5c8edfef1a2a7b21cee

    Importing the deploymentconfig exactly as exported with oc export fails if the image does not exist.
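
    If you are not using the script described below, you can patch the exported file manually. A minimal sketch with jq that produces the same kind of _patched file the script generates; the target tag is an assumption based on the imagestream above:

    $ jq '.spec.template.spec.containers[0].image = "ruby-ex:latest"' \
        dc_ruby-ex.json > dc_ruby-ex_patched.json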

    To create those exports, use the project_export.sh script in the openshift-ansible-contrib repository, which exports all the project objects into separate files. The script creates a directory, named after the project, on the host running the script, with a JSON file for every object type in that project.

    The code in the openshift-ansible-contrib repository referenced below is not explicitly supported by Red Hat but the Reference Architecture team performs testing to ensure the code operates as defined and is secure.

    The script runs on Linux, requires the jq and oc commands to be installed, and the system must be logged in to the OpenShift Origin environment as a user who can read all the objects in that project.

    $ mkdir ~/git
    $ cd ~/git
    $ git clone https://github.com/openshift/openshift-ansible-contrib.git
    $ cd openshift-ansible-contrib/reference-architecture/day2ops/scripts
    $ ./project_export.sh <projectname>

    For example:

    $ ./project_export.sh myproject
    Exporting namespace to project-demo/ns.json
    Exporting rolebindings to project-demo/rolebindings.json
    Exporting serviceaccounts to project-demo/serviceaccounts.json
    Exporting secrets to project-demo/secrets.json
    Exporting deploymentconfigs to project-demo/dc_*.json
    Patching DC...
    Exporting buildconfigs to project-demo/bcs.json
    Exporting builds to project-demo/builds.json
    Exporting imagestreams to project-demo/iss.json
    Exporting imagestreamtags to project-demo/imagestreamtags.json
    Exporting replicationcontrollers to project-demo/rcs.json
    Exporting services to project-demo/svc_*.json
    Exporting pods to project-demo/pods.json
    Exporting podpreset to project-demo/podpreset.json
    Exporting configmaps to project-demo/cms.json
    Exporting egressnetworkpolicies to project-demo/egressnetworkpolicies.json
    Exporting rolebindingrestrictions to project-demo/rolebindingrestrictions.json
    Exporting limitranges to project-demo/limitranges.json
    Exporting resourcequotas to project-demo/resourcequotas.json
    Exporting pvcs to project-demo/pvcs.json
    Exporting routes to project-demo/routes.json
    Exporting templates to project-demo/templates.json
    Exporting cronjobs to project-demo/cronjobs.json
    Exporting statefulsets to project-demo/statefulsets.json
    Exporting hpas to project-demo/hpas.json
    Exporting deployments to project-demo/deployments.json
    Exporting replicasets to project-demo/replicasets.json
    Exporting poddisruptionbudget to project-demo/poddisruptionbudget.json
  5. Once executed, review the files to verify that the content has been properly exported:

    $ cd <projectname>
    $ ls -1
    bcs.json
    builds.json
    cms.json
    cronjobs.json
    dc_ruby-ex.json
    dc_ruby-ex_patched.json
    deployments.json
    egressnetworkpolicies.json
    endpoint_external-mysql-service.json
    hpas.json
    imagestreamtags.json
    iss.json
    limitranges.json
    ns.json
    poddisruptionbudget.json
    podpreset.json
    pods.json
    pvcs.json
    rcs.json
    replicasets.json
    resourcequotas.json
    rolebindingrestrictions.json
    rolebindings.json
    routes.json
    secrets.json
    serviceaccounts.json
    statefulsets.json
    svc_external-mysql-service.json
    svc_ruby-ex.json
    templates.json
    $ less bcs.json
    ...

    Some files can be empty if the corresponding objects do not exist when the export is executed.
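
    One way to spot empty export files, for example (a minimal sketch run inside the export directory):

    $ for f in *.json
    do
      [ -s "$f" ] || echo "empty export: $f"
    done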

  6. If imagestreams are used, the script modifies the deploymentconfig to use the image reference instead of the image SHA, creating a second JSON file alongside the exported one with a _patched suffix:

    $ diff dc_hello-openshift.json dc_hello-openshift_patched.json
    45c45
    <             "image": "docker.io/openshift/hello-openshift@sha256:42b59c869471a1b5fdacadf778667cecbaa79e002b7235f8091540ae612f0e14",
    ---
    >             "image": "hello-openshift:latest",

The script does not currently support multi-container pods; use it with caution.

Restore a project

To restore a project, create the new project, then restore the exported files with oc create -f <file> (for example, oc create -f pods.json). However, restoring a project from scratch requires a specific order, because some objects depend on others. For example, the configmaps must be created before any pods.

Procedure

  1. If the project was exported into individual files as described above, create the new project and import the files in order:

    $ oc new-project <projectname>
    $ oc create -f project.yaml
    $ oc create -f secrets.yaml
    $ oc create -f serviceaccounts.yaml
    $ oc create -f pvcs.yaml
    $ oc create -f rolebindings.yaml

    Some resources can fail to be created (for example, pods and default service accounts).
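
    If you script these oc create calls, you may want to continue past such failures rather than stop at the first error. A minimal sketch, assuming the file names produced by the exports above:

    $ for f in project.yaml secrets.yaml serviceaccounts.yaml pvcs.yaml rolebindings.yaml
    do
      oc create -f "$f" || true
    done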

  2. If the project was initially exported using the project_export.sh script, the files are located in the projectname directory and can be imported using the companion project_import.sh script, which performs the oc create operations in the proper order:

    $ mkdir ~/git
    $ cd ~/git
    $ git clone https://github.com/openshift/openshift-ansible-contrib.git
    $ cd openshift-ansible-contrib/reference-architecture/day2ops/scripts
    $ ./project_import.sh <projectname_path>

    For example:

    $ ls ~/backup/myproject
    bcs.json           dc_guestbook_patched.json        dc_ruby-ex_patched.json  pvcs.json          secrets.json
    builds.json        dc_hello-openshift.json          iss.json                 rcs.json           serviceaccounts.json
    cms.json           dc_hello-openshift_patched.json  ns.json                  rolebindings.json  svcs.json
    dc_guestbook.json  dc_ruby-ex.json                  pods.json                routes.json        templates.json
    
    $ ./project_import.sh ~/backup/myproject
    namespace "myproject" created
    rolebinding "admin" created
    rolebinding "system:deployers" created
    rolebinding "system:image-builders" created
    rolebinding "system:image-pullers" created
    secret "builder-dockercfg-mqhs6" created
    secret "default-dockercfg-51xb9" created
    secret "deployer-dockercfg-6kvz7" created
    Error from server (AlreadyExists): error when creating "myproject//serviceaccounts.json": serviceaccounts "builder" already exists
    Error from server (AlreadyExists): error when creating "myproject//serviceaccounts.json": serviceaccounts "default" already exists
    Error from server (AlreadyExists): error when creating "myproject//serviceaccounts.json": serviceaccounts "deployer" already exists
    error: no objects passed to create
    service "guestbook" created
    service "hello-openshift" created
    service "ruby-ex" created
    imagestream "guestbook" created
    imagestream "hello-openshift" created
    imagestream "ruby-22-centos7" created
    imagestream "ruby-ex" created
    error: no objects passed to create
    error: no objects passed to create
    buildconfig "ruby-ex" created
    build "ruby-ex-1" created
    deploymentconfig "guestbook" created
    deploymentconfig "hello-openshift" created
    deploymentconfig "ruby-ex" created
    replicationcontroller "ruby-ex-1" created
    Error from server (AlreadyExists): error when creating "myproject//rcs.json": replicationcontrollers "guestbook-1" already exists
    Error from server (AlreadyExists): error when creating "myproject//rcs.json": replicationcontrollers "hello-openshift-1" already exists
    pod "guestbook-1-c010g" created
    pod "hello-openshift-1-4zw2q" created
    pod "ruby-ex-1-rxc74" created
    Error from server (AlreadyExists): error when creating "myproject//pods.json": object is being deleted: pods "ruby-ex-1-build" already exists
    error: no objects passed to create

    AlreadyExists errors can appear because some objects, such as serviceaccounts and secrets, are created automatically when the project is created.

  3. If buildconfigs are used, the builds are not triggered automatically and the applications are not deployed:

    $ oc get bc
    NAME      TYPE      FROM      LATEST
    ruby-ex   Source    Git       1
    $ oc get pods
    NAME                      READY     STATUS    RESTARTS   AGE
    guestbook-1-plnnq         1/1       Running   0          26s
    hello-openshift-1-g4g0j   1/1       Running   0          26s

    To trigger the builds, run the oc start-build command:

    $ for bc in $(oc get bc -o jsonpath="{.items[*].metadata.name}")
    do
        oc start-build ${bc}
    done

    The pods will deploy once the build completes.
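
    To monitor a build while it runs, you can follow its logs; for example, for the sample ruby-ex buildconfig:

    $ oc logs -f bc/ruby-ex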

  4. To verify the project was restored:

    $ oc get all
    NAME         TYPE      FROM      LATEST
    bc/ruby-ex   Source    Git       2
    
    NAME               TYPE      FROM          STATUS                    STARTED              DURATION
    builds/ruby-ex-1   Source    Git           Error (BuildPodDeleted)   About a minute ago
    builds/ruby-ex-2   Source    Git@c457001   Complete                  55 seconds ago       12s
    
    NAME                 DOCKER REPO                                     TAGS      UPDATED
    is/guestbook         10.111.255.221:5000/myproject/guestbook         latest    About a minute ago
    is/hello-openshift   10.111.255.221:5000/myproject/hello-openshift   latest    About a minute ago
    is/ruby-22-centos7   10.111.255.221:5000/myproject/ruby-22-centos7   latest    About a minute ago
    is/ruby-ex           10.111.255.221:5000/myproject/ruby-ex           latest    43 seconds ago
    
    NAME                 REVISION   DESIRED   CURRENT   TRIGGERED BY
    dc/guestbook         1          1         1         config,image(guestbook:latest)
    dc/hello-openshift   1          1         1         config,image(hello-openshift:latest)
    dc/ruby-ex           1          1         1         config,image(ruby-ex:latest)
    
    NAME                   DESIRED   CURRENT   READY     AGE
    rc/guestbook-1         1         1         1         1m
    rc/hello-openshift-1   1         1         1         1m
    rc/ruby-ex-1           1         1         1         43s
    
    NAME                  CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
    svc/guestbook         10.111.126.115   <none>        3000/TCP            1m
    svc/hello-openshift   10.111.23.21     <none>        8080/TCP,8888/TCP   1m
    svc/ruby-ex           10.111.162.157   <none>        8080/TCP            1m
    
    NAME                         READY     STATUS      RESTARTS   AGE
    po/guestbook-1-plnnq         1/1       Running     0          1m
    po/hello-openshift-1-g4g0j   1/1       Running     0          1m
    po/ruby-ex-1-h99np           1/1       Running     0          42s
    po/ruby-ex-2-build           0/1       Completed   0          55s

    The service and pod IPs are different because they are assigned dynamically at creation time.

PVC backup

This topic describes how to synchronize persistent data from inside of a container to a server and then restore the data onto a new persistent volume claim.

Depending on the provider hosting the OpenShift Origin environment, third-party snapshot services may also be available for backup and restore purposes. Because OpenShift Origin cannot launch these services itself, this guide does not describe those steps.

Consult the product documentation for the correct backup procedures of specific applications. For example, copying the MySQL data directory itself does not produce a usable backup. Instead, run the backup procedures specific to the application and then synchronize the resulting data. This also applies when using snapshot solutions provided by the OpenShift Origin hosting platform.
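
For example, for a MySQL database you might dump the data from inside the pod rather than copying the raw data directory. A minimal sketch; the pod name is a placeholder, and MYSQL_ROOT_PASSWORD assumes the image exposes the root credentials as an environment variable:

    $ oc exec <mysql_pod> -- bash -c \
        'mysqldump -u root -p"$MYSQL_ROOT_PASSWORD" --all-databases' > db-backup.sql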

Back up persistent volume claims

Procedure

  1. View the project and pods:

    $ oc get pods
    NAME           READY     STATUS      RESTARTS   AGE
    demo-1-build   0/1       Completed   0          2h
    demo-2-fxx6d   1/1       Running     0          1h
  2. Describe the desired pod to find the volumes that are currently backed by a persistent volume:

    $ oc describe pod demo-2-fxx6d
    Name:			demo-2-fxx6d
    Namespace:		test
    Security Policy:	restricted
    Node:			ip-10-20-6-20.ec2.internal/10.20.6.20
    Start Time:		Tue, 05 Dec 2017 12:54:34 -0500
    Labels:			app=demo
    			deployment=demo-2
    			deploymentconfig=demo
    Status:			Running
    IP:			172.16.12.5
    Controllers:		ReplicationController/demo-2
    Containers:
      demo:
        Container ID:	docker://201f3e55b373641eb36945d723e1e212ecab847311109b5cee1fd0109424217a
        Image:		docker-registry.default.svc:5000/test/demo@sha256:0a9f2487a0d95d51511e49d20dc9ff6f350436f935968b0c83fcb98a7a8c381a
        Image ID:		docker-pullable://docker-registry.default.svc:5000/test/demo@sha256:0a9f2487a0d95d51511e49d20dc9ff6f350436f935968b0c83fcb98a7a8c381a
        Port:		8080/TCP
        State:		Running
          Started:		Tue, 05 Dec 2017 12:54:52 -0500
        Ready:		True
        Restart Count:	0
        Volume Mounts:
          /opt/app-root/src/uploaded from persistent-volume (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from default-token-8mmrk (ro)
        Environment Variables:	<none>
    ...omitted...

    This output shows that the persistent data is currently located in the /opt/app-root/src/uploaded directory.

  3. Copy the data locally:

    $ oc rsync demo-2-fxx6d:/opt/app-root/src/uploaded ./demo-app
    receiving incremental file list
    uploaded/
    uploaded/ocp_sop.txt
    uploaded/lost+found/
    
    sent 38 bytes  received 190 bytes  152.00 bytes/sec
    total size is 32  speedup is 0.14

    The ocp_sop.txt file has been pulled down to the local system so that it can be backed up by backup software or another backup mechanism.

    The steps above can also be used if a pod was started without a PVC and a PVC later becomes necessary. This preserves the data and allows the restore procedures to populate the new storage.

Restore persistent volume claims

This topic describes two methods for restoring data. The first involves deleting a file and then placing the file back in the expected location. The second example shows how to migrate persistent volume claims, which is necessary when the storage must be moved or in a disaster scenario where the backend storage no longer exists.

Check the restore procedures for the specific application for any steps required to restore data to the application.

Restoring files to an existing PVC

Procedure

  1. Delete the file:

    $ oc rsh demo-2-fxx6d
    sh-4.2$ ls /opt/app-root/src/uploaded/
    lost+found  ocp_sop.txt
    sh-4.2$ rm -rf /opt/app-root/src/uploaded/ocp_sop.txt
    sh-4.2$ ls /opt/app-root/src/uploaded/
    lost+found
  2. Replace the file from the server that contains the rsync backup of the files that were in the PVC:

    $ oc rsync uploaded demo-2-fxx6d:/opt/app-root/src/
  3. Validate that the file is back on the pod by using oc rsh to connect to the pod and view the contents of the directory:

    $ oc rsh demo-2-fxx6d
    sh-4.2$ ls /opt/app-root/src/uploaded/
    lost+found  ocp_sop.txt

Restoring data to a new PVC

The following steps assume that a new PVC has been created.
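
For reference, a minimal PVC definition that would satisfy this assumption; the claim name matches the filestore claim used below, while the file name, size, and access mode are illustrative:

    $ cat pvc-filestore.yaml
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: filestore
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
    $ oc create -f pvc-filestore.yaml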

Procedure

  1. Overwrite the currently defined claim-name:

    $ oc volume dc/demo --add --name=persistent-volume \
        --type=persistentVolumeClaim --claim-name=filestore \
        --mount-path=/opt/app-root/src/uploaded --overwrite
  2. Validate that the pod is using the new PVC:

    $ oc describe dc/demo
    Name:		demo
    Namespace:	test
    Created:	3 hours ago
    Labels:		app=demo
    Annotations:	openshift.io/generated-by=OpenShiftNewApp
    Latest Version:	3
    Selector:	app=demo,deploymentconfig=demo
    Replicas:	1
    Triggers:	Config, Image(demo@latest, auto=true)
    Strategy:	Rolling
    Template:
      Labels:	app=demo
    		deploymentconfig=demo
      Annotations:	openshift.io/container.demo.image.entrypoint=["container-entrypoint","/bin/sh","-c","$STI_SCRIPTS_PATH/usage"]
    		openshift.io/generated-by=OpenShiftNewApp
      Containers:
       demo:
        Image:	docker-registry.default.svc:5000/test/demo@sha256:0a9f2487a0d95d51511e49d20dc9ff6f350436f935968b0c83fcb98a7a8c381a
        Port:	8080/TCP
        Volume Mounts:
          /opt/app-root/src/uploaded from persistent-volume (rw)
        Environment Variables:	<none>
      Volumes:
       persistent-volume:
        Type:	PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
        ClaimName:	filestore
        ReadOnly:	false
    ...omitted...
  3. Now that the deployment configuration uses the new PVC, use oc rsync to place the files onto the new PVC:

    $ oc rsync uploaded demo-3-2b8gs:/opt/app-root/src/
    sending incremental file list
    uploaded/
    uploaded/ocp_sop.txt
    uploaded/lost+found/
    
    sent 181 bytes  received 39 bytes  146.67 bytes/sec
    total size is 32  speedup is 0.15
  4. Validate that the file is back on the pod by using oc rsh to connect to the pod and view the contents of the directory:

    $ oc rsh demo-3-2b8gs
    sh-4.2$ ls /opt/app-root/src/uploaded/
    lost+found  ocp_sop.txt

Pruning images and containers

See the Pruning Resources topic for information about pruning collected data and older versions of objects.