The Splendors and Miseries of CaaS – Experiences with Openshift3

Container as a Service (CaaS) is an increasingly popular cloud service (usually categorized under the Platform as a Service family of cloud services). It provides an easy way to deploy web applications leveraging Linux container technologies, most commonly Docker containers. A recent addition to this family is Openshift v3 from RedHat. Openshift is available as open source software (Openshift Origin) or as a hosted service (OpenShift Online). I already used the previous version of the Openshift service (v2), as described in my previous article. In this article I’ll share my recent experiences with the Openshift v3 service (also called NextGen).

Openshift NextGen Generally

The epithet NextGen was probably chosen to stress that the new version of Openshift is radically different from the previous one. That’s absolutely true: while the previous version was basically a proprietary solution, NextGen is built on common technologies, particularly Docker and Kubernetes, so one can expect to enjoy all the goodies of the Docker ecosystem. Apart from basic support for running Docker containers, Openshift provides a set of PaaS services based on pre-configured templates for the most common web application technologies (Node.js, Python WSGI, Java Servlets, PHP, …) and for Continuous Integration & Deployment (Jenkins pipelines, or the simpler Openshift “Source to Image” (S2I) build, described later).

Currently Openshift Online is offered in two variants: Starter (free, but limited to 1GB of memory and 2 virtual CPUs, with forced hibernation of containers) and Pro. But you can also easily test Openshift locally on your computer with Minishift, or run Openshift locally in a Docker container. I found Minishift particularly useful when I played with it.

The basic unit of deployment in Openshift is a pod, which contains one or more Docker containers that run together. A pod is then abstracted as a service. A service can run several pod instances, load balanced through HAProxy, i.e. a service enables horizontal scaling of pods.

There are several ways to get the Docker images that run in pods:

  • The easiest way is to use an existing image available in some registry (e.g. Docker Hub). Such an image can be imported into Openshift as an image-stream, which is then deployed in a pod using a defined deployment configuration (memory, CPUs, volumes, environment variables, …). But not every Docker image will run in Openshift out of the box; there are some limitations, of which I’ll speak later.
  • Build an image and then deploy it. Openshift is quite flexible in how a new image can be created (see the commands after this list):
    • Dockerfile build – the ‘classic’ docker build from a Dockerfile
    • Source to Image (S2I) build – a special base image (the builder) creates a new image from provided source code (from a git repo)
    • Custom build – similar to Source to Image but even more flexible
    • Pipeline build – building the image with a Jenkins pipeline
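
To give a flavor of these options, here is a sketch of the corresponding oc commands (the repository URL and builder image name are illustrative):

    # Dockerfile build - the repository must contain a Dockerfile
    oc new-build https://github.com/user/repo.git --strategy=docker
    # Source to Image build - a builder image plus application sources
    oc new-build builder-image~https://github.com/user/repo.git
    # Pipeline build - driven by a Jenkinsfile in the repository
    oc new-build https://github.com/user/repo.git --strategy=pipeline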

Openshift can be managed either from the web UI or with the console application oc. Both require a bit of understanding of the Openshift architecture to get started.
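
For illustration, a typical oc session starts like this (the cluster URL and all names are illustrative):

    oc login https://openshift.example.com:8443   # authenticate against the cluster
    oc new-project myproject                      # create a project (a Kubernetes namespace)
    oc get pods                                   # list pods in the project
    oc logs mypod                                 # inspect the log of one pod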

Openshift has good documentation, so I refer the reader to it for details.

Deploying Apps from Docker Image

Disregarding all the fanciness of PaaS, in the end it is about four basic things:
a) get my application, which runs fine locally in my development environment, to the web quickly and painlessly
b) have it running there reliably and securely
c) scale it easily as traffic grows (ideally automatically, within some given limits)
d) have the possibility to update it easily and instantly (possibly automatically)

I had a couple of applications that ran locally in Docker containers, so I was quite excited at the prospect of simply pushing the existing Docker images to Openshift and having them happily run there forever. Unfortunately (or maybe fortunately, because I learned quite a few things) this was a naive expectation, and it was not so straightforward. Locally I did not care very much about security, so all apps ran as root in their containers. But Openshift is much more careful: not only can you not run containers as root, they basically run with an arbitrary uid. So for me this meant completely rewriting the Dockerfiles, and I needed quite a few iterations to find where access rights cause problems for an arbitrary user. For these experiments Minishift was quite valuable, as I could quickly push images to it (enable access to the Minishift Docker registry as described here).

So my recommended flow for preparing images for Openshift is:

  1. Create an image that runs as a non-root user and test it in local Docker
  2. Now run it as an arbitrary user (random uid and 0 as gid) and fix any access issues (see the example after this list)
  3. Optionally test locally in Minishift
  4. Deploy to Openshift
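
A minimal sketch of steps 1 and 2 with plain Docker (the image name and port are illustrative):

    # step 1: the image runs as its own dedicated non-root user
    docker run --rm -p 8080:8080 myapp
    # step 2: simulate Openshift - a random uid with gid 0; fix whatever breaks
    docker run --rm -p 8080:8080 -u 123456:0 myapp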

Most resistant to running as an arbitrary user was PostgreSQL (Openshift has its own great image for PostgreSQL, but it is unfortunately missing PostGIS, which one of my applications required, so no shortcut there either).

Concerning application updates, an Openshift deployment is updated automatically when a new image is available (in the image-stream; for a remote repository this means triggering an update of the image-stream). The default is a rolling deployment: old pods continue running until new pods are ready to take over, then the old pods are deleted. If a deployment fails, it can be rolled back to the previous working deployment.
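
For example (the image-stream and deployment configuration names are illustrative):

    # re-import the image-stream, which triggers a new rolling deployment
    oc import-image myapp
    # if the new deployment fails, roll back to the previous working one
    oc rollout undo dc/myapp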

Openshift also provides manual and automatic scaling, adding pods to services either as a result of admin actions or upon reaching some load threshold.
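
For example:

    # manual scaling to three pods
    oc scale dc/myapp --replicas=3
    # automatic scaling between 1 and 5 pods based on CPU load
    oc autoscale dc/myapp --min=1 --max=5 --cpu-percent=80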

Deploying Apps from Source

Building and deploying apps from source works great for “standard” applications (like Python Flask) with already available “Source to Image” builders or templates. In this case a new deployment means just linking a git repository to the appropriate builder (with Add to project in the web UI, or oc new-app builder_image~repo_url). In my case none of my applications was “standard”, so pre-built Docker images were the easiest approach (although I think I could make my Python app build and run with the default S2I builder image with some more tweaking).
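
For instance, deploying a Flask application with the default Python builder could look like this (the repository URL is hypothetical):

    oc new-app python:3.5~https://github.com/user/flask-app.git --name=flask-app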

If the default S2I builders are not enough (for instance there is none for Rust), we can relatively easily create our own, which I have done here as an exercise for my toy Rust application.

An S2I build runs these steps (a bit simplified):

  1. Inject the source code into the builder image (download from the repo, tar, and then untar in the image)
  2. Run the assemble command in the builder image
  3. Commit the resulting container to a new image
  4. Run that new image with the run command

So an S2I builder image is basically a Docker image with two shell scripts – assemble and run.

So here is an example of an S2I builder image for the toy Rust project:
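
(A minimal sketch; the base image, script location, and uid are illustrative.)

    FROM rust:1.20

    # tell Openshift where the S2I scripts live in this image
    LABEL io.openshift.s2i.scripts-url="image:///usr/libexec/s2i"

    COPY ./s2i/bin/ /usr/libexec/s2i

    # the working directory must be writable by an arbitrary uid (gid 0)
    RUN mkdir -p /opt/app && chown -R 1001:0 /opt/app && chmod -R g+rw /opt/app
    WORKDIR /opt/app

    USER 1001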

with this assemble script:
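
(A sketch assuming the default S2I source location /tmp/src.)

    #!/bin/bash -e
    # assemble: build the application from the injected sources
    cp -Rf /tmp/src/. ./
    cargo build --release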

and this run script:
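
(A sketch; the binary name myapp is illustrative and must match the Cargo project name.)

    #!/bin/bash -e
    # run: start the binary produced by assemble
    # exec so the process receives signals directly
    exec ./target/release/myapp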

Once we have the builder image we can deploy the application. Locally in Minishift we just need to push the builder image to the Minishift Docker repository (see above for the link on enabling access to this repository); for the hosted service we should push the image to Docker Hub and then import it into Openshift with:
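
(The image and image-stream names are illustrative.)

    oc import-image rust-builder --from=docker.io/user/rust-builder --confirm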

and then create a new application with the oc command:
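
(The repository URL is hypothetical.)

    oc new-app rust-builder~https://github.com/user/rust-app.git --name=rust-app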

This will create a build configuration, run a build, and deploy the new application image created by the build.

Later, if the application source changes in the git repo, we can either trigger a rebuild manually or create a webhook in the git repo that starts the rebuild (a webhook will work only in the hosted Openshift, as a public hook URL is needed).
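
Manually it looks like this (the build configuration name is illustrative):

    # trigger a new build by hand
    oc start-build rust-app
    # show the build configuration, including its webhook URLs
    oc describe bc/rust-app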

If we change the builder image, the application will again be automatically rebuilt and redeployed.

Deploying Apps from Template

For complex applications a template can be created that deploys the whole application. A template is a YAML or JSON file that defines the objects to be deployed and the parameters to be supplied during deployment. Here is an example of a template for a Django application with a PostgreSQL database:
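
(A trimmed sketch: only the database object is shown in full; names, images, and parameter values are illustrative.)

    apiVersion: v1
    kind: Template
    metadata:
      name: django-postgresql-example
    parameters:
    - name: APP_NAME
      value: django-app
    - name: DATABASE_PASSWORD
      generate: expression
      from: "[a-zA-Z0-9]{16}"
    objects:
    # the database, deployed from a standard PostgreSQL image
    - kind: DeploymentConfig
      apiVersion: v1
      metadata:
        name: ${APP_NAME}-db
      spec:
        replicas: 1
        selector:
          name: ${APP_NAME}-db
        template:
          metadata:
            labels:
              name: ${APP_NAME}-db
          spec:
            containers:
            - name: postgresql
              image: centos/postgresql-95-centos7
              env:
              - name: POSTGRESQL_USER
                value: django
              - name: POSTGRESQL_PASSWORD
                value: ${DATABASE_PASSWORD}
              - name: POSTGRESQL_DATABASE
                value: django
    # a full template would also define a DeploymentConfig, Service and Route
    # for the Django application itself, plus a BuildConfig (e.g. python S2I)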


Conclusion

I have only explored a few basic possibilities of the Openshift platform. It is promising, but not without surprises for the unaware user. The platform is relatively complex and requires a good understanding of the underlying concepts. Naive reuse of a locally working Docker image is not enough; one must think ahead about the security related limitations of the platform. Building from source works like a charm for standard web applications, but more complex applications need an individual approach (creating your own builder images or templates, or using Jenkins pipelines). This requires more effort and a deeper dive into the platform, but can fully automate the deployment of even complex applications.

