Before I delve into what Docker and a microservices architecture are, let’s look at the evolution of how applications were designed, developed and deployed over the years.

Development and Deployment History

It all started with a single-tier architecture, moved on to a two-tier architecture, then the classic three-tier architecture and various N-tier architectures for scaling applications. Frameworks such as Apache Struts provided a kick start, the simplicity of dependency injection with the Spring framework gained momentum, AJAX simplified creating asynchronous web applications, and environments like Node.js (JavaScript based) simplified event-driven programming using callbacks without the overhead of multiple threads.

Application development was divided into well-defined modules, and tools like Ant and Maven helped with dependency management and with building an application from these loosely coupled modules. Application dependencies were versioned in Maven dependency files, and modules explicitly stated which versions of other modules they depended on.
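As a sketch of what this looked like, a module’s `pom.xml` would pin each dependency to an explicit version (the artifact and version below are just an illustrative example):

```xml
<!-- pom.xml fragment: a module declaring an explicit, versioned dependency -->
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-core</artifactId>
    <version>4.3.2.RELEASE</version>
</dependency>
```

Maven would then resolve and download exactly that version at build time, so every developer built against the same libraries.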

Applications were deployed and tested across multiple environments – development, unit test, pre-production and production – and as each of these environments was different, significant effort was required to ensure new changes didn’t break existing applications. For instance, deploying a new version of a web application would require stopping it and deploying it like a brand-new one (hot deployment being turned off in production). OSGi technologies tried to simplify this a bit. We also saw multiple teams sharing the same production instance, which could cause other applications to fail – for instance, deploying a new shared library and your web application starts failing because it isn’t compatible with it. In short, changes were never totally isolated, and deploying new changes and making an application live in production required significant effort. I still remember one of my clients’ deployments, where deploying a common application from a third-party organization could stretch over an entire weekend while the other applications confirmed the API was still working – and it was an all-or-nothing effort at the end. How often have you heard these statements – “It works on my machine”, “It’s a different environment, we need to look into it”?

As we move towards DevOps adoption, with deployment cycles shrinking from months to weeks to days, and as we build applications for the cloud, the earlier development and deployment approaches become a significant hindrance to time to market for this new generation of applications.

Deployment Challenges 

“With challenges come new innovative solutions” – and these are the challenges we faced over the years:

  1. How do I ensure the application I develop runs on any environment as-is, be it a laptop, a data center or the cloud?
  2. How do I manage dependencies and isolate my application changes?
  3. How do I build and compose the end application from these loosely coupled applications/modules? How do I build a plug-and-play architecture?
  4. How do I roll out and isolate each module’s changes? Effectively, I am thinking of a dedicated virtual server runtime for my application, minus the overhead and cost of a physical server.
  5. How do I build a scalable application? Can I scale each module independently?

Well, luckily we have an answer now. The above problems can be solved by using a container technology like Docker and by designing the application as a small set of independently deployable services (microservices).
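To give a flavor of what this means in practice, a minimal Dockerfile (the image and file names below are assumptions for illustration) packages an application together with its runtime, so the same image runs unchanged on a laptop, in a data center or in the cloud:

```dockerfile
# Base image pins the runtime version, so every environment runs the same JRE
FROM openjdk:8-jre
# Copy the already-built application into the image
COPY target/myapp.jar /app/myapp.jar
# The container always starts the application the same way
ENTRYPOINT ["java", "-jar", "/app/myapp.jar"]
```

Because the runtime, libraries and start command are all baked into the image, the “it works on my machine” class of problems largely disappears.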

Stay tuned – in my next article, I will talk about the how part: how Docker solves these problems and what it means to design a microservices architecture.

Tags: docker, future-dev, micro services

The author Naveen