
Best Practices for Developing Cloud-Native Applications and Microservice Architectures

Mar 17th, 2015 8:48am by Amy Feldman and Colin Henry
Feature image via Flickr Creative Commons.
Amy Feldman is a senior solution marketing manager for HP Helion. With a passion for applications, her focus is on cloud-native application development, DevOps and Cloud Foundry. She has worked in IT, development and marketing.
A technologist and community organizer based in Seattle, Colin Henry has devoted his career to the study and practice of building the best software possible for the enterprise, and leading teams for companies like Simply Measured, Apptio, and Opsware. He is currently building the engineering team for Cloud Foundry at HP. Colin holds a Bachelor of Science in Computer Science from Bloomsburg University of Pennsylvania and is an alumnus of the Software Product Management program at the University of Washington.

Developers want to build new applications that are scalable, portable, resilient and easy to update. To get there, they often adopt cloud services, a microservice architecture and the twelve-factor app methodology. However, it’s not as easy as just lifting and shifting your application to the cloud or splitting it into smaller containers. The application has to be designed, architected and written in a way that takes full advantage of these new technologies. Where do you start?

Be Micro

First, be micro. This requires rewriting your application as microservices, where each service does one thing really well. Breaking the application into smaller services makes them easier to update and scale, which is key for a modern cloud-native application.

In order to best explain this, let’s take a look at a sample application called Northwind — an order processing app which provides a model and dataset to illustrate concepts in a typical order transaction process. For our purposes, this application has been written in Google’s Go (golang) programming language. 

A traditional application would have everything in one process or single endpoint as shown below:

[Figure: The Northwind application as a single monolithic process with one endpoint]

For many applications, a monolithic design is acceptable, or unavoidable because of the original architecture and its dependencies. To take advantage of cloud economies of scale, however, the application should be redesigned. One way to do this is to split it into individual services, or microservices. Splitting the application into smaller services gives it greater flexibility and scalability: developers can change the code of each individual service without impacting the others, and once the application is deployed into production, each component can scale independently based on its own performance characteristics.

[Figure: The Northwind application split into individual microservices behind an API service]

In the example above, the same Northwind sample application is broken down into individual components and modernized into a “single page application,” which can redraw any part of the UI without a server round trip to retrieve HTML. This is achieved by separating the data from its presentation: in this example, an API service handles the data and user requests.
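To make this concrete, here is a minimal sketch, not the actual Northwind source, of what one extracted service, an order API, could look like in Go. The route, struct fields and sample data are illustrative assumptions; the only platform contract used is the PORT environment variable that Cloud Foundry provides.

```go
package main

// Minimal sketch of a single-purpose order service: it does one thing,
// serving orders as JSON over HTTP. Names and sample data are illustrative.

import (
	"encoding/json"
	"log"
	"net/http"
	"os"
)

type Order struct {
	ID         int    `json:"id"`
	CustomerID string `json:"customer_id"`
	Status     string `json:"status"`
}

func ordersHandler(w http.ResponseWriter, r *http.Request) {
	// In a real service this data would come from the order database or queue.
	orders := []Order{{ID: 10248, CustomerID: "VINET", Status: "shipped"}}
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(orders)
}

func main() {
	// Cloud Foundry tells the app which port to listen on via $PORT.
	port := os.Getenv("PORT")
	if port == "" {
		port = "8080"
	}
	http.HandleFunc("/orders", ordersHandler)
	log.Fatal(http.ListenAndServe(":"+port, nil))
}
```

Because the service owns only its own route and data, it can be redeployed or scaled without touching the catalog, customer or UI services.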

Be Explicit

Next, be explicit about your code dependencies and your relationship with backing services. When developing cloud-native applications, it is important to use consistent libraries and systems across development, test and production, which you achieve by explicitly declaring and isolating dependencies. This is typically done through a dependency declaration manifest, such as a Ruby Gemfile, a .NET solution file or a Java Maven POM. In Cloud Foundry, buildpacks provide the framework and runtime dependencies for the application. Below is an example of the dependencies for the Northwind order service written in Go, including the web service libraries, environment variable helpers and the messaging client.

[Figure: Declared dependencies for the Northwind order service]

These are all part of the Cloud Foundry golang buildpack: when the application is pushed to Cloud Foundry, it pulls the latest revision of those libraries and deploys them along with the Northwind application. Because Cloud Foundry manages these dependencies, it helps ensure consistency across development, test and production environments. Isolating dependencies is key when developing cloud-native applications, and Cloud Foundry takes the guesswork out of managing them with its implementation of buildpacks.
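For a Go service, the explicit declaration starts in the source itself: every external package is imported by its full path, and the buildpack resolves the same set for every environment. A compile-clean sketch of what the Northwind order service’s declarations might look like (the package choices are assumptions, and the blank identifiers are only there so this standalone fragment builds on its own):

```go
package main

// Illustrative dependency declarations for the Northwind order service.
// Each external library is named by its full import path, so development,
// test and production all resolve the same packages.
import (
	_ "github.com/cloudfoundry-community/go-cfenv" // Cloud Foundry environment helpers
	_ "github.com/gorilla/mux"                     // HTTP routing for the web service
	_ "github.com/streadway/amqp"                  // RabbitMQ client for the messaging service
)

func main() {}
```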

Just as it’s important to explicitly declare your dependencies, it is also important to explicitly define your relationship with backing services (databases, queues, caches or other microservices) as attached services. This lets you swap out services without changing any code. In Cloud Foundry you can use existing backing services, or quickly create your own service brokers with the command cf create-service. Once created, you bind these services to the application with cf bind-service APPLICATION SERVICE_INSTANCE, or by declaring them in the Cloud Foundry manifest file. The manifest contains a variety of environment settings, from how many instances to create and how much memory to allocate, to which backing services the application should use.

[Figure: Cloud Foundry manifest for the Northwind application]

In this example, when we push the Northwind application to Cloud Foundry, it reads the information from the manifest file and, in some cases, prompts the user for additional configuration. Cloud Foundry then deploys the dependencies, such as runtimes, and the backing services, such as RabbitMQ. Instead of having each developer build their own environment, Cloud Foundry makes it easy to quickly deploy consistent environments across the lifecycle of an application.
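At runtime, the application discovers whatever was bound to it through its environment rather than through hard-coded connection details. A hedged sketch, assuming the go-cfenv library and a bound RabbitMQ instance named northwind-rabbitmq that exposes a uri credential (both names are illustrative):

```go
package main

// Sketch of looking up a bound backing service at runtime. Swapping the
// RabbitMQ instance only changes the binding, not this code.

import (
	"log"

	cfenv "github.com/cloudfoundry-community/go-cfenv"
	"github.com/streadway/amqp"
)

func main() {
	appEnv, err := cfenv.Current()
	if err != nil {
		log.Fatalf("not running in a Cloud Foundry environment: %v", err)
	}

	// Look up the service by the name used in `cf bind-service` or the manifest.
	svc, err := appEnv.Services.WithName("northwind-rabbitmq")
	if err != nil {
		log.Fatalf("RabbitMQ service not bound: %v", err)
	}

	uri, _ := svc.Credentials["uri"].(string) // credential key is an assumption
	conn, err := amqp.Dial(uri)
	if err != nil {
		log.Fatalf("could not connect to RabbitMQ: %v", err)
	}
	defer conn.Close()
	log.Println("connected to the order message queue")
}
```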

Be Stateless

Another best practice is to be stateless and get environment-specific information from the environment. In general, configuration values such as hostnames and passwords should be environment specific, not repository specific, because they tend to change between development, test and production. Storing configuration in environment variables simplifies configuration and reduces errors during deployment.

In the previous section, we declared a go-cfenv dependency, which provides convenience functions and structures that map to Cloud Foundry environment variable primitives. This makes it easy to read the configuration variables for the Northwind sample application, as seen below:

[Figure: Reading configuration from the environment with go-cfenv]
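A minimal sketch of the same idea using only the standard library; aside from PORT, which Cloud Foundry sets for every application, the variable names here are illustrative assumptions:

```go
package main

// Keep configuration out of the repository: every environment-specific
// value is read from the environment at startup.

import (
	"log"
	"os"
)

type Config struct {
	Port        string
	DatabaseURL string
	AMQPURL     string
}

// getenv returns the value of key, or fallback if it is unset.
func getenv(key, fallback string) string {
	if v := os.Getenv(key); v != "" {
		return v
	}
	return fallback
}

func loadConfig() Config {
	return Config{
		Port:        getenv("PORT", "8080"),
		DatabaseURL: os.Getenv("DATABASE_URL"), // hypothetical variable name
		AMQPURL:     os.Getenv("AMQP_URL"),     // hypothetical variable name
	}
}

func main() {
	cfg := loadConfig()
	log.Printf("order service listening on port %s", cfg.Port)
}
```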

Being stateless also means treating your logs as an event stream, not as a file to be written to. Logs are key to understanding the health of any application, and anyone who has managed applications or written code knows they can be complex and a pain. Cloud Foundry simplifies this with its loggregator, which collects STDOUT and STDERR from applications. Because our sample application is written in Go, logging is straightforward: the application simply writes to STDOUT and STDERR, and the Cloud Foundry loggregator collects the stream.

[Figure: Application log output collected by the Cloud Foundry loggregator]
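In code, that simply means writing log lines to STDOUT or STDERR and never managing log files inside the application; a minimal sketch:

```go
package main

// Twelve-factor logging: the app emits plain log lines to STDOUT and lets
// the platform (here, the Cloud Foundry loggregator) capture the stream.

import (
	"log"
	"os"
)

func main() {
	// Send application logs to STDOUT explicitly; Go's log package
	// defaults to STDERR, which loggregator also collects.
	logger := log.New(os.Stdout, "northwind-orders ", log.LstdFlags)

	logger.Println("order 10248 received")   // sample event, illustrative
	logger.Println("order 10248 dispatched") // sample event, illustrative
}
```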

Be Temporal

Finally, when developing cloud-native applications it is important to be temporal, which can require a shift in how you think about your application. You should be able to create and kill the application at any time without warning, and adopt a process-oriented mindset so you can scale horizontally on demand.

It’s scary to think of killing your application, but in a cloud environment your application should be able to shut down and start up gracefully. Because you’ve made your application stateless, as described above, Cloud Foundry can start and stop your app at will without any loss of state or data.
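A hedged sketch of what a disposable process can look like in Go: the service starts quickly, and when the platform sends SIGTERM it drains in-flight requests and exits cleanly. This uses net/http’s Shutdown from modern Go releases; the route and timeout are illustrative:

```go
package main

// Graceful startup and shutdown: serve until the platform asks us to stop,
// then finish in-flight requests before exiting.

import (
	"context"
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func getPort() string {
	if p := os.Getenv("PORT"); p != "" {
		return p
	}
	return "8080"
}

func main() {
	srv := &http.Server{Addr: ":" + getPort()}
	http.HandleFunc("/orders", func(w http.ResponseWriter, _ *http.Request) {
		w.Write([]byte("ok"))
	})

	// Start serving in the background.
	go func() {
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			log.Fatalf("server error: %v", err)
		}
	}()

	// Wait for the platform's stop signal, then drain connections.
	stop := make(chan os.Signal, 1)
	signal.Notify(stop, syscall.SIGTERM, os.Interrupt)
	<-stop

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	if err := srv.Shutdown(ctx); err != nil {
		log.Printf("forced shutdown: %v", err)
	}
	log.Println("order service stopped cleanly")
}
```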

Lastly, scaling out an application is key to handling user load and concurrent requests. In Cloud Foundry, scaling an application creates or destroys instances of it. Incoming requests are automatically load balanced across all instances, so work is handled in parallel by every instance.

Combining good design with coding best practices will make it easier to develop cloud-native applications. So the next time you look at designing and developing your application for the cloud, keep in mind a microservice architecture, twelve-factor application methodologies and Cloud Foundry.
