Serverless Architecture

In this post, we will learn how to design and architect a web application to be serverless. Sounds crazy, right? That was my first reaction too: how can a web application, which is deployed and run on servers, now run without servers? The term is more of a conceptual analogy, and understanding it requires some basics of application architecture. To put it simply, it is one step beyond microservices. To begin with, let us get some context.

When we build an application, we first design its architecture, and there are many architectural paradigms to choose from. So let us briefly trace the evolution of some of the basic architectures.

Monolithic architecture

During the initial phase of the software industry, an application and its data lived on a single machine. The application logic communicated with the back-end data directly and was deployed as a single unit, making it a 1-tier architecture. There were no separate modules for the user interface, business logic, or data access logic; everything was tightly coupled, and it was extremely hard to change any one component. To overcome this, 2-tier architecture was introduced, bifurcating presentation and business logic so that the presentation layer was independent of the data layer. Later, with the availability of high-computation servers and database management systems, people moved computation onto the servers and displayed only the resulting information on end-user devices. As database management systems were available separately, the application logic was divided into two layers: business logic and data access logic. Hence 3-tier architecture was introduced, separating the presentation, business, and data access layers. This bifurcation was done only at the architectural level; the application was still deployed as a single unit on the server, making it a monolithic 3-tier architecture.

Monolithic 3-tier architecture was popular and widely used, but as the number of customers grew, applications also needed to scale to serve them. To do that, solution architects offered two solutions: scale vertically (increase the capacity of the server), which hits a ceiling after a certain point, or scale horizontally by adding a load balancer in front to spin up new machines, which is the better option. Below is a glimpse of a typical 3-tier architecture.

If you observe the servers in the business layer of any application, they do the majority of the computation and have many components performing different tasks such as security, authentication, authorization, and communication with the data access servers. These components are tightly coupled yet perform different functionalities, and they are all deployed as a single unit, which makes the application very difficult to maintain: if one part fails, the entire application needs to be redeployed. Many teams find this hard, as it slows down the speed with which each part can be changed and scaled. The size of the application can also slow down startup time, since it contains so many modules. This holistic architecture, deployed as a single unit on the server, is error prone; a bug in one small module can bring the entire application down. To avoid such hassles, service-oriented, or multi-tier, architecture was invented.

Service-oriented / multi-tier architecture

This is the most widely used type of architecture today. The business and data access layers are divided into many components, and this decoupling is done based on separation of concerns. Each part is responsible for its own functionality and is deployed individually on smaller servers. The parts communicate with each other through web APIs. This lets an organization hire domain-specific engineers to work on each component, without worrying about the other components beyond the APIs to be integrated.

Each part, specific to a particular functionality, is deployed on a separate server, making it easier to develop, deploy, and test each component without affecting the others. You can also scale an individual part by putting a load balancer in front of it to manage traffic. And if there is an issue with a single component, only the corresponding server needs to be updated, unlike the earlier architecture where the entire application had to be redeployed.

So far we have kept dividing the application into smaller chunks performing different, independent tasks. But some people went a little further and asked: why stop at smaller servers, each serving many functionalities? Instead, divide them into smaller, modular functions, amplifying the purpose of service-oriented architecture. This introduced the microservices architecture.

Microservices Architecture

This architectural style has received a lot of attention in the last couple of years, as it has tremendous advantages over the traditional styles. Its main purpose is to divide the application into a set of smaller components. Wait a minute, isn't that just service-oriented architecture? What is the difference? Well, these components execute a minimal amount of work per service call compared to SOA components, so microservices offer higher decoupling and better scalability. Each service can be written in a different language, since they are independent of each other, and they can communicate using either REST-based APIs or RPC. Each microservice can be deployed and scaled independently. Let's see why dividing an application into such small modules makes it better.

Assume we are building an application and below are the smaller and important subset of requirements given.

  • Needs to store a certain amount of data that has a lot of relations within it, and for which the schema is fixed and will never change.
    • We should choose a relational database management system such as MySQL or Oracle DB for this need.
  • Needs to store a certain amount of data whose properties might change and whose volume is very large. The schema keeps changing over time.
    • We should choose a nonrelational database such as MongoDB, Cassandra, or CouchDB for this need.
  • Needs a caching mechanism.
    • We need either Redis or Memcached.
  • Should be deployed in a virtual private cloud.
  • Should have authentication and authorization.
    • Should be integrated with an LDAP server.
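The caching requirement above is usually met with a cache-aside pattern: check the cache first, and only hit the database on a miss. The sketch below illustrates the idea with a plain in-memory dict standing in for Redis or Memcached; the `fetch_user` function, the key, and the TTL value are hypothetical placeholders, not part of any real service.

```python
import time

class CacheAside:
    """Minimal cache-aside sketch: check the cache first, fall back to the
    backing store on a miss, and populate the cache with a TTL."""

    def __init__(self, ttl_seconds=60):
        self._store = {}      # stands in for Redis/Memcached
        self._ttl = ttl_seconds

    def get(self, key, fetch_from_db):
        entry = self._store.get(key)
        if entry is not None:
            value, expires_at = entry
            if time.time() < expires_at:
                return value              # cache hit
        value = fetch_from_db(key)        # cache miss: read the database
        self._store[key] = (value, time.time() + self._ttl)
        return value

cache = CacheAside(ttl_seconds=30)
db_reads = []

def fetch_user(key):
    db_reads.append(key)      # track real "database" reads
    return {"id": key, "name": "user-" + key}

first = cache.get("42", fetch_user)   # miss: reads the database
second = cache.get("42", fetch_user)  # hit: served from the cache
```

With Redis, the `get`/`put` on the dict would become `GET` and `SETEX` calls, but the control flow stays the same.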

Let's use the microservices architectural style to design an application with the above requirements and see how it is beneficial.

As per the requirements, some services need a relational database and some need a nonrelational one; as you can see in the above diagram, there are two relational DBs and one nonrelational DB. Service 4 communicates with DynamoDB, which is nonrelational, and service 5 communicates with a relational MySQL DB. Dividing the application into smaller, modular services makes it easy to give each service its own schema depending on its needs, instead of sharing one common schema across the entire application. This way, even if the technology used in the application code of service 4 changes, it won't affect service 5 or its connected database.

We can deploy all our services on servers encapsulated within a virtual private cloud (VPC). If you are deploying your application in AWS, it provides an option to deploy into your own VPC, blocking the outside world from interacting with the services directly. We can give only the API gateway internet connectivity, and also configure it to allow specific IPs by modifying the inbound rules. We can use the API gateway as a dumb proxy server that only routes requests to the corresponding servers within the VPC. It can also act as a load balancer to handle changing real-time traffic.
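The "dumb proxy" role of the gateway can be pictured as nothing more than a routing table mapping request path prefixes to internal service addresses. The sketch below is purely illustrative; the paths and private IP addresses are made-up assumptions, not a real API Gateway configuration.

```python
# Hypothetical routing table: path prefix -> internal service address.
# In a real deployment these would be private addresses inside the VPC.
ROUTES = {
    "/users": "http://10.0.1.10:8080",   # user service
    "/orders": "http://10.0.1.11:8080",  # order service
}

def route(path):
    """Return the internal address a dumb proxy would forward this path to."""
    for prefix, address in ROUTES.items():
        if path.startswith(prefix):
            return address
    return None  # no matching service: the gateway would answer 404

target = route("/orders/123")  # resolves to the order service address
```

Everything else (TLS termination, IP allow-lists, load balancing) layers on top of this same lookup.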

To deploy our applications, many service providers are available in the market, such as Amazon Web Services (AWS), Google Cloud, and Azure. I would prefer AWS as it is user-friendly and well documented for every service it offers. We can use the following AWS services:

  • EC2 instances for deploying our microservices.
  • S3 for static storage.
  • DynamoDB for a nonrelational database.
  • RDS for a relational database.
  • We can deploy Redis or Memcached onto an EC2 instance and configure it as a caching mechanism, or we can use AWS ElastiCache.
  • We can deploy all the above services into a virtual private cloud and attach an internet-facing load balancer to act as the API gateway/proxy server.
  • We can use the SQS service for communication between services.
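Queue-based communication between services can be sketched locally with Python's standard `queue` module standing in for SQS; in production you would use the boto3 SQS client (`send_message` / `receive_message`) against a real queue URL instead. The service names and message shape here are hypothetical.

```python
import json
import queue

# Local stand-in for an SQS queue sitting between two services.
order_queue = queue.Queue()

def order_service_publish(order_id, amount):
    """Producer side: the order service enqueues an event as JSON,
    just as it would with sqs.send_message(MessageBody=...)."""
    order_queue.put(json.dumps({"order_id": order_id, "amount": amount}))

def billing_service_poll():
    """Consumer side: the billing service pulls one message,
    parses it, and acts on it."""
    message = order_queue.get_nowait()
    event = json.loads(message)
    return f"billed order {event['order_id']} for {event['amount']}"

order_service_publish("A-17", 25.0)
result = billing_service_poll()
```

The point of the queue is that the order service never needs to know the billing service's address, and either side can be scaled or redeployed independently.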

There are many more services provided by AWS, and we can use them based on our application's requirements. But all these services need to be monitored regularly by the developer or DevOps team, and they raise a lot of questions, such as:

  • Which machine type is cost-effective and high-performing for our need, given that each configuration has a different cost per region?
  • Based on the expected traffic and server capacity, we need to constantly monitor our scaling group for load balancing; which factors should that calculation be based on?
  • How do we roll out code or configuration changes to all the services already running?
  • Which OS should we choose for running our services?
  • When should we scale out services?
  • How will our application behave if one or many of our services fail?
  • Unless we buy reserved instances, we are not guaranteed that highly configured machines will be available to us. So what do we do when the required servers are not available?

So even with all these amazing services, we need to address the questions above and more. That doesn't mean there are no answers, streamlined processes, or tools, but why should the application developer have to worry about all these headaches? Developers should concentrate on the business logic instead of maintaining infrastructure. To address this, Amazon came up with an amazing idea and provided a service called AWS Lambda. AWS Lambda lets you run code without provisioning or managing servers, and you pay only for the compute time you consume; there is no charge when your code is not running.

We started by developing and deploying applications as a single unit, and later modularized by dividing them into smaller components. Each such component was deployed independently onto servers and communicated with the others through APIs. Due to the many headaches of provisioning and managing servers, we now make it serverless: we just write functions performing the core business logic and let them do the work. The AWS Lambda service takes care of identifying and executing the correct functions.

You should now understand what it means to be serverless. This kind of architecture is heavily tied to deploying your application on Amazon Web Services; if you are planning to use any other infrastructure provider, then you shouldn't architect your application this way. So let's understand what AWS Lambda is and how it works.

AWS Lambda

The documentation on the AWS website says: “AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume – there is no charge when your code is not running. With Lambda, you can run code for almost any type of application or backend service – all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services or call it directly from any web or mobile app”.

In simple words, you can think of Lambda as a serverless, event-driven compute service. Following the microservices architecture, you would have written your application as smaller, modular functions; just put them into AWS Lambda and let AWS take care of invoking, executing, and scaling them. AWS Lambda mainly consists of four components:

  • AWS Lambda function: This is where we write the core logic to be executed. It runs in response to many kinds of events, such as a client or service API request, an S3 push or pull, an SQS message, etc. We can write our code in C#, JavaScript (Node.js), Java, or Python. These functions are called handler functions; a handler can be wired to many event sources, and it can call other functions as well.
  • Lambda event source: The event source is where we tell AWS when to execute our Lambda function. It could be when an object is pushed to or pulled from S3, when a request arrives from a client at the API gateway, or an event in CloudWatch from other AWS services. Many more AWS services can be configured as event sources for Lambda functions.
  • AWS Lambda service: This is the service provided by AWS for provisioning and managing the functions we write. It scales the underlying servers automatically, without explicit configuration by the developers, and provides an API to trigger execution of the Lambda function. No matter how many requests come in, it executes them in parallel, with logging and monitoring in CloudWatch for graphical viewing.
  • Function networking environment: All the Lambda functions we define are deployed into the AWS environment, and we have two options:
    • Default VPC: A default network environment where all our functions are exposed to the internet and do not have access to assets deployed in our own VPC.
    • Custom VPC: All functions are deployed within the context of our own VPC. Here all the functions can communicate privately with each other within the VPC, and we can define the VPC routing tables along with a NAT gateway.
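In Python, a handler is just a function that receives an `event` and a `context`. The sketch below assumes an API Gateway proxy-style event (a JSON `body` field) and a made-up greeting payload; it can be exercised locally without AWS, since the handler is plain Python.

```python
import json

def lambda_handler(event, context):
    """Minimal AWS Lambda handler sketch. The event shape assumed here is
    an API Gateway proxy integration; the greeting logic is illustrative."""
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")
    # API Gateway proxy integrations expect this response shape.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Invoke locally with a fake API Gateway event (context is unused here).
response = lambda_handler({"body": json.dumps({"name": "Mahatesh"})}, None)
```

Uploaded to Lambda, this same function would be configured with `module_name.lambda_handler` as its handler string, and AWS would supply the real `event` and `context` on each invocation.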

AWS Lambda allows us to design our application as smaller, modular functions, each responsible only for its intended purpose. It also eliminates the headache of managing and monitoring servers in production, without any worry about scalability.

Let me know in the comments if you found the post interesting or if there are any updates that need to be made. In the next post, let's try to build an application highlighting the use of Lambda functions.
