What is Sidecar pattern?

Sai Prasanth NG
Partner February 07, 2019
#systemarchitecture

Scenario

Let’s assume we have built the following microservices-based e-commerce application. All the services are written in the same language, say Ruby.

E-commerce application architecture

We want to push all the logs to a LogParser service. One option is to write a log_exporter.rb script that runs at regular intervals and pushes the contents of the log file to the LogParser service, and to place this script in each individual service.
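As a sketch, log_exporter.rb might look something like the following. The script name comes from the article; the LogParser endpoint, the environment variable names, and the offset-tracking logic are assumptions made for illustration.

```ruby
# log_exporter.rb — hypothetical sketch of the exporter script.
# Reads the service's log file and pushes new content to the
# LogParser service at a fixed interval.
require "net/http"
require "uri"

LOG_PATH   = ENV.fetch("LOG_PATH", "/var/log/app.log")
FREQUENCY  = Integer(ENV.fetch("FREQUENCY", "60")) # seconds between exports
PARSER_URI = URI(ENV.fetch("LOG_PARSER_URL", "http://log-parser.internal/logs"))

# Returns the content appended since the given byte offset,
# together with the new offset.
def read_new_lines(path, offset)
  return ["", offset] unless File.exist?(path)
  size = File.size(path)
  return ["", offset] if size <= offset
  content = File.open(path, "r") do |f|
    f.seek(offset)
    f.read
  end
  [content, size]
end

# Pushes a chunk of log content to the LogParser service.
def push_logs(content)
  Net::HTTP.post(PARSER_URI, content, "Content-Type" => "text/plain")
end

# Export loop: ship anything new, then sleep until the next cycle.
def run
  offset = 0
  loop do
    content, offset = read_new_lines(LOG_PATH, offset)
    push_logs(content) unless content.empty?
    sleep FREQUENCY
  end
end
```

The export loop is wrapped in `run` so the helper functions can be reused; an entrypoint would simply call `run`.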

As time passes, we realize that we need greater throughput for the Product service, since it is called many times to fetch the list of products, so we switch its implementation from Ruby to Go. Now we need another version of the log_exporter.rb script, written in Go. The problem is that as we use more languages in our microservices-based application, we have to maintain a version of the exporter script in each of them. If we later want to change the script, we have to change it in every language. Another problem is that the log-exporting logic lives inside the application, even though it has nothing to do with the application logic.

One way we can solve this problem is with the help of the sidecar pattern.

What is the sidecar pattern?

In this pattern, there are two containers. The first is the application container, which contains the main application logic; without it, the application doesn’t exist. The second is called the sidecar container, and the application container is usually unaware of its existence. The sidecar container provides additional functionality to the application.

Both the application container and the sidecar container are deployed in the same pod. Because of this arrangement, the two containers share a number of resources, such as the local file system and the network. The lifecycle of the sidecar container is tied to that of the application container.

Kubernetes pod with a sidecar container

Using sidecar pattern

The logs in our architecture are written to the local file system. We can move the log-exporting script into a separate container and deploy that container in the same pod as each application container, irrespective of the language the application logic is written in. Since the sidecar container runs in the same pod, it has access to the log files on the local storage.

Logs exporting sidecar
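Under these assumptions, a Kubernetes pod spec for this arrangement might look like the following. The image names, mount paths, and environment values are placeholders, not real artifacts from this article.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: product-service
spec:
  volumes:
    - name: log-volume          # shared scratch space for the log file
      emptyDir: {}
  containers:
    - name: app                 # main application container
      image: product-service:latest
      volumeMounts:
        - name: log-volume
          mountPath: /var/log/app
    - name: log-exporter        # sidecar: reads the shared log and ships it
      image: log-exporter:latest
      env:
        - name: LOG_PATH
          value: /var/log/app/app.log
        - name: FREQUENCY
          value: "60"
      volumeMounts:
        - name: log-volume
          mountPath: /var/log/app
```

The `emptyDir` volume is what gives the sidecar access to the application’s log file: both containers mount the same directory, so the application writes and the exporter reads from the same place.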

Advantages of using sidecar pattern

  • The functionality of the application container can be enhanced without tight coupling.
  • Every container has a single responsibility.
  • Even if something fails in the sidecar container, the application container is not affected.
  • We can reuse the sidecar container irrespective of the language the application logic is written in.
  • Because of its close proximity to the application container, latency should be low.

When to use sidecar pattern

  • When a feature or service must be co-located with the application on the same host.
  • When you want a service that shares the main application’s lifecycle but can be updated independently.

When not to use sidecar pattern

  • When the application container is small; a sidecar would add unnecessary complexity and cost.
  • When there is very little tolerance for latency in the communication between the sidecar and the application container, e.g. when logs must be exported without any delay.
  • When you want to scale the sidecar independently of the application container.

Points to keep in mind during implementation

Parameterize the containers

It is important to make the sidecar container modular and reusable. For example, in our log-exporting scenario, the container could take two parameters as input: the location of the log files and the export frequency. This helps reuse the container across different services.

docker run -e FREQUENCY=60 -e LOG_PATH=/path/to/log/file log-exporter

Creating and maintaining the API for the container

The API interface is used to configure the sidecar container, so it is important that it is well defined.

Any change to the API can be breaking or non-breaking. An example of a breaking change would be renaming the API parameter from “FREQUENCY” to “TIME_PERIOD”. An example of a change that breaks silently would be switching the unit of the frequency value from seconds to minutes: nothing would fail outright, but the sidecar would no longer behave as expected.

Documenting the operation of the container

Documenting the container helps other people understand how to use the sidecar. The best place to document it is the Dockerfile.

# The FREQUENCY value determines the interval in seconds at which the sidecar reads and uploads the logs
ENV FREQUENCY="60"
# The LOG_PATH value determines the location of the log file for the sidecar to read
ENV LOG_PATH="/path/to/log/file"

Examples

Adding HTTPS to a legacy service

With the help of the sidecar pattern, we can add HTTPS to a legacy service that, for some reason, we cannot migrate to the new build process. We run the legacy software bound only to localhost and add an NGINX sidecar. This sidecar container acts as a proxy, terminating HTTPS requests on behalf of the legacy service. Since the main container and the sidecar container are in the same pod, the sidecar can forward requests to the legacy service over localhost. Unencrypted traffic therefore exists only inside the pod’s internal network.

Sidecar container to add HTTPS to legacy application
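A minimal sketch of the sidecar’s NGINX configuration, assuming the legacy service listens on localhost port 8080 and a certificate has been mounted into the container (the port and paths are placeholders):

```nginx
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/tls/tls.crt;
    ssl_certificate_key /etc/nginx/tls/tls.key;

    location / {
        # Forward decrypted traffic to the legacy service, which
        # listens only on localhost inside the same pod.
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```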
 

Git sync

We can implement a sidecar that pulls the latest code from the master branch at regular intervals, and configure the main application server to automatically load the latest changes.

Sidecar container to sync the codebase
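The open-source kubernetes/git-sync project provides exactly such a sidecar. A sketch of a pod using it, where the repository URL, mount path, and image tag are illustrative and the flags follow git-sync v4:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  volumes:
    - name: code              # shared checkout of the repository
      emptyDir: {}
  containers:
    - name: app               # main application, serving from the shared checkout
      image: app-server:latest
      volumeMounts:
        - name: code
          mountPath: /app/src
    - name: git-sync          # sidecar: pulls the latest code periodically
      image: registry.k8s.io/git-sync/git-sync:v4.2.3
      args:
        - --repo=https://github.com/example/app
        - --ref=master
        - --period=60s
        - --root=/app/src
      volumeMounts:
        - name: code
          mountPath: /app/src
```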

Connecting to Google Cloud SQL

We can connect to Google Cloud SQL from an application running on Google Kubernetes Engine by adding the Cloud SQL Proxy Docker image as a sidecar. With this approach the proxy container runs in the same pod as the application, so the application can connect to the proxy over localhost, which improves security.
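A sketch of the containers section of such a pod spec, using the legacy v1 Cloud SQL Proxy image; the project, region, instance name, and port are placeholders:

```yaml
containers:
  - name: app
    image: my-app:latest
    env:
      - name: DB_HOST
        value: "127.0.0.1"    # the app talks to the proxy over localhost
      - name: DB_PORT
        value: "5432"
  - name: cloudsql-proxy      # sidecar: opens a secure tunnel to Cloud SQL
    image: gcr.io/cloudsql-docker/gce-proxy:1.33.2
    command:
      - /cloud_sql_proxy
      - -instances=my-project:us-central1:my-instance=tcp:5432
```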
