Contents:
Introduction
- Recreate Deployment
- Rolling Update Deployment
- Blue-Green Deployment
- Dark Deployments or A/B Deployments
- Canary Deployment
Introduction
Deploying applications on Kubernetes can be a daunting task, especially if you don’t have the right deployment strategy in place. However, with the right strategy, deploying applications on Kubernetes can be made more manageable and efficient. In this blog, we will discuss the best practices and deployment strategies for Kubernetes to help you deploy your applications successfully.

Recreate

The Recreate deployment strategy in Kubernetes is a simple strategy that involves replacing the existing instances of an application with new instances of the updated version all at once. This means that the entire application will be offline during the deployment process, which can result in downtime for users.
To use the Recreate deployment strategy, the deployment manifest needs to be updated to specify the new version of the application. Then, the updated manifest is applied to Kubernetes, which will terminate the existing instances of the application and create new instances of the updated version.
spec:
  replicas: 3
  strategy:
    type: Recreate
  template:
    ...
In contrast to other deployment strategies such as Rolling Updates and Canary Deployments, the Recreate strategy has no built-in capability for verifying that the updated application is functioning correctly before taking it live. This means that testing and quality assurance must be done beforehand to ensure the updated version is free of issues.
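For context, here is a sketch of what a complete Deployment manifest using this strategy might look like (the name, labels, and image tag are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app            # illustrative name
spec:
  replicas: 3
  strategy:
    type: Recreate        # all old Pods are terminated before new ones start
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:v2  # the updated version being rolled out
```

Applying an updated manifest like this causes Kubernetes to scale the old ReplicaSet to zero before creating the new one, which is exactly where the downtime window comes from.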
Rolling Updates

Rolling updates are a popular deployment strategy used in Kubernetes. This strategy allows for the application to be updated without any downtime. During a rolling update, Kubernetes updates the pods in the deployment one at a time, ensuring that the application is running throughout the deployment process, and there is no interruption in the user experience. Best practices for implementing rolling updates in Kubernetes include:
- Use Readiness Probes – Readiness probes allow Kubernetes to check the state of the pod before sending traffic to it, ensuring that only the healthy pods receive traffic.
- Gradual Rollout – Gradually rolling out updates allows you to monitor the update process and detect any issues that may arise, allowing you to take action before the update is fully rolled out.
...
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    spec:
      containers:
      - name: my-app
        readinessProbe:
          httpGet:
            path: /healthz
            port: 80
...
--- #OR ---
...
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
  template:
    ...
The strategy field defines the deployment strategy; in this case its type is RollingUpdate. The maxUnavailable field specifies the maximum number of Pods that can be unavailable during the update process, which is set to 1. The maxSurge field specifies the maximum number of Pods that can be created above the desired number of Pods, which is also set to 1.
With these settings, Kubernetes will replace the old Pods with new ones one at a time. This means that at any point during the deployment process, there will always be at least two instances of the application running. Once all the old instances have been replaced with the new ones, the Rolling Update is complete, and the application is fully updated with zero downtime.
Blue-Green Deployment

Blue-Green deployment is another popular deployment strategy that is used to ensure that there is zero downtime during the deployment process. This strategy involves deploying two identical environments, one blue and the other green. The blue environment represents the current stable version, while the green environment represents the new version. Best practices for implementing blue-green deployments in Kubernetes include:
- Automate Environment Switching – Automating the switching of environments ensures that the deployment process is fast and efficient, reducing the risk of errors during the process.
- Continuous Integration and Deployment (CI/CD) Pipeline – Implementing a CI/CD pipeline streamlines the deployment process, ensuring that updates are thoroughly tested and deployed seamlessly.
Kubernetes natively supports the RollingUpdate and Recreate deployment strategies but does not have a native blue-green deployment strategy. However, blue-green deployment can be implemented in Kubernetes using a combination of Services and Deployments.
The most common approach to implementing blue-green deployment in Kubernetes is to create two separate environments (blue and green) with identical application deployments behind different services. Traffic is directed to one of the environments through the corresponding service. When a new version of the application is ready to deploy, a new deployment is created in the inactive environment, and the corresponding service is updated to direct traffic to the new version. Once the new version has been fully tested and verified, traffic is shifted entirely to the new version by updating the service selector to point to the new environment.
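As a sketch of this manual approach (the names and label values here are illustrative), the active Service selects Pods by a version label, and switching traffic is simply a matter of changing that selector:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
    version: blue   # change to "green" to send all traffic to the new environment
  ports:
  - port: 80
    targetPort: 80
```

A command such as `kubectl patch service my-app -p '{"spec":{"selector":{"app":"my-app","version":"green"}}}'` then performs the cut-over, and changing the selector back gives you an equally fast rollback.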
There are also tools like Istio, Linkerd, and Flagger, which can automate the process of blue-green deployment and make it easier to implement. These tools use Kubernetes-native service meshes and allow for the automatic shifting of traffic between the two environments based on metrics and conditions set by the user.
Note: Here is an example of a Blue-Green strategy expressed as an Argo Rollouts Rollout resource. It is only meant to illustrate the concept; applying it requires the Argo Rollouts CRDs and the rollouts controller to be installed in the cluster, and the analysis template name below is illustrative.
apiVersion: argoproj.io/v1alpha1
kind: Rollout
...
strategy:
  blueGreen:
    activeService: my-app
    previewService: my-app-preview
    autoPromotionEnabled: false
    prePromotionAnalysis:
      templates:
      - templateName: smoke-tests
    postPromotionAnalysis:
      templates:
      - templateName: smoke-tests
The strategy field defines the deployment strategy; in this case it is blueGreen. The activeService field names the Service that routes live traffic to the active environment, which is set to my-app. The previewService field names the Service that routes test traffic to the preview environment, which is set to my-app-preview.
The prePromotionAnalysis section runs the referenced analysis templates against the preview environment before it is promoted to active, and postPromotionAnalysis runs them again after promotion. Setting autoPromotionEnabled to false means promotion must be triggered manually once verification passes.
With these settings, the new version is first brought up behind the preview Service while the old version continues to serve all live traffic. Once the new environment has been fully tested and its analysis has passed, the controller promotes it by re-pointing the active Service's selector at the new version, so all traffic moves to the new environment at once with zero downtime.
Dark Deployments or A/B Deployments

A dark deployment is another variation on the canary (which, incidentally, can also be handled by Flagger). The difference between a dark deployment and a canary is that dark deployments deal with features in the front end, whereas canaries deal with the backend.
Another name for dark deployment is A/B testing. Rather than launch a new feature for all users, you can release it to a small set of users. The users are typically unaware they are being used as testers for the new feature, hence the term “dark” deployment.
With the use of feature toggles and other tools, you can monitor how your users interact with the new feature: whether it converts them, whether they find the new UI confusing, and other such metrics.
Besides weighted routing, Flagger can also route traffic to the canary based on HTTP match conditions. In an A/B testing scenario, you’ll be using HTTP headers or cookies to target a certain segment of your users. This is particularly useful for front-end applications that require session affinity.
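As a sketch of such header-based routing (the header name and values here are illustrative), an Istio VirtualService can send only the requests carrying a specific header to the new version, while everyone else stays on the stable one:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
  - myapp.example.com
  http:
  - match:
    - headers:
        x-experiment-group:   # illustrative header set by the front end
          exact: "b"
    route:
    - destination:
        host: myapp
        subset: v2            # the variant under test
  - route:                    # default route for all other users
    - destination:
        host: myapp
        subset: v1
```

Because routing is decided per request by the header rather than by a random weight, the same user keeps hitting the same variant for as long as the front end keeps sending the header, which is what gives you session affinity.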
Canary Deployment

Canary deployment is a deployment strategy that involves rolling out the new version of an application to a small subset of users or traffic before rolling it out to the entire user base. Canary deployment allows you to test the new version in a live environment, ensuring that it works as expected before fully rolling it out. Best practices for implementing canary deployments in Kubernetes include:
- Monitor and Analyze Metrics – Monitoring and analyzing metrics during the canary deployment process helps you detect any issues and make informed decisions about the deployment process.
- Validation checks: Once the canary deployment passes the validation checks, it can be promoted to the production environment, and all users or traffic can be directed to the new version of the software.
Kubernetes does not have a native canary deployment strategy. However, it is commonly implemented in Kubernetes using tools like Istio, Linkerd, and Flagger, which use service meshes and automation to simplify the process. These tools allow for automatic shifting of traffic based on metrics and conditions set by the user, and they provide monitoring and alerting capabilities to quickly identify any issues with the new version.
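As a sketch of how such a tool drives the rollout (the fields below follow Flagger's Canary custom resource; the target name, thresholds, and step sizes are illustrative), Flagger can be told to shift traffic in 10% steps up to 50%, rolling back automatically if the success-rate metric degrades:

```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: myapp
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  service:
    port: 80
  analysis:
    interval: 1m        # how often metrics are checked
    threshold: 5        # failed checks before automatic rollback
    maxWeight: 50       # stop shifting at 50% canary traffic
    stepWeight: 10      # increase canary traffic by 10% per step
    metrics:
    - name: request-success-rate
      thresholdRange:
        min: 99         # roll back if success rate falls below 99%
      interval: 1m
```

With a resource like this in place, you trigger a canary simply by updating the target Deployment's image; the controller handles the traffic shifting, analysis, and promotion or rollback.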
Note: Here is an example YAML for a Canary deployment strategy. It is only meant to illustrate the concept; applying it requires the Istio CRDs and istiod to be installed in the cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-v1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      version: v1
  template:
    metadata:
      labels:
        app: myapp
        version: v1
    spec:
      containers:
      - name: myapp
        image: myapp:v1
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: regcred
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      version: v2
  template:
    metadata:
      labels:
        app: myapp
        version: v2
    spec:
      containers:
      - name: myapp
        image: myapp:v2
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: regcred
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
  - name: http
    port: 80
    targetPort: 80
  type: ClusterIP
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
  - myapp.example.com
  http:
  - route:
    - destination:
        host: myapp
        subset: v1
      weight: 90
    - destination:
        host: myapp
        subset: v2
      weight: 10
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: myapp
spec:
  host: myapp
  subsets:
  - name: v1
    labels:
      app: myapp
      version: v1
  - name: v2
    labels:
      app: myapp
      version: v2
This YAML creates two Deployments of the "myapp" container image (the stable v1 with two replicas and the canary v2 with one), a Service that selects Pods from both, and a VirtualService and DestinationRule that implement the canary deployment strategy using Istio. The VirtualService directs 90% of traffic to the stable subset (v1) and 10% of traffic to the new subset (v2), while the DestinationRule defines the subsets by the version labels on each Deployment's Pods.
Conclusion
Deploying applications on Kubernetes can be challenging, but with the right deployment strategy and best practices in place, it can be made more manageable and efficient. By implementing rolling updates, blue-green deployments, and canary deployments, you can ensure that your applications are deployed seamlessly and with minimal downtime.