Ocado sped up its time-to-market, Bloomberg achieved a 95 percent hardware utilization rate, and Amadeus moved all production workloads onto a single operating model. All of this was made possible by Kubernetes. Why do large companies need Kubernetes, and how does it take IT to the next level? Let’s figure it out!
According to RightScale research, the adoption of Kubernetes is skyrocketing. Successful companies such as Airbnb, The New York Times, J.P. Morgan, Adidas, Tinder, Spotify, and many others already use Kubernetes. This is not surprising: when launching digital services, even large businesses are forced to adopt a startup approach. They need to develop and bring new products to market quickly while maintaining the quality of their customer service; this is the only way to survive in a highly competitive environment. It is also important for companies to receive additional resources immediately when the load on their services increases, and not to overpay for them when user activity declines.
To cope with increasing server loads and maximize resource utilization, companies have turned to microservices and containerization. Kubernetes (K8s) is used to manage them. K8s is an orchestration tool (orchestration is a highly automated process for managing related entities such as groups of virtual machines or containers) that simplifies infrastructure management and makes it more flexible: applications can easily be moved between different clouds and on-premise environments. K8s optimizes deployment and delivery and provides high availability, disaster recovery, and container scalability.
Basic Kubernetes tasks are:
- deploying containers and performing the operations needed to maintain the required configuration: restarting stopped containers, moving them to free up resources for new containers, etc.;
- scaling and running multiple containers simultaneously across a large number of hosts;
- balancing multiple containers at startup. For this, Kubernetes uses an API to group containers logically, which makes it possible to define pools, set their placement, and distribute the load evenly.
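These tasks map directly onto Kubernetes objects. As a minimal sketch (the names, image, and replica count below are purely illustrative), a Deployment tells Kubernetes to keep a given number of container replicas running, restarting or rescheduling them as needed:

```yaml
# Hypothetical Deployment: keeps three replicas of a web container running.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3              # Kubernetes keeps exactly 3 pods running at all times
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web           # label used to group pods into logical pools
    spec:
      containers:
        - name: web
          image: nginx:1.25    # illustrative image
          ports:
            - containerPort: 80
```

If a pod crashes or its node fails, Kubernetes notices that fewer than three replicas exist and schedules a replacement automatically.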
Containers have taken over the IT industry and become ubiquitous in record time. They bring a wide range of advantages, including a consistent, lightweight runtime through OS-level virtualization, reduced maintenance expenses, and highly efficient application scaling. Moreover, containers enable you to package and deploy apps consistently across different infrastructures.
Benefits of K8s are:
- Service discovery and load balancing. Containers can run on their own IP addresses or use a common DNS name for an entire group. K8s can load balance and distribute network traffic to maintain a stable deployment.
- Automatic storage management. The user can specify which storage a deployment should use by default (local disks, an external cloud provider, etc.).
- Automatic rollout and rollback of changes. The user can make any additions to the running container configuration. If a change breaks the stability of the deployment, K8s rolls it back to a stable version on its own.
- Automatic resource allocation. Kubernetes itself allocates CPU and RAM across a cluster of nodes to provide each container with sufficient resources.
- Management of passwords and settings. K8s can securely store and handle confidential information related to running applications, such as passwords, OAuth tokens, and SSH keys. Depending on the application, data and settings can be updated without recreating the container.
- Self-healing when a failure occurs. Using health metrics and probes, the system quickly identifies damaged or unresponsive containers. Failed containers are recreated and restarted within the same Pod.
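Several of these benefits surface directly in a pod spec. In this hedged sketch (all names and values are illustrative), resource requests let the scheduler allocate CPU and memory, a liveness probe drives self-healing, and a Secret reference keeps credentials out of the container image:

```yaml
# Illustrative Pod spec touching resource allocation, self-healing, and secrets.
apiVersion: v1
kind: Pod
metadata:
  name: api
spec:
  containers:
    - name: api
      image: example/api:1.0        # hypothetical image
      resources:
        requests:
          cpu: "250m"               # scheduler reserves a quarter of a CPU core
          memory: "256Mi"           # and 256 MiB of RAM on some node
      livenessProbe:                # self-healing: restart the container if this fails
        httpGet:
          path: /healthz
          port: 8080
        periodSeconds: 10
      env:
        - name: DB_PASSWORD         # injected from a Secret, not baked into the image
          valueFrom:
            secretKeyRef:
              name: db-credentials  # hypothetical Secret name
              key: password
```

The probe path, resource figures, and Secret name are assumptions for illustration; real values depend on the application.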
Kubernetes is essential for continuous integration and software delivery, which is consistent with the DevOps approach. By “packaging” the software environment into a container, the microservice can be deployed very quickly on a production server, safely interacting with other applications.
Grit is a social commerce platform for creating photos, videos, and texts, sharing content, and following other Grit pages to sell products and services. When the company decided to build its own application, it faced an urgent need for more scalable infrastructure and a solid foundation for future development.
We resolved this concern by developing an on-premise Kubernetes cluster for the company. Thus, the future application built on K8s will have the following benefits:
- multi-cloud flexibility
- quick and reliable deployment
- cost/resource reduction
- limitless scalability.
One of the most valuable Kubernetes traits is that it is open-source, so deploying an application is equally easy in the cloud or on-premise. An app that runs on Kubernetes avoids the vendor lock-in that occurs when apps are tied to a single cloud provider, which means it can easily be moved between different Kubernetes clusters. For instance, if your product was deployed on AWS Elastic Beanstalk, you cannot migrate to Google instantly, since Google’s own App Engine is not compatible with Beanstalk.
Vendor lock-in is always a drawback: if a company is tied to one provider, migration becomes a challenge whenever prices for services increase. By deciding to deploy its product on Kubernetes, Grit preemptively took care of the vendor lock-in problem; and since the company is not tied to one particular service provider, it will also be much easier to find an engineer to fix problems when errors occur.
Note: recalling what happened to Parler, we can safely state that vendor lock-in is best avoided. After Amazon found 98 posts on the site that incited violence, it decided to remove the social network from its web hosting service. Parler had to move its infrastructure within one day, but it took two whole weeks to find another host and move its services. If the company had used Helm charts, which let the user launch apps in Kubernetes on any provider, it could have done that much faster.
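Helm makes this portability concrete by packaging an application’s Kubernetes manifests into a versioned chart. A minimal, illustrative chart descriptor (the names and versions here are assumptions, not a real project) looks like this:

```yaml
# Illustrative Chart.yaml for a hypothetical Helm chart.
apiVersion: v2
name: my-app          # hypothetical chart name
description: Packaged Kubernetes manifests for the application
version: 0.1.0        # chart version, bumped on each packaging
appVersion: "1.0.0"   # version of the application being deployed
```

Because the chart carries everything the app needs, installing it is a single `helm install` command regardless of whether the target cluster runs on AWS, Google Cloud, or on-premise hardware.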
Doxy is a simple, free, and secure telemedicine solution: a platform on which you can create your own private room and start practicing telemedicine. It originally ran on Docker Swarm, Kubernetes’ main competitor. The company decided to move to Kubernetes because it is more customizable and provides deployment options not available in Docker Swarm; this flexibility is one reason Kubernetes is the most widely used orchestration tool right now.
Migrating from Docker Swarm to Kubernetes is much easier than migrating from virtual servers. When it comes to financial matters, it all depends on your needs. If you have one application running on one server, transferring it to Kubernetes is unprofitable. With a large multi-server infrastructure, it makes sense to move to Kubernetes if your goal is to save money and make it easier to maintain.
Issues that Doxy was able to solve with the help of Kubernetes:
- better observability: it became easier to add monitoring of all app operations;
- apps are easier to maintain and scale;
- easy to support dozens of testing environments.
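Supporting dozens of testing environments in one cluster typically relies on namespaces, which carve the cluster into isolated environments. A hedged sketch (the environment name is hypothetical):

```yaml
# Illustrative: each testing environment lives in its own namespace,
# so the same app manifests can be applied once per environment.
apiVersion: v1
kind: Namespace
metadata:
  name: qa-feature-login   # hypothetical per-branch environment name
  labels:
    env: testing
```

Applying the same Deployment manifests with `kubectl apply -n qa-feature-login` then gives that branch its own isolated copy of the app, without extra servers.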
Another issue we resolved for Doxy was removing the third-party vendor that had been providing the company with HIPAA compliance solutions on top of AWS. By building its own HIPAA-compliant infrastructure on Kubernetes, Doxy was able to eliminate the middleman and therefore save money.
In the future, the company intends to add a multiregional setup so that the app is more widely available. This will speed up the application and make it compliant with local privacy laws.
Mvix provides end-to-end digital signage solutions. It initially ran on AWS Elastic Beanstalk, a tool for orchestrating virtual machines that lets teams control VMs, deploy and manage apps, and autoscale in the AWS cloud.
One of the reasons Mvix switched to Kubernetes was lengthy deployment times: deploying an application on Beanstalk takes 5-7 minutes, while in Kubernetes it takes around 1 minute. This is because provisioning a virtual machine takes much more time than starting a container.
The key reason for moving to Kubernetes was the need to assemble custom solutions (Docker images) that require specific PHP components. Since the developers had already used and tested the Docker images on their side, it was easier for us to perform the migration. In addition, we removed the version differences between app components (the local developer’s version versus the Beanstalk version), thus eliminating a source of bugs and errors.
The key possibilities that Kubernetes provides for Mvix:
- simple monitoring;
- unlimited scalability;
- logging aggregation;
- efficient system administration.
With Kubernetes on board, Mvix will be able to reduce the cost of app production and releases thanks to fuller use of resources, add more containers, and allocate internal funds more efficiently. Since the company is still growing and production requires more power, Kubernetes is exactly what it needs right now: it was created specifically for easy scaling up and down, meaning that at times of highest activity it can run more containers to serve more clients. With Kubernetes, this is much easier to accomplish than with any other solution on the market.
The client is an independent news firm that reports on and analyzes the top global biotech and pharmaceutical R&D news. The organization’s WordPress site was hosted by a provider for $4,500 a month. Even with the most expensive hosting services, the site still couldn’t handle the load during busy hours.
At peak hours (in this case, periods of high traffic volume), the load on IT services changes, and the system needs additional resources to cope with it. Kubernetes can automatically scale the IT system according to the application’s needs: the app gets the resources it needs almost instantly during peak periods and does not waste resources during less busy times.
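Automatic scaling of this kind is usually driven by a HorizontalPodAutoscaler. A hedged sketch (the target name and thresholds are illustrative, not the client’s actual configuration):

```yaml
# Illustrative autoscaler: grows the deployment under load, shrinks it when idle.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: wordpress
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: wordpress          # hypothetical Deployment to scale
  minReplicas: 2             # baseline capacity during quiet periods
  maxReplicas: 10            # ceiling for traffic spikes
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU exceeds 70%
```

Kubernetes continuously compares observed CPU usage against the target and adjusts the replica count between the two bounds, so capacity tracks traffic without manual intervention.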
The client reached out to us for a solution. They decided to create a system for auto-deployment of changes in three stages: development, staging/QA, and production. All of these operations are initiated manually by a manager, but pushing code to production and deploying changes is automated. We offered them a Kubernetes-based solution that matched the client’s needs. Thanks to increased efficiency, the company was able to reduce its monthly bill to $1,500. On top of that, the website now operates without errors regardless of the number of visitors, which was impossible with the previous server provider.
So, in this particular use case, Kubernetes allowed the company to achieve:
- improved efficiency
- endless scalability
- reduced monthly expenditures
The company no longer overpays for capacity it does not need, optimizing IT costs by improving utilization, and it does not risk losing customers because the application “freezes” as requests increase.
Kubernetes is the most advanced container orchestration tool available today. It speeds up the development process and shortens time to market, which is an undeniable advantage for any business. To deploy and manage your system efficiently, you need a trusted partner by your side. OpsWorks Co. makes modern technologies accessible even to small businesses. We always stay on top of the latest changes in the Kubernetes ecosystem and help both small and big businesses implement it!