Just a few years after its introduction, DevOps has grown from being a movement on the fringes of IT to a necessity for anyone in charge of making IT decisions. It’s the buzzy new kid on the block, but it’s also one of the rare examples of the thing living up to the hype.
Where DevOps really shines is in its potential for automation. In the past, critical and repetitive tasks such as provisioning environments, maintaining technology, and deploying applications were normally done by hand. But this is fast changing with the advent of true automation in DevOps.
This is where Configuration Management Tools come in, offering a way for automation software to handle these repetitive tasks, saving you and your team thousands of hours each year and, more importantly, eliminating issues caused by human error.
Configuration Management Explained
The need for efficiency and cost reduction is what generated the boom in configuration management (CM) tools in DevOps. At their core, configuration management tools are systems that make it easier and faster to put DevOps into practice.
There shouldn’t be any confusion here. DevOps is an approach or philosophy hinging on the marriage between software development and IT operations. It promotes communication, collaboration, and integration between teams from the two camps – Development and System Administration.
Configuration management tools come in by facilitating the execution of this methodology. Before the advent of DevOps and mature CM tools, sysadmins had to do provisioning on each machine and server, which was obviously very inefficient, tedious, and had a high chance of human error (e.g. configuration inconsistencies between development and production environments).
How Does Configuration Management Work?
To be more specific, configuration management involves installing and updating system packages and setting network configurations to make machines and servers ready for deployment.
The goal of CM tools is to maintain these systems in known, configured states. Configuration management also involves creating the description of the configured or desired states of these systems, and—as mentioned earlier—automating processes to maintain these desired states.
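As a rough illustration of this desired-state idea (this is a toy sketch of the concept, not the internals of any particular tool; the package and service names are invented for the example), the logic boils down to comparing a machine's current state against a described desired state and applying only the differences:

```python
# Toy sketch of desired-state configuration management: describe what a
# system SHOULD look like, compare it to what it currently looks like,
# and emit only the actions needed to converge the two.

desired_state = {
    "packages": {"nginx": "1.24", "openssl": "3.0"},
    "services": {"nginx": "running"},
}

def reconcile(current_state, desired_state):
    """Return the actions needed to move current_state to desired_state."""
    actions = []
    for pkg, version in desired_state["packages"].items():
        if current_state.get("packages", {}).get(pkg) != version:
            actions.append(f"install {pkg}={version}")
    for svc, status in desired_state["services"].items():
        if current_state.get("services", {}).get(svc) != status:
            actions.append(f"ensure {svc} is {status}")
    return actions

# A drifted machine gets only the fixes it needs...
print(reconcile({"packages": {"nginx": "1.22"}, "services": {}}, desired_state))
# ...while a machine already in the desired state gets none (idempotence).
print(reconcile(
    {"packages": {"nginx": "1.24", "openssl": "3.0"},
     "services": {"nginx": "running"}},
    desired_state))
```

Running the same reconciliation repeatedly is safe, which is exactly why CM tools can be run on a schedule to correct configuration drift.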
Why Use Configuration Management Tools?
Perhaps the biggest benefit of configuration management tools is their ability to create a consistent environment between operational systems and software. With CM tools, you no longer have to cross your fingers and hope a configuration is correct—the CM system will make sure it is correct.
And when combined with automation features, configuration management tools can dramatically improve efficiency, making it possible to configure even more targets with the same resources, sometimes even fewer.
And for growing organizations, configuration management allows you to scale your technology infrastructure and software systems without necessarily hiring more support staff to manage these systems. This means your infrastructure can grow without requiring you to spend more.
Configuration Management Tools We Trust
In this guide, we go over a few of the most popular configuration management tools in DevOps, providing a brief overview of each tool’s features and strengths, and how they stack up against the competition.
Puppet

Puppet is an open-source server automation platform for configuration and management. IT managers can use the tool to record system components, leverage a steady stream of new information, and build a comprehensive catalog of dependencies.
Puppet offers the technology to automate your entire enterprise, solving the problem of automation usually being siloed in one area or the other. The platform works on Windows, Linux, and Unix systems, allowing IT managers to perform a wide range of administrative tasks (e.g. adding new users, package installation, and updating servers) based on a centralized specification.
While Puppet itself is written in Ruby, most users work in Puppet’s own declarative language, which should feel familiar to anyone who has worked with JSON.
How Puppet Works
Puppet uses your selected configuration state, indicated by “manifests,” as a guide for auditing and regulating your IT environment.
Puppet delivers an automatic way to inspect, deliver, operate, and future-proof all of your software, no matter where it runs.
The Puppet approach allows users to maintain control, security, consistency, and compliance across their infrastructure, all while keeping their DevOps practice modern and efficient. Users define what their apps and infrastructure should look like using the Puppet declarative language, after which they can share, test, and enforce any changes across their cloud platforms and data centers.
Most observers describe Puppet as a tool built for sysadmins, with a more forgiving learning curve thanks to its model-driven design. Sysadmins who have spent most of their professional IT life at the command line should transition more quickly to the JSON-like data structures in Puppet’s manifests than to working in Ruby syntax.
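To make the manifest idea concrete, here is a minimal, illustrative sketch of what a Puppet manifest might look like; the specific resources (nginx, /etc/motd) are assumptions chosen for the example, not taken from this article:

```puppet
# Illustrative manifest: declare the desired state, and Puppet converges
# the node to it on every run.
package { 'nginx':
  ensure => installed,
}

service { 'nginx':
  ensure  => running,
  enable  => true,
  require => Package['nginx'],
}

file { '/etc/motd':
  ensure  => file,
  content => "Managed by Puppet\n",
}
```

Note that the manifest describes outcomes ("nginx should be installed and running") rather than steps, which is the declarative, model-driven style discussed above.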
We Use Puppet Because…
Puppet releases an annual “State of DevOps” report, which is widely hailed as one of the best, if not the best, resources for trends, insights, and predictions in the DevOps landscape. Puppet’s client list serves as a testament to the trust people place in the platform—names include NASA, Verizon, Intel, and Salesforce, among many others.
But Puppet also offers the ability to scale the automation of IT infrastructure according to the needs of small to medium enterprises and startups, all while providing the visibility and reporting you will need to make informed decisions and show compliance.
Chef

Along with Puppet, Chef is widely considered to be one of the most trusted and recognized CM software vendors in the market. While it appears to offer the same features as its closest competitor, Chef has its own unique features and strengths.
For starters, while Chef is also open-source, it leans more towards the needs of developer-centric DevOps teams. Chef’s configurations, called “recipes,” are very similar to Puppet’s “manifests,” except that users write procedural Ruby scripts instead of declarative state models.
The Chef approach also revolves around grouping different recipes into “cookbooks,” which can be downloaded through Chef’s active and thriving community hub, aptly named the “Supermarket.” Yes, there are a lot of food puns to go around.
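As a hedged illustration of the procedural, Ruby-based style described above, a recipe might look like the following; the package, template, and service names are invented for the example:

```ruby
# Illustrative Chef recipe (e.g. cookbooks/nginx/recipes/default.rb).
# Resources read declaratively, but the file is ordinary Ruby, so loops
# and conditionals are available.
package 'nginx'

# Plain Ruby iteration over a list of config templates.
%w(api.conf static.conf).each do |conf|
  template "/etc/nginx/conf.d/#{conf}" do
    source "#{conf}.erb"
    notifies :reload, 'service[nginx]'
  end
end

service 'nginx' do
  action [:enable, :start]
end
```

The ability to drop into full Ruby for logic like the loop above is a big part of Chef's appeal to development-minded teams.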
Another Chef claim to fame is its strong support for infrastructure as code (IaC), made possible by its procedural approach.
How Chef Works
Chef is written in Ruby, with a command-line interface that relies on a Ruby-based DSL. The Chef approach depends on a master-and-agent model, which means installing Chef requires a master server and a separate workstation to control it.
Agents can then be installed from the workstation using the platform’s “Knife” tool, which bootstraps nodes over SSH for faster installation and deployment.
We Use Chef Because…
Although Chef trails Puppet by around four years, it has still managed to develop a broad client base of elite firms, including Intuit, GE Capital, and Target, among many others.
If your firm is a Ruby on Rails shop, there’s a good chance you’ll love Chef for the ease of using Chef’s domain-specific language, which ensures that everyone on your technology team understands the code. Chef also integrates with a wide range of cloud providers, including, but not limited to, OpenStack, HP Cloud, Google Compute Engine, Joyent Cloud, Rackspace, IBM SmartCloud, VMware, and Amazon EC2.
Users can download any of the 3,000 cookbooks for IT automation on the Supermarket, which, while having a smaller spread than Puppet’s, should be useful enough for users. The Supermarket also contains plugins and tools, all of which will help users automate their IT processes and improve visibility.
Ansible

Although it’s a relatively new player in the field, Ansible has managed to gain a strong foothold in the DevOps landscape, making its way into top Linux distros like Fedora.
Like most configuration management and automation solutions, Ansible started out as an open-source project designed to automate IT environments and infrastructure. As it became more popular for enterprise settings, its parent company, AnsibleWorks, expanded into commercial applications.
At present, Ansible’s solutions come in two products:
• Ansible Engine
• Ansible Tower (features the Ansible UI and dashboard)
Ansible’s reputation as the new kid on the block doesn’t seem to matter to DevOps professionals, who praise the platform for its simple management features and straightforward operations. Indeed, many of Ansible’s users will agree the platform has a very forgiving learning curve.
How Ansible Works
Ansible’s features allow you to automate your configuration management, application deployment, and cloud provisioning among several other IT processes. Built with multi-tier deployments in mind, Ansible models your technology infrastructure and defines how your systems work with one another, instead of managing each system as a silo.
Ansible does this by mapping and connecting to your nodes, sending them “Ansible modules”—small programs written as resource models for the system’s configuration state. Ansible then executes these modules over SSH and removes them once done.
The platform’s library of modules can reside on any machine in your infrastructure. Ansible does not require agents, custom security software, servers, daemons, or databases. At most, all you need is a terminal program of your choice, a text editor, and perhaps a version control system to track content changes.
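To illustrate the agentless, push-based workflow described above, here is a hedged sketch of a playbook; the host group, package, and module arguments are assumptions chosen for the example:

```yaml
# Illustrative playbook (playbook.yml), run with `ansible-playbook`
# against the hosts in your inventory. No agents are needed; Ansible
# pushes modules such as `apt` and `service` over SSH and removes them
# when the run finishes.
- hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled at boot
      service:
        name: nginx
        state: started
        enabled: true
```

Because tasks are plain YAML and run in order, a playbook doubles as readable documentation of what the target hosts should look like.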
We Use Ansible Because…
As mentioned earlier, Ansible’s most lauded attribute is its simplicity and ease of learning, allowing users to get up to speed and start being productive right away. The platform is supported by clear and easy-to-follow documentation, allowing users to learn the logic and workflow of the Ansible approach in less time than you would spend on, say, Puppet or Chef.
Ansible does not have a dependency system, with tasks running sequentially, automatically stopping when encountering errors. In turn, this allows for faster and easier troubleshooting, especially in the beginning stages of implementing the platform in your organization.
And because Ansible was written in Python—unlike most CM tools on the market, which were built with Ruby—setting up the tool is fast and easy, thanks to Python being present by default on most Linux distros. Python also leans towards administration and scripting applications, so much so that most sysadmins are more likely to know Python over Ruby. Of course, Ansible modules to expand the tool’s functionality can be written in pretty much any language that returns data in JSON format.
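To illustrate that JSON contract loosely, the sketch below shows the shape of what a module reports back; this is a toy, not a complete Ansible module (real modules also parse the arguments Ansible passes in), and the fact name is invented for the example:

```python
# Toy sketch of the "module" contract: an Ansible-style module can be
# written in any language as long as it emits a JSON result. This one
# gathers a trivial "fact" and reports that nothing was changed.
import json
import platform

def run_module():
    result = {
        "changed": False,  # modules report whether they altered the system
        "ansible_facts": {"detected_os": platform.system()},
    }
    return result

if __name__ == "__main__":
    # Modules communicate by printing JSON to stdout.
    print(json.dumps(run_module()))
```

Since the only requirement is well-formed JSON on stdout, teams can extend Ansible in whatever language they already know.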
SaltStack

Like Ansible, SaltStack was written in Python as a response to growing dissatisfaction with Chef and Puppet’s restriction of users to Ruby, as well as their sluggish speed when it came to application deployment.
In many ways, SaltStack combines the best features of Chef and Ansible. It’s not just based on Python; it also lets DevOps sysadmins and pros write CLI commands in Python or its domain-specific language, PyDSL. Salt also uses a master server and deployable agents, referred to as “minions,” which control and communicate with identified servers.
The difference is that this communication happens over ZeroMQ at the transport layer, making it faster than the equivalent on Chef or Puppet. SaltStack makes it possible to have several master levels in a tiered arrangement, helping increase redundancy and improve load distribution. SaltStack also uses YAML config files, which are set as templates or packages.
How SaltStack Works
SaltStack’s features are designed for automating infrastructure and software environments that rely on cloud computing, virtualization, containerization, and connected devices. Through its “intelligent orchestration software,” Salt helps enterprise IT organizations secure and manage “all aspects of the software-defined data center” in an efficient manner. The software stands out for providing event-driven automation solutions, allowing you to scale and efficiently control your compute, network, and storage functions.
The company’s approach to infrastructure management focuses on the concept of high-speed communication between multiple systems, and how that speed is the key to unlocking new functionality. As such, SaltStack is all about multitasking across systems in an effort to identify and solve issues in an IT infrastructure. SaltStack’s foundation is its Remote Execution Engine, which establishes a high-speed, secure communication network for a fleet of systems. Salt adds to the functionality of this communication system with Salt States, its proprietary configuration management system.
We Use SaltStack Because…
For starters, SaltStack is open-source and is thus free to use (Apache license). There’s an enterprise subscription that appears to be node-based, but there’s nothing on the site indicating package- or edition-based pricing, unlike the other software solutions on this list. But even if you’d rather not pony up for a paid version, you still get all the pro features for free. SaltStack’s configuration style also has a forgiving learning curve (as is usually the case with Python).
Unlike Chef and Puppet, which demand configurations in Ruby-based syntax, SaltStack’s input and output configurations are consistent and very easy to read—it’s all simple YAML. Indeed, once you are past the setup stage, organization and control are pretty straightforward.
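As a hedged illustration of that plain-YAML style, a Salt state file might look like the following; the state IDs and file path are assumptions chosen for the example:

```yaml
# Illustrative Salt state file (e.g. /srv/salt/nginx.sls). Declarations
# are ordinary YAML, applied to minions with `salt '*' state.apply nginx`.
nginx:
  pkg.installed: []
  service.running:
    - enable: True
    - require:
      - pkg: nginx
```

Because the file is just YAML, any generic YAML tooling can lint it before it ever reaches a minion.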
You can even use YAML-parsing software to check the syntax of your configuration files. SaltStack also provides a top-down execution order in its configuration, something that has long been a source of frustration for sysadmins using Puppet, whose “manifests” depend on declarative execution. What usually happens is that sysadmins have to write dependencies between different executions, creating bloated config files and making manifests harder to troubleshoot. In contrast, SaltStack configs are imperative and execute from the top down—a huge help when porting bash scripts.
This also eliminates the need to write specific requirements for declarations, resulting in lighter config files. If your primary concerns about your IT infrastructure have always been scalability and resilience, SaltStack’s usability should appeal to DevOps sysadmins and pros. The Salt DSL is also feature-rich, but it isn’t required for writing states and logic.
As mentioned above, SaltStack is open-source and free to use. You will, however, have to contact Salt for pricing information about custom support and personalized automation solutions under its Enterprise product, although some users report that pricing is node-based at around $150 per node. According to Salt, SaltStack Enterprise “provides enterprise DevOps and IT operations organizations with the first enterprise-grade customer experience built on the powerful Salt open-source platform for IT automation and orchestration.”
Today’s DevOps systems administrators and professionals are often faced with the challenge of managing a large fleet of servers, often requiring some level of automation for tasks and processes that perform the same functions several times over.
Whether it’s installing and provisioning a new server, rebooting groups of servers at certain times of the day, or deploying one or multiple packages across specific sets of servers, the Configuration Management tools highlighted in this list make life a lot easier.
Of course, before you purchase any configuration management tool, it’s imperative that you understand its features and uses in relation to your project requirements.
But regardless of which configuration management tool you choose for your DevOps routine, any automation project you take on must first begin with evaluating your specific circumstances, needs, and existing IT infrastructure. If you’re automating inefficient processes or IT infrastructure your organization has yet to fully understand, you’re only buying a fast ticket to even more problems.
Bottom line? If you want to get the most out of the automation tools in this guide, always start with an IT infrastructure audit, which will ensure that you’ve mapped out and defused any landmines waiting to be tripped. Only then can your organization and DevOps teams reap the rewards of Configuration Management.