Infrastructure as Code: Everything You Need to Know

Infrastructure is one of the core pillars of any software development process: it is directly responsible for the stable operation of a software application. This infrastructure can range from servers, load balancers, firewalls, and databases all the way to complex container clusters.

Infrastructure considerations extend beyond production environments; they span the entire development process and include tools and platforms such as CI/CD platforms, staging environments, and testing tools. These considerations grow as the complexity of the software product increases. Very quickly, the traditional approach of manually managing infrastructure becomes unable to scale to the demands of modern, rapid DevOps development cycles. That is how Infrastructure as Code (IaC) has become the de facto solution in development today.

How To Reuse Your Ansible Roles To Build Docker Images With Packer

In this article, I will show you how to reuse your Ansible roles to build Docker images using Packer.
If you are like me, you have probably used Ansible for a while, and you have roles that you use to install and manage software on on-premises servers and VMs. Now you want to reuse those roles to build your own custom Docker images for development, testing, or production.

This is possible, and very useful, with only minor changes to the Ansible roles. With Docker, you start from a very minimal environment, so you cannot take for granted components that come preinstalled on major Linux distros.
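As a rough sketch of how the pieces fit together (the playbook path and image names here are my own invented examples, not the article's), a Packer template can pair the docker builder with the ansible provisioner. The inline shell step illustrates the "minimal environment" caveat: many base images lack the Python interpreter that Ansible needs, so it has to be installed first.

```json
{
  "builders": [
    {
      "type": "docker",
      "image": "ubuntu:18.04",
      "commit": true
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": ["apt-get update && apt-get install -y python"]
    },
    {
      "type": "ansible",
      "playbook_file": "./playbook.yml"
    }
  ],
  "post-processors": [
    [
      {
        "type": "docker-tag",
        "repository": "myorg/myimage",
        "tag": "latest"
      }
    ]
  ]
}
```

Running `packer build` on a template along these lines would apply your existing roles inside the container and commit the result as a tagged image.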

Externalizing Your Configurations With Vault for Scalable Deployments

Table of Contents:

  • Introduction
    1. The Solution
  • Setting Up Vault
    1. Creating API Admin Policy
    2. Creating Read-Only User Policy
    3. Creating a Token Attached to the API Read-Only Policy
  • 1. Linux Shell Integration
  • 2. Java Integration
  • 3. Python Integration
  • 4. Node.js Integration
  • 5. Ansible Integration
  • Conclusion

Introduction:

To automate microservices or applications deployed across a large number of systems, it becomes essential to externalize the configurations to a secure, scalable, centralized configuration store. This makes it possible to deploy the application to multiple environments with environment-specific parameters, without human intervention and without modifying the core application during automated deployments, scaling, and failover recoveries.

Besides carrying the risk of human error, manually managed configurations do not scale for 24x7 large-scale deployments, particularly when several instances of microservices are deployed across various infrastructure platforms.
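All of the language integrations listed above start from the same building blocks: a Vault policy and a token bound to it. Purely as an illustration (the path and policy name are assumptions, not taken from the article), a read-only policy for application secrets in Vault's KV v2 engine might look like this:

```hcl
# Hypothetical read-only policy (api-readonly.hcl) for application
# secrets stored under secret/api/ in Vault's KV version 2 engine.
# The "data/" segment is required by the KV v2 API path layout.
path "secret/data/api/*" {
  capabilities = ["read", "list"]
}
```

You would register it with `vault policy write api-readonly api-readonly.hcl` and issue a token with `vault token create -policy=api-readonly`; each deployed instance then uses that token to fetch its environment-specific parameters at startup.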

Cloud Factory: Common Architectural Elements

In our previous article in this series, we introduced a use case for a cloud factory: deploying multiple private clouds based on one code base using the principles of Infrastructure as Code.

We laid out how we approached the use case and how portfolio solutions form the basis for researching a generic architecture.

Simplifying ARM Template Deployments With Ansible

In my previous post, I discussed how you can use Ansible with Terraform to simplify configuration management. If you prefer Azure Resource Manager (ARM) templates over Terraform for defining your project's infrastructure and configuration, you can still use Ansible to manage the parameters that customize each environment by dynamically generating a Resource Manager parameters file.

A great thing about using Ansible for your ARM configuration needs is that it includes a suite of modules for interacting with Azure Resource Manager. You can install these modules using Ansible Galaxy, a repository of Ansible roles that you can drop directly into your playbooks.
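As a hedged sketch of that idea (the template file names, variables, and resource group naming are my assumptions, not the article's), a play could render a per-environment parameters file from a Jinja2 template and then feed it to the Azure collection's deployment module:

```yaml
# Hypothetical playbook: render an ARM parameters file per environment,
# then deploy the ARM template with the azure.azcollection module.
- hosts: localhost
  connection: local
  vars:
    env_name: staging
    vm_size: Standard_B2s
  tasks:
    - name: Generate the ARM parameters file from a Jinja2 template
      template:
        src: parameters.json.j2
        dest: "./parameters.{{ env_name }}.json"

    - name: Deploy the ARM template with the generated parameters
      azure.azcollection.azure_rm_deployment:
        resource_group: "rg-{{ env_name }}"
        name: "deploy-{{ env_name }}"
        template: "{{ lookup('file', 'azuredeploy.json') | from_json }}"
        parameters: "{{ lookup('file', 'parameters.' + env_name + '.json') | from_json }}"
```

Switching environments then only requires changing `env_name` (or passing it via `--extra-vars`); the ARM template itself stays untouched.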

How To Automate PostgreSQL and repmgr on Vagrant

I often get asked if it's possible to build a resilient system with PostgreSQL.

Considering that resilience should include cluster high availability, fault tolerance, and self-healing, there is no easy answer. But there is a lot to be said about this.

Install and Setup Docker Using Ansible on Ubuntu 18.04 (Part 2)

In the last guide, you learned how to install and configure Ansible on Ubuntu 18.04. Now, you will use Ansible to install and set up Docker on a remote machine. To begin this guide, you need the following:

  • One Ansible control node: a machine with Ansible installed and configured.
  • One or more Ansible hosts: at least one remote host running Ubuntu 18.04, with sudo permissions.

Please make sure that your Ansible control node can connect to your Ansible remote machines. To test the connection, you can use the ansible all -m ping command.
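The installation itself might be sketched as a playbook along the following lines; this is an illustrative outline (repository URL and package names follow Docker's standard Ubuntu instructions), not necessarily the exact playbook from the guide:

```yaml
# Hypothetical playbook sketch: install Docker CE on Ubuntu 18.04 hosts.
- hosts: all
  become: true
  tasks:
    - name: Install prerequisite packages
      apt:
        name: [apt-transport-https, ca-certificates, curl, software-properties-common]
        state: present
        update_cache: true

    - name: Add Docker's official GPG key
      apt_key:
        url: https://download.docker.com/linux/ubuntu/gpg
        state: present

    - name: Add the Docker APT repository (bionic = Ubuntu 18.04)
      apt_repository:
        repo: deb https://download.docker.com/linux/ubuntu bionic stable
        state: present

    - name: Install Docker CE
      apt:
        name: docker-ce
        state: present
        update_cache: true
```

Run it with `ansible-playbook -i your_inventory docker.yml`, and every host in the inventory ends up with the same Docker installation.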

NetApp and DevOps: How We Did It

For the last several years, NetApp has been on our own DevOps journey. We made the decision to adopt DevOps methodologies and technologies into our own systems. Led by our internal IT team, we've taken the steps to a software-defined, cloud-based data center with end-to-end DevOps workflow automation. So, how is it going and what can you learn from our experience?


Containers and Configuration: 3 DevOps Tools and Cheatsheets

These DevOps tools make every DevOps implementation easier.

Puppet

Puppet is one of the most widely used DevOps tools and one of the most popular configuration management tools in the IT world today. It makes delivering and releasing technology changes faster and more frequent, with features that support versioning, automated testing, and continuous delivery, and it can manage multiple servers and enforce system configuration.
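As a small illustration of that enforcement model (the service and file names are chosen for the example), a minimal Puppet manifest declares the desired state of an NTP service and lets the agent converge every managed server toward it:

```puppet
# Minimal illustrative manifest: keep ntp installed, configured,
# and running. Puppet re-applies this state on every agent run.
package { 'ntp':
  ensure => installed,
}

file { '/etc/ntp.conf':
  ensure  => file,
  source  => 'puppet:///modules/ntp/ntp.conf',
  require => Package['ntp'],
  notify  => Service['ntp'],
}

service { 'ntp':
  ensure => running,
  enable => true,
}
```

If someone stops the service or edits the config file by hand, the next agent run restores the declared state, which is exactly the "enforce system configuration" behavior described above.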



How to Deploy to Several Orchestrators and Be Happy (or Not)

Deploy to Kubernetes or Mesos? Why not both?

In the article “The Power of Abstraction,” we discussed why we need a deployment manifest for our applications. Now we are going deeper, and I will explain how we delivered our services to Mesos and Kubernetes at the same time.


I remember when I started advising ANNA Money; at that point, there was already a process for deploying an application to Mesos: an Ansible playbook that templated manifests for Marathon and Chronos. Using templates instead of writing manifests for each environment is a cool idea. However, something was broken in that process. Let's look at the standard manifest:
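Purely as an illustration (this is not necessarily the article's actual manifest; every value here is invented), a standard Marathon app definition of the kind being templated looks something like this:

```json
{
  "id": "/my-service",
  "cpus": 0.5,
  "mem": 256,
  "instances": 2,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "registry.example.com/my-service:1.0.0",
      "network": "BRIDGE",
      "portMappings": [{ "containerPort": 8080, "hostPort": 0 }]
    }
  },
  "healthChecks": [
    { "protocol": "HTTP", "path": "/health", "portIndex": 0 }
  ]
}
```

In the templated setup described above, values such as the image tag, instance count, and resource limits would be Jinja2 variables filled in per environment by the Ansible playbook.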

PostgreSQL Backup and Recovery Automation

A PostgreSQL database for a critical client contains valuable data, and PostgreSQL databases should be backed up regularly. The process is quite simple, but it is important to have a clear understanding of the techniques and assumptions involved.

SQL Dump

The idea behind this dump method is to generate, from DataCenter1, a text file of SQL commands that, when fed back to the DataCenter2 server, will recreate the database in the same state it was in at the time of the dump. In this case, if the Client cannot access the primary server, they can still access the BCP server. PostgreSQL provides the pg_dump utility program for this purpose. The basic usage of this command is: pg_dump dbname > backupoutputfile.db.
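Staying with this digest's Ansible theme, the same dump can be automated rather than run by hand. A hypothetical task sketch using the community.postgresql collection (the host group, database name, and target path are assumptions for the example):

```yaml
# Hypothetical playbook sketch: take a SQL dump on the DataCenter1
# primary; the resulting file can then be shipped to and restored on
# the DataCenter2 (BCP) server with psql.
- hosts: dc1_primary
  become: true
  become_user: postgres
  tasks:
    - name: Dump the database via pg_dump (state: dump)
      community.postgresql.postgresql_db:
        name: dbname
        state: dump
        target: "/var/backups/backupoutputfile.db"
```

Scheduling a play like this (for example, from cron or a CI job) turns the manual pg_dump step into a repeatable, auditable backup process.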

How to Monitor 1,000 Network Devices Using Sensu Go and Ansible (in Under 10 Minutes)

Network monitoring, at scale, is an age-old problem in IT. In this post, I'll discuss a brief history of network monitoring tools, including the pain points of legacy technology when it came to monitoring thousands of devices, and share my modern-day solution using Sensu Go and Ansible.

Then: Nagios and Multiple Network Monitoring Tools

I’ve spent the last ten years as a consultant for open-source monitoring architectures. During this time, I’ve seen many companies of every size, based all over the world, take very different approaches to implementing and migrating monitoring environments and tools. Especially in small and medium-sized businesses, the demand for a one-size-fits-all solution is high. While big companies often use more than one monitoring solution (the “best of breed” approach), this option is often untenable for smaller businesses due to financial constraints. And sometimes having multiple monitoring tools makes no sense at all: the more tools you have, the more things you have to keep track of.

Many IT organizations feel the best approach is to use one solution to monitor their entire infrastructure. This monolithic approach, with one tool that offers many functions such as monitoring, root cause analysis, visualization, and reporting, is not wrong as such. But when there are requirements like the need to scale, true interoperability in big environments, and reduced dependency on a single tool and vendor, it is better to separate these requirements and run a single interconnected solution following the microservices approach.

Deploy a Production-Ready Kubernetes Cluster Using kubespray

What is Kubespray?

Kubespray provides a set of Ansible roles for Kubernetes deployment and configuration. It is an open-source project with an open development model that can target bare-metal machines as well as Infrastructure-as-a-Service (IaaS) platforms. The tool is a good choice for people who already know Ansible, as there's no need to learn another tool for provisioning and orchestration.

Environment Configuration

Below is the sample environment configuration, where we will install a Kubernetes cluster with three master and three worker nodes.
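A kubespray inventory for such a layout might be sketched as follows; all hostnames and IPs are placeholders, and the inventory group names vary between kubespray versions (older releases use `kube-master`/`kube-node`):

```ini
# Hypothetical kubespray inventory (hosts.ini) for three master
# and three worker nodes. Masters also run the etcd cluster here.
[all]
node1 ansible_host=10.0.0.1
node2 ansible_host=10.0.0.2
node3 ansible_host=10.0.0.3
node4 ansible_host=10.0.0.4
node5 ansible_host=10.0.0.5
node6 ansible_host=10.0.0.6

[kube_control_plane]
node1
node2
node3

[etcd]
node1
node2
node3

[kube_node]
node4
node5
node6

[k8s_cluster:children]
kube_control_plane
kube_node
```

With the inventory in place, the whole cluster is deployed by running kubespray's cluster playbook against it with ansible-playbook.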

Verifying Infrastructure, Using Ansible With Very Little Knowledge of It

Introduction

Ansible has been a popular tool for verifying Linux-based infrastructure environments. Using the setup module makes this extremely easy, as it demands very little knowledge of Ansible itself.

An infrastructure environment, whether provisioned in a public or private cloud or built on bare-metal machines in a data center, needs some amount of verification before applications are deployed, however ephemeral that infrastructure may be. Typically, such verification is needed during the various stages of provisioning and configuring a cluster of machines, and later when that cluster is available as the infrastructure for running software systems.
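One simple form such verification can take (an illustrative sketch, with thresholds chosen arbitrarily) is asserting conditions against the facts Ansible gathers automatically, which requires almost no Ansible knowledge beyond running a playbook:

```yaml
# Hypothetical verification play: gather facts and assert minimum
# requirements on every host before applications are deployed.
- hosts: all
  tasks:
    - name: Verify OS and available memory meet requirements
      assert:
        that:
          - ansible_distribution == "Ubuntu"
          - ansible_memtotal_mb >= 4096
        fail_msg: "Host does not meet the minimum requirements"
```

A failing host makes the play stop with a clear message, so misconfigured machines are caught before any application reaches them.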