Versioning in The World of Containers

Versioning is the act of assigning a label to a software package (anything from a single file to a complex application). You are surely used to the Maven version scheme, and maybe you have heard about Semantic Versioning. But anything that labels software units in an ordered way can be understood as versioning: timestamps, commit ID trees, or just numbers or strings.

One may question the need for versioning. If you only need the latest “state” of your product, you are right: you don’t need versioning at all. Sadly, you will be missing out on a lot of interesting capabilities:

Integration Key to Experience: API Management Details (Part 4)

In my previous article from this series, we started diving into the details that determine how your integration becomes the key to transforming your customer experience.

It started by laying out how I’ve approached the use case: researching successful customer portfolio solutions as the basis for a generic architectural blueprint. Now it’s time to cover the various blueprint details.

AWS Networking Overview, Part 2

In this three-part series, we take a deep dive into the Kubernetes Pod networking options on Amazon and provide a bit of guidance around the various trade-offs involved in selecting a particular Kubernetes network technology for your cluster on Amazon. Please see here for Part 1.

The other part of understanding networking on Kubernetes running on Amazon is the underlying Amazon network technology. AWS started out with a simple flat network, but due to customer demand for segmented networks, and to provide a more fully featured network implementation, it now offers the VPC (Virtual Private Cloud).

OWASP ServerlessGoat: Learn Serverless Security By Hacking and Defending

Deliberately vulnerable applications have gained popularity in recent years as a way to learn and demonstrate application security concepts. Years ago, OWASP launched the WebGoat project, which has since become the gold standard and to this day remains one of the most popular platforms for teaching web application security.

The Open Web Application Security Project (OWASP) recently launched the serverless counterpart to WebGoat, named ServerlessGoat, which was contributed by serverless security vendor PureSec.

Secure Docker in Production

Are you using Docker for development and testing but have not yet taken the step of using it in production? Then read on, because in this blog post we will take a look at how you can ensure that you run your Docker containers in a secure way.

The CIS Benchmark

The default Docker installation does not provide enough security for use in production. Neither do the numerous examples of Dockerfiles you can find on the web. Even the Dockerfiles in some of our previous blog posts are not production ready. So how do we know what to do in order to run our Docker containers in a secure way?

This brings us to the Center for Internet Security (CIS). The CIS provides best practices for securing IT systems and data against attacks. These best practices are identified and verified by a community of experienced IT professionals. In our case, we will take a look at the CIS Benchmarks page, where we find a long list of benchmarks for operating systems, devices, and software. Within this list, the CIS Benchmark for Docker Community Edition 1.1.0 is available. It is freely downloadable, but you do need to provide your contact details, after which a download link is sent to your email address. This will also give you access to the other CIS benchmarks.

How to Setup Docker Private Registry on Ubuntu 16.04

Introduction

Docker Private Registry is a highly scalable server-side application that can be used to store and distribute Docker images internally within your organization. Docker also has its own public registry (Docker Hub) that allows you to store Docker images, but the images you upload to Docker Hub become public: anyone can access and use them, so it is not the best option for your organization. A Docker Private Registry allows you to set up a Docker registry for your project privately, so that only your organization can store and use the images on it. Using a private registry, you can easily control your images, fully own your image distribution pipeline, and integrate image storage and distribution tightly into your in-house development workflow. If you want to quickly deploy a new image over a large cluster of machines, then a Docker Private Registry is the best solution for you.

In this tutorial, we will explain how to set up your own Docker Private Registry server on an Alibaba Cloud Elastic Compute Service (ECS) instance with Ubuntu 16.04.

How AWS Control Tower Lowers the Barrier to Enterprise Cloud Migration

Scaling AWS cloud migration to the enterprise just got a whole lot less scary. Let’s give a warm welcome to AWS Control Tower.

If there was one announcement at AWS re:Invent 2018 that made us do the happy dance for our enterprise clients, it was the announcement of AWS Control Tower. It’s a new central control center that gives enterprises a single jump-off point for managing multiple AWS accounts across teams, departments, and international borders.

15 Useful Helm Charts Tools

Helm is one of the best things about Kubernetes. (Which is why we talk about it in great depth here.) Rather than setting up an entirely new environment and configuring each kube object manually, you can now use Helm and Helm Charts, the templates for different Kubernetes setups, to automate 90% of the work. For more on Helm Charts and how they’re designed to be flexible and robust, don’t forget to check out our Spotlight on Helm articles first.

Helm is made even stronger with the help of a huge community of developers around it. Devs have found Helm Charts extremely useful, so they’ve begun developing tools, add-ons, and plugins for specific functions to enhance it further. Here is a compilation of some of the best Helm Charts tools you can use today.

Integration Key to Experience: External Application Details (Part 3)

In my previous article from this series, we took a high-level view of the common architectural elements that determine how your integration becomes the key to transforming your customer experience.

That article laid out the process of how I’ve approached the use case and how I’ve used successful customer portfolio solutions as the basis for researching a generic architectural blueprint. The only thing left to cover was the order in which you’ll be led through the blueprint details.

Serverless And Startups: Remodeling The Startups Landscape

What is Most Important For Startups?

Cost Effectiveness + Flexibility + Strong Technology Reliance + Powerful Idea = a Successful Startup!

Over time, there has been a massive change in software development methodologies. The technology stack is continuing to evolve with more powerful and advanced models, like intelligent customer assistance, cloud technology, machine learning, and immersive experiences. In order to achieve "more with less," a number of enterprises and startups are choosing serverless technologies.

The R’s of Migration

There are many ways by which you can migrate your applications to the cloud. In this blog, we will go over different strategies that any company can leverage in order to migrate their production workloads to the cloud.

Before the migration phase, it is essential to take stock of the current environment: its dependencies, types of servers and applications, licenses, and much more.

Keycloak Cluster Setup With kube_ping

Keycloak is open source software that provides single sign-on with identity management and access management. Keycloak uses different types of pings to discover the other members of a cluster. We are going to use kube_ping as the discovery protocol (JGROUPS_DISCOVERY_PROTOCOL).

How kube_ping Works

Let's assume we launch a cluster of 3 pods in Kubernetes in the default namespace. When discovery starts, kube_ping asks Kubernetes for a list of the IP addresses of all pods in that namespace.

Grafana Cloud Dropwizard Metrics Reporter

Time series metrics reporting and alerting are essential tools when it comes to monitoring production services. Graphs help you monitor trends over time, identify spikes in load/latency, identify bottlenecks with constrained resources, etc. Dropwizard Metrics is a great library for collecting metrics and has a lot of features out of the box, including various JVM metrics. There are also many third-party library hooks for collecting metrics on HikariCP connection pools, Redis client connections, HTTP client connections, and many more.

Once metrics are being collected, we need a time series data store as well as a graphing and alerting system to get the most out of our metrics. This example will be utilizing Grafana Cloud, which offers cloud-hosted Grafana, a graphing and alerting application that hooks into many data sources, as well as two options for time series data sources: Graphite and Prometheus. StubbornJava has public-facing Grafana dashboards that will continue to add new metrics as new content is added. Take a look at the StubbornJava Overview dashboard to start with.
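To make the reporting pipeline concrete, here is a minimal sketch of wiring a Dropwizard Metrics registry, with a couple of the JVM metric sets mentioned above, to a Graphite-compatible endpoint such as one hosted by Grafana Cloud. The hostname, port, and metric prefix below are placeholders, and Grafana Cloud’s actual ingestion endpoint and authentication details come from your account’s data source settings, so treat this only as an illustration of the library’s API, not of StubbornJava’s exact setup.

import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.Timer;
import com.codahale.metrics.graphite.Graphite;
import com.codahale.metrics.graphite.GraphiteReporter;
import com.codahale.metrics.jvm.GarbageCollectorMetricSet;
import com.codahale.metrics.jvm.MemoryUsageGaugeSet;

import java.net.InetSocketAddress;
import java.util.concurrent.TimeUnit;

public class MetricsBootstrap {
    public static void main(String[] args) throws Exception {
        MetricRegistry registry = new MetricRegistry();

        // Register the out-of-the-box JVM metric sets.
        registry.register("jvm.gc", new GarbageCollectorMetricSet());
        registry.register("jvm.memory", new MemoryUsageGaugeSet());

        // Placeholder endpoint: replace with your hosted Graphite host/port.
        Graphite graphite = new Graphite(new InetSocketAddress("graphite.example.com", 2003));

        GraphiteReporter reporter = GraphiteReporter.forRegistry(registry)
                .prefixedWith("myapp.prod")            // hypothetical metric prefix
                .convertRatesTo(TimeUnit.SECONDS)
                .convertDurationsTo(TimeUnit.MILLISECONDS)
                .build(graphite);
        reporter.start(10, TimeUnit.SECONDS);          // ship metrics every 10 seconds

        // Example application metric: time a unit of work.
        Timer requests = registry.timer("http.requests");
        try (Timer.Context ctx = requests.time()) {
            Thread.sleep(100);                         // stand-in for real work
        }
    }
}

From there, the reported series (for example jvm.gc.* or http.requests.*) can be graphed and alerted on from a Grafana dashboard.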

Getting Started With DynamoDB and Spring

DynamoDB is a NoSQL database provided by AWS. In the same way as MongoDB or Cassandra, it is very well suited to boosting horizontal scalability and increasing development speed.

Main Features

  • Fully managed NoSQL.
  • Document or Key-Value.
  • Scales to any workload. DynamoDB supports auto-scaling, so the throughput adapts to your actual traffic.
  • Fast and consistent.
  • Provides access control.
  • Enables Event Driven Programming.

Components

  • Tables. Catalog
  • Items. Group of attributes
  • Attributes. Data elements
  • Partition Key. Mandatory, Key-Value access pattern. Determines data distribution
  • Sort Key. Optional. Model 1:N relationships. Enables rich query capabilities
DynamoDB Components

Guidelines

  • Understand the use case.
    • Nature of the application.
    • Define the E/R Model
    • Identify the data life cycle (TTL, Backups…).
  • Identify the access patterns.
    • Read/Write workloads.
    • Query dimensions.
  • Avoid relational design patterns, and instead, use one table to reduce round trips and simplify access patterns. Identify Primary Keys and define indexes for secondary access patterns.
  • Select a strong Partition Key with a large number of distinct values. Do not use things like Status or Gender. Use UUID, CustomerId, DeviceId...
  • Aim for items that are uniformly requested and randomly distributed across partitions.
  • Select Sort Keys that model 1:n and n:n relationships.
  • Use efficient and selective patterns for Sort Keys. Query multiple entities at the same time to avoid many round trips.

The official DynamoDB documentation for best practices provides more information.
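To ground the Partition Key and Sort Key guidance above, here is a minimal, hypothetical sketch using the AWS SDK for Java’s DynamoDBMapper (the Spring wiring itself can be layered on top and is not prescribed here). The Orders table, attribute names, and key choices are invented for illustration: customerId acts as the Partition Key with many distinct values, and orderId acts as the Sort Key modeling the 1:N customer-to-orders relationship.

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.datamodeling.*;

import java.util.List;

// Hypothetical entity: one item per order, grouped under the owning customer.
@DynamoDBTable(tableName = "Orders")
public class Order {

    private String customerId;
    private String orderId;
    private Double total;

    @DynamoDBHashKey(attributeName = "customerId")   // Partition Key
    public String getCustomerId() { return customerId; }
    public void setCustomerId(String customerId) { this.customerId = customerId; }

    @DynamoDBRangeKey(attributeName = "orderId")     // Sort Key
    public String getOrderId() { return orderId; }
    public void setOrderId(String orderId) { this.orderId = orderId; }

    @DynamoDBAttribute(attributeName = "total")
    public Double getTotal() { return total; }
    public void setTotal(Double total) { this.total = total; }

    // Query all orders for one customer in a single round trip.
    public static List<Order> ordersFor(String customerId) {
        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.defaultClient();
        DynamoDBMapper mapper = new DynamoDBMapper(client);

        Order probe = new Order();
        probe.setCustomerId(customerId);

        DynamoDBQueryExpression<Order> query = new DynamoDBQueryExpression<Order>()
                .withHashKeyValues(probe);
        return mapper.query(Order.class, query);
    }
}

With this shape, fetching all of a customer’s orders is a single query against the Partition Key, which is exactly the “one table, few round trips” access pattern described in the guidelines.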

Tick Tock…6 Months Until SQL Server 2008/2008 R2 Support Expires Unless You Take Action

If you are still running SQL Server 2008/2008 R2, you have probably heard by now that as of July 9, 2019, these versions will no longer be supported. However, realizing that there are still a significant number of customers running on this platform who will not be able to upgrade to a newer version of SQL Server before that deadline, Microsoft has offered two options to provide extended security updates for an additional three years.

The first option you have requires the annual purchase of “Extended Security Updates.” Extended Security Updates cost 75% of the full license cost annually and also require that the customer is on active Software Assurance, which is typically 25% of the license cost annually. So effectively (75% plus 25%, roughly the price of a full license each year), to receive Extended Security Updates you are paying for new SQL Server licenses annually for three years, or until you migrate off SQL Server 2008/2008 R2.

Spring Cloud and Spring Boot, Part 2: Implementing Zipkin Server For Distributed Tracing

In my last article, you learned how to implement a Eureka Server for service discovery and registration. In this article, you will learn about one more important feature of microservices: Distributed Tracing.

What is Distributed Tracing?

Distributed Tracing is crucial for troubleshooting and understanding microservices. It is very useful when we need to track a request as it passes through multiple microservices. Distributed Tracing can also be used to measure the performance of individual microservices.
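As a rough sketch of what a standalone Zipkin server can look like when bootstrapped from a Spring Boot application, assuming the zipkin-server and zipkin-autoconfigure-ui dependencies are on the classpath (note that the package of @EnableZipkinServer has moved between Zipkin releases, and newer Zipkin versions recommend running the stock server jar or Docker image instead of embedding it):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import zipkin.server.internal.EnableZipkinServer;

// Boots an embedded Zipkin server; instrumented microservices can then
// report spans to it (Zipkin listens on port 9411 by default).
@SpringBootApplication
@EnableZipkinServer
public class ZipkinServerApplication {
    public static void main(String[] args) {
        SpringApplication.run(ZipkinServerApplication.class, args);
    }
}

Client services instrumented with Spring Cloud Sleuth would then typically report spans to this server, for example by pointing the spring.zipkin.base-url property at http://localhost:9411.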

Getting Started With Amazon’s New Well-Architected Tool

On November 29, 2018, Amazon introduced the Well-Architected Tool. With the help of this tool, AWS users can assess their planned architectures against the latest AWS architecture best practices. In addition, AWS users can get guidance on improving their present application architectures. The tool is based on the increasingly popular Well-Architected Framework, which helps users build secure, high-performance, resilient, and efficient AWS-based solutions. It provides a consistent approach for users to evaluate planned and existing architectures, and it offers guidance to help implement designs that scale in accordance with application needs.

Importantly, with the help of this tool, users get insights into potential security risks and can identify steps to address those risks via the Well-Architected Framework. The tool guides the user through a series of questions and answers covering different aspects of the five pillars of the AWS Well-Architected Framework (namely, operational excellence, security, reliability, performance efficiency, and cost optimization) and at the end provides a set of recommendations for the architecture. The Well-Architected Tool is free for all users but is currently only available in US East (N. Virginia), US East (Ohio), US West (Oregon), and Europe (Ireland).