Docker Swarm, Kubernetes’s Clever Little Brother

Is Kubernetes Suitable for Any Container-Based Project?


There is no doubt that Kubernetes is one of the most talked-about technologies in the domain of cloud and containers. Kubernetes provides a complete solution for managing containers, but there are cases where it is not the best choice.

Distributed Transactions and Microservices Still Don’t Mix

I’m talking as someone who has actually implemented multiple distributed transaction systems. People moving to microservices are now discovering many of the challenges and hurdles of distributed systems, and it is only natural to want to go back to the cozy transactional world, where you can reason about things properly.

This post is in response to this article: Microservices and distributed transactions, which I read with interest, because it isn’t often that a post refutes its own premise with its very first statement.

Accelerate Software Testing by Sharing Test Assets Across Dev and Test Teams

While the whole shift-left concept is indeed incredibly valuable, you can accelerate testing to keep up with development by simply reducing rework across functional testing and improving collaboration across teams — that is, if you have the right tool.

As 2019 continues, I have been reflecting on the thousands of conversations I had over the last year with QA professionals, test engineers, and managers. Last year, especially its final quarter, was dominated by conversations about accelerating testing, in particular how to align testing strategies with development. So I had the distinct pleasure of showing many people how to reduce rework across functional and nonfunctional testing with Parasoft SOAtest, improving collaboration across teams while accelerating testing to keep up with development.

Towards a Unified Data Processing Framework: Batch as a Special Case of Streaming With Apache Flink

The Apache Flink project has long followed the philosophy of taking a unified approach to batch and stream data processing, building on the core paradigm of “continuous processing of unbounded data streams.” If you think about it, offline processing of bounded data sets fits this paradigm naturally: these are just streams of recorded data that happen to end at some point in time.
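To make “batch as a special case of streaming” concrete, here is a minimal conceptual sketch in TypeScript rather than Flink’s actual APIs; the Event type, process function, and boundedSource helper are invented for illustration. The point is that the same processing loop works whether or not the input ever ends.

    // Conceptual sketch (not Flink's API): one processing loop for both
    // unbounded and bounded inputs. A "batch" is simply a stream whose
    // async iterator eventually completes.
    type Event = { key: string; value: number };

    // Unified processing: consume events one at a time, updating state as we go.
    async function process(source: AsyncIterable<Event>): Promise<Map<string, number>> {
      const runningSums = new Map<string, number>();
      for await (const event of source) {
        runningSums.set(event.key, (runningSums.get(event.key) ?? 0) + event.value);
      }
      // For a bounded source this loop ends and we return a final result;
      // for an unbounded source it never ends, and the state is the
      // continuously updated "result so far".
      return runningSums;
    }

    // A bounded source: a recorded data set replayed as a stream that ends.
    async function* boundedSource(records: Event[]): AsyncGenerator<Event> {
      for (const record of records) {
        yield record;
      }
    }

    // Running the same logic over a bounded input behaves like a batch job.
    process(boundedSource([
      { key: "a", value: 1 },
      { key: "b", value: 2 },
      { key: "a", value: 3 },
    ])).then((sums) => console.log(sums)); // Map { "a" => 4, "b" => 2 }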

Flink is not alone in this: other open source projects, such as Apache Beam, embrace the same “streaming first, with batch as a special case of streaming” approach, and this philosophy has often been cited as a powerful way to greatly reduce the complexity of data infrastructures by building data applications that generalize across real-time and offline processing.

The API Economy and Why It Matters to Your Business

First, Let’s Define an API

An API (or “Application Programming Interface”) is a software intermediary for an application or service that enables other applications or services to send it requests and receive responses to those requests. The API defines the terms of the request and response, such as the structure of the data, the data required, the protocol, and the security settings.

An API makes it easier to integrate applications and services because it forms a “contract” governing the communication between them. This gives developers certainty when integrating systems, and it can also enable larger monolithic services to be broken down into smaller independent services with defined interfaces.
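As a sketch of what such a contract looks like in code, here is a small, hypothetical TypeScript example; the endpoint URL, field names, and types are invented for illustration, but they show how a contract pins down the required data, the response structure, and the protocol.

    // Hypothetical contract for an order-lookup API: the types spell out
    // the request and response structure both sides agree on.
    interface OrderRequest {
      orderId: string;                               // data required by the service
    }

    interface OrderResponse {
      orderId: string;
      status: "pending" | "shipped" | "delivered";   // allowed values are part of the contract
      totalCents: number;                            // agreed unit: integer cents
    }

    // The caller only needs to know the contract, not the service's internals.
    async function getOrder(req: OrderRequest): Promise<OrderResponse> {
      const res = await fetch(`https://api.example.com/orders/${req.orderId}`, {
        method: "GET",
        headers: { Accept: "application/json" },     // protocol details are also agreed upfront
      });
      if (!res.ok) {
        throw new Error(`Order lookup failed: HTTP ${res.status}`);
      }
      return (await res.json()) as OrderResponse;
    }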

React.js Tutorial: Let’s “Hook” Up

With the release of React 16.8 in early 2019, the React team introduced the concept of “Hooks.” In this post, we are going to explore the reasons behind creating Hooks and how to use them in a React application.

In React, we can create two types of components: functional (traditionally stateless) components and class (stateful) components. Hooks let functional components manage state as well, as the sketch below shows.
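Here is a minimal taste of a Hook in a functional component, using useState; the Counter component and its markup are illustrative, not taken from the original tutorial.

    // A minimal functional component that uses the useState Hook to hold state.
    import React, { useState } from "react";

    function Counter() {
      // useState gives the functional component a piece of state:
      // the current value and a setter that triggers a re-render.
      const [count, setCount] = useState(0);

      return (
        <div>
          <p>You clicked {count} times</p>
          <button onClick={() => setCount(count + 1)}>Click me</button>
        </div>
      );
    }

    export default Counter;

With Hooks, this kind of stateful behavior no longer requires converting the component into a class.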