Google Cloud Pub/Sub – Overview

Introduction

Google Cloud Pub/Sub is a fully managed, real-time messaging service that lets our applications and services send and receive messages independently of one another. By integrating applications asynchronously, it helps us build robust, scalable systems; the service provides scalability and resilience and handles millions of messages simultaneously.

Why Google Cloud Pub/Sub?

Google Cloud Pub/Sub can be used for a lot of use cases. In general, if you want to process large amounts of data for analytics, or you want to simplify event-driven microservices development, then Google Cloud Pub/Sub is the right choice.
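As a quick sketch of the publish side, here is what sending a message with the official Python client (the google-cloud-pubsub package) typically looks like. The project and topic names are placeholders, and running it requires GCP credentials:

```python
import json


def encode_event(event: dict) -> bytes:
    """Pub/Sub message bodies are bytes, so JSON-encode the payload."""
    return json.dumps(event, sort_keys=True).encode("utf-8")


def publish_event(project_id: str, topic_id: str, event: dict) -> str:
    """Publish one message and return its server-assigned message ID."""
    from google.cloud import pubsub_v1  # pip install google-cloud-pubsub

    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path(project_id, topic_id)
    future = publisher.publish(topic_path, encode_event(event))
    return future.result()  # blocks until the publish succeeds


if __name__ == "__main__":
    # Hypothetical project and topic names.
    print(publish_event("my-project", "orders", {"order_id": 42}))
```

The publisher never knows who consumes the message; subscribers attach to the topic independently, which is exactly the asynchronous decoupling described above.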

Generate Google Cloud API Credentials [Video]

This blog is a quick walkthrough for downloading GCP API credentials (keys). Credentials are at the core of any cloud computing service, and different public cloud service providers use different types of credentials to connect to their services through APIs. So, without much ado, here are the steps required to generate API keys for GCP (Google Cloud Platform).

If you feel too lazy to read this blog, here is a 100-second step-by-step video on generating API keys for GCP.

Reference Architecture: Deploying WSO2 API Manager on Microsoft Azure

Introduction

WSO2 is a software engineering organization with over 15 years of history that provides a set of open-source products and platforms for API Management, Enterprise Integration, and Identity and Access Management.

To meet current industry demands, all WSO2 products can be deployed on any of the infrastructure choices below:

Introduction to Google BigQuery

It is incredible to see how much businesses rely on data today. 80% of business operations run in the cloud, and almost 100% of business-related data and documents are now stored digitally. In the 1960s, money made the world go around, but in today’s markets, “Information is the oil of the 21st century, and analytics is the combustion engine.” (Peter Sondergaard, 2011)

Data helps businesses gain a better understanding of processes, improve resource usage, and reduce waste; in essence, data is a significant driver to boosting business efficiency and profitability.

GCP Cost Management Best Practices

One of the advantages of building your infrastructure through cloud providers is that you can scale up and down as per your business’ resource demands and pay only for the services that you leverage. However, if you don’t keep an eye on which services are running and monitor your billing and performance regularly, you may end up incurring unnecessary and inefficient business costs.

At GCP, the platform’s solution architects work closely with customers to support Google Cloud cost optimization and keep expenses under control. There are several simple tools and tricks for avoiding unnecessary costs on GCP; it’s simply a matter of knowing what to track and where. In this article, we will discuss Google Cloud cost management and uncover how you can optimize performance and resources to get the best usage output.

Introduction To Google Anthos

Introduction

Google has put over a decade’s worth of work into formulating the newly released Anthos. Anthos is the bold culmination and expansion of many admired container products, including Linux Containers, Kubernetes, GKE (Google Kubernetes Engine), and GKE On-Prem. It has been over a year since the general availability of Anthos was announced, and the platform marks Google’s official step into enterprise data center management.

Anthos is the first next-generation multi-cloud platform designed by a mainstream cloud provider. The platform’s unique selling point lies in its ability to deploy applications across multiple environments, whether on-premises data centers, Google Cloud, other clouds, or even existing Kubernetes clusters.

The Theory and Motive Behind Active/Active Multi-Region Architectures

The date was December 24th, 2012, Christmas Eve. The world’s largest video streaming service, Netflix, experienced one of the worst incidents in company history: an outage of video playback on TV devices for customers in Canada, the United States, and the LATAM region. The root cause was a disruption in AWS’s Amazon Elastic Load Balancer service. Fortunately, the enduring efforts of responders at Netflix and AWS managed to restore service just in time for Christmas. If one were to think about the events that ensued at Netflix and AWS that day, it would be comparable to all those save-Christmas movies that we all love to watch around that time of year.

The idea of incident management stems from the ubiquitous fact that incidents will happen. This is no secret, and it was best immortalized by Amazon VP and CTO Werner Vogels when he said, “Everything fails all the time.” It is, therefore, understood that things will break, but the question that persists is: can we do anything to mitigate the impact of these inevitable incidents? The answer is, of course, yes.

Alexa and Kubernetes: Deploying the Alexa Skill on Google Kubernetes Engine (IX)

Now, we have everything prepared and ready to go to a Kubernetes cluster in a cloud provider. Creating a cluster in any cloud provider manually is a difficult task, and if we want to automate the deployment, we need something to help us with this tedious work. In this article, we will see how to create a Kubernetes cluster and all of its required objects, deploying our Alexa Skill with Terraform using Google Kubernetes Engine.
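To give a taste of the Terraform side, here is a deliberately minimal sketch of a GKE cluster definition using the Terraform Google provider. The project ID, region, and names are placeholders, and a real deployment adds node pool, networking, and the Kubernetes objects for the skill itself:

```hcl
provider "google" {
  project = "my-project"   # hypothetical project ID
  region  = "europe-west1"
}

# A minimal managed cluster; Terraform creates and destroys it
# with `terraform apply` / `terraform destroy`.
resource "google_container_cluster" "alexa_skill" {
  name               = "alexa-skill-cluster"
  location           = "europe-west1"
  initial_node_count = 2
}
```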

Pre-Requisites

Here, you have the technologies used in this project:

Upload Files to Google Cloud Storage with Python

Google Cloud is a suite of cloud-based services, just like AWS from Amazon and Azure from Microsoft. AWS and Azure dominate the market, but Google is not far behind: Google Cloud Platform, or GCP, is the third-largest cloud computing platform in the world, with a share of 9%, closely followed by Alibaba Cloud.

Amazon undoubtedly leads the market with a share of 33%, but GCP is showing tremendous growth, with a whopping 83% growth rate in 2019. GCP leads AWS on the cost front, though: Google has fewer services to offer but maintains its position as one of the most cost-effective cloud platforms.
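As a preview of the article's topic, uploading a file with the official Python client (the google-cloud-storage package) roughly looks like this. The bucket and file names are placeholders, and running it requires GCP credentials:

```python
def gcs_uri(bucket_name: str, object_name: str) -> str:
    """Fully qualified URI of an object in Cloud Storage."""
    return f"gs://{bucket_name}/{object_name}"


def upload_file(bucket_name: str, source_path: str, object_name: str) -> str:
    """Upload a local file to a bucket and return its gs:// URI."""
    from google.cloud import storage  # pip install google-cloud-storage

    client = storage.Client()  # uses GOOGLE_APPLICATION_CREDENTIALS
    blob = client.bucket(bucket_name).blob(object_name)
    blob.upload_from_filename(source_path)
    return gcs_uri(bucket_name, object_name)


if __name__ == "__main__":
    # Hypothetical bucket and file names.
    print(upload_file("my-bucket", "report.csv", "backups/report.csv"))
```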

How to Choose a Public Cloud Platform — the Right Questions to Ask

First of all, who am I? I am an engineer, a software architect (not an ivory-tower one), a tool builder, a thinker, and a huge fan of common sense. And I wanted answers to a few questions that I think many IT and business leaders, technology adoption decision-makers, and engineers have.

Unfortunately, many of us don't know what the right questions are to start with. Many of us, unfortunately, get influenced in one way or another by million-dollar (if not billion-dollar) marketing stunts that try to sell "basically bullshit" low-calorie fluff in nice-looking packages.

JobRunr + Kubernetes + Terraform

In this new tutorial, we will build further upon our first tutorial — Easily process long-running jobs with JobRunr — and deploy the JobRunr application to a Kubernetes cluster on the Google Cloud Platform (GCP) using Terraform. We then scale it up to 10 instances for a whopping 869% speed increase compared to a single instance!

This tutorial is a beginner's guide to cloud infrastructure management. Feel free to skip to the parts that interest you.

Kubernetes, also known as K8s, is the hot new DevOps tool for deploying highly available applications. Today, many providers support Kubernetes, including the well-known Google Kubernetes Engine (GKE), Azure Kubernetes Service (AKS), and Amazon Elastic Kubernetes Service (EKS).

Azure, AWS, and GCP: A Multicloud Service Cheat Sheet

This article will help you map between Azure, AWS, and Google Cloud solutions. If you are considering a multicloud architecture, it's important to understand the differences between each provider's suite of services.

Cloud Services Comparison

AI and Machine Learning

SageMaker

Testing Serverless Applications Like a Pro

Like any other application, serverless applications need continuous testing to ensure the quality of the product. Testing a serverless application is not drastically different from testing a regular application. In traditional application testing, we configure an environment similar to production in the development area and test against it. But when dealing with serverless providers, you cannot simulate the exact production environment. In this article, let's talk about testing serverless applications and the alterations we should make to the normal testing process.

Serverless Applications: Unit Testing

In unit testing, we test each unit of code individually, without involving third-party code and services. In the serverless context, unit testing is pretty much the same as in traditional application testing, so you can use the same test frameworks, such as Jasmine, Mocha, and Jest, when testing serverless applications. In the serverless concept, most of the complexity lies in the serverless functions and their integrations, so the unit testing effort for a serverless application is comparatively low.
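A minimal sketch of the idea, using a hypothetical Lambda-style handler in Python: the data store is injected so a unit test can replace it with a mock and never touch a cloud service.

```python
import json
from unittest.mock import Mock


def make_handler(db):
    """Factory: inject the data store so tests can swap in a mock."""
    def handler(event, context):
        user = db.get_user(event["user_id"])
        return {"statusCode": 200,
                "body": json.dumps({"name": user["name"]})}
    return handler


# The unit test exercises the handler with a mocked store:
# no cloud provider, no deployment, just plain Python.
db = Mock()
db.get_user.return_value = {"name": "Ada"}
response = make_handler(db)({"user_id": "u1"}, None)
assert response["statusCode"] == 200
assert json.loads(response["body"]) == {"name": "Ada"}
```

The same pattern works with Jasmine, Mocha, or Jest in JavaScript: keep the function's dependencies injectable and mock them at the boundary.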

Top Secrets Management Tools Compared

As apps become more complex in the way they use microservices, managing API keys and other secrets becomes more challenging as well. Microservices running in containers need to transfer secrets to allow them to communicate with each other. Each of those transfers, and each of the secrets being exchanged, needs to be secured properly for the entire system to remain secure.

Hard-coding API keys and other secrets is definitely NOT an option. Despite the obvious nature of the previous statement, a lot of developers still expose the credentials of their microservices or apps on GitHub. Fortunately, there are tools designed to make managing secrets easier. We are going to compare the best secrets management tools in this article.
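The minimum-viable alternative to hard-coding, which every tool in this comparison builds upon, is reading secrets from the environment at runtime. A small sketch (the variable name is a placeholder):

```python
import os


def get_secret(name: str) -> str:
    """Fetch a secret injected into the environment; fail loudly if absent."""
    try:
        return os.environ[name]
    except KeyError:
        raise RuntimeError(f"secret {name} is not set") from None


if __name__ == "__main__":
    # In production, a secrets manager injects this variable; the value
    # never lives in source code or in the repository.
    print(get_secret("PAYMENTS_API_KEY"))
```

Dedicated secrets managers go further, adding encryption at rest, access control, rotation, and audit logging on top of this basic pattern.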