Deploy RAPIDS on GPU-Enabled Virtual Servers on a Virtual Private Cloud

Learn how to set up a GPU-enabled virtual server instance (VSI) on a Virtual Private Cloud (VPC) and deploy RAPIDS using IBM Schematics.

The GPU-enabled family of profiles provides on-demand, cost-effective access to NVIDIA GPUs. GPUs accelerate compute-intensive workloads such as artificial intelligence (AI), machine learning, and inferencing. To use the GPUs, you need the appropriate toolchain, such as CUDA (Compute Unified Device Architecture), installed and ready.
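Once the toolchain is in place, a quick check like the following confirms that RAPIDS can see the GPU. This is a minimal sketch, assuming the NVIDIA driver, CUDA, and the RAPIDS cuDF package have already been installed on the VSI:

```python
# Minimal sketch: verify the GPU and CUDA toolchain are usable from Python
# before running RAPIDS workloads. Assumes the NVIDIA driver, CUDA, and the
# RAPIDS cudf package are already installed on the virtual server instance.
import subprocess

import cudf  # RAPIDS GPU DataFrame library

# Confirm the NVIDIA driver can see the GPU
print(subprocess.run(["nvidia-smi", "-L"], capture_output=True, text=True).stdout)

# Run a tiny GPU-accelerated aggregation with cuDF
df = cudf.DataFrame({"x": range(1000)})
print(df["x"].sum())
```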

Extend VPC Instances with Cloud Functions, Activity Tracker with LogDNA, and Schematics

Let's start with a simple question: reserving a floating IP for one or two VSIs sounds easy, but what about tens of VSIs provisioned in your Virtual Private Cloud (VPC)? Have you ever thought of auto-assigning a floating IP on the fly whenever a new VSI is provisioned in your VPC?

In this post, you will use the IBM Cloud Activity Tracker with LogDNA service to track how users and applications interact with IBM Cloud Virtual Private Cloud (VPC). You will then create a view and an alert in Activity Tracker with LogDNA that filters VSI-creation logs. The matching logs are passed to an IBM Cloud Functions Python action as JSON, and the action reserves a floating IP for the newly provisioned VSI (instance) using the instance ID in that JSON payload.
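The action itself can be quite small. The following is a hypothetical sketch of such a Python action, assuming the Activity Tracker alert forwards the VSI-creation event as the action's JSON parameters, that an IAM access token and region are bound to the action, and that the field names used to reach the network interface ID match the actual log schema:

```python
# Hypothetical sketch of the Cloud Functions Python action.
# Assumptions: "iam_token" and "region" are bound action parameters, and the
# incoming event exposes the new VSI's primary network interface ID at the
# path shown below (adjust to the real LogDNA alert payload).
import requests

def main(params):
    token = params["iam_token"]                     # assumed bound parameter
    region = params.get("region", "us-south")
    # Assumed path into the forwarded log payload
    nic_id = params["instance"]["primary_network_interface"]["id"]

    # IBM Cloud VPC API: create a floating IP bound to the VSI's network interface
    url = f"https://{region}.iaas.cloud.ibm.com/v1/floating_ips"
    body = {
        "name": "fip-" + nic_id[:8],
        "target": {"id": nic_id},
    }
    resp = requests.post(
        url,
        params={"version": "2021-06-22", "generation": "2"},
        headers={"Authorization": f"Bearer {token}"},
        json=body,
        timeout=30,
    )
    resp.raise_for_status()
    return {"floating_ip": resp.json().get("address")}
```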

How AWS App Mesh Redefines Applications

If you are interested in Amazon Web Services and its feature set, AWS App Mesh and its applications are worth understanding. App Mesh is now generally available and provides networking at the application level: it connects the compute services that make up an application so they can communicate with each other. App Mesh also standardizes how those services communicate, which helps deliver high application availability and end-to-end visibility.

More about AWS

To understand how Amazon Web Services works with respect to these compute services, note that most of them run on EC2 or Fargate. As the number of services within an application grows, it becomes harder for both developers and users to locate errors. When a service fails, it also becomes difficult to reroute traffic while keeping the rest of the application intact.
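App Mesh tackles the rerouting problem by letting you declare weighted routes between versions of a service instead of changing the services themselves. As a minimal sketch (not from the original article), the boto3 call below shifts a small share of traffic to a new virtual node; the mesh, router, node names, and weights are all hypothetical:

```python
# Minimal sketch: shift traffic between two versions of a service with an
# App Mesh weighted route, using boto3. Assumes the mesh "demo-mesh", the
# virtual router "web-router", and the virtual nodes "web-v1"/"web-v2"
# already exist; all names and weights here are hypothetical.
import boto3

appmesh = boto3.client("appmesh")

appmesh.create_route(
    meshName="demo-mesh",
    virtualRouterName="web-router",
    routeName="web-canary",
    spec={
        "httpRoute": {
            "match": {"prefix": "/"},
            "action": {
                "weightedTargets": [
                    {"virtualNode": "web-v1", "weight": 90},  # keep most traffic on v1
                    {"virtualNode": "web-v2", "weight": 10},  # canary the new version
                ]
            },
        }
    },
)
```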