AWS VPC Sharing Model for Multiple Accounts

As more organizations adopt cloud computing, managing multiple AWS accounts and virtual private clouds (VPCs) can become complex and challenging. When it comes to managing network resources in AWS, there are two main approaches: using a dedicated VPC or a shared VPC. Each approach has its own pros and cons, and choosing the right approach depends on your specific use case and requirements.

AWS VPC sharing is one approach that allows you to share VPC resources across multiple AWS accounts, simplifying network management and reducing costs. In this blog post, we'll explore VPC sharing, its benefits, use cases, and the shared VPC model.

Safeguard Your Code With GitHub Dependabot and IBM Cloud Toolchain

Have you ever wondered whether attackers could take advantage of vulnerabilities in your code and exploit them in different ways, such as selling or sharing exploits, creating malware that destroys your functionality, or launching targeted cyber attacks? Most of these attacks happen through known vulnerabilities present in the code, called CVEs, which stands for Common Vulnerabilities and Exposures.

In 2017, a malicious ransomware attack, WannaCry, wrought havoc by infiltrating over 300,000 computers in more than 150 nations. The attackers exploited a flaw in the Microsoft Windows operating system, which had been assigned a CVE identifier (CVE-2017-0144), to infect the computers with the ransomware. The ransomware encrypted users’ files and demanded a ransom payment in exchange for the decryption key, causing massive disruptions to businesses, hospitals, and government agencies. The attack’s total cost was estimated to have been in the billions of dollars.
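Dependabot catches these known CVEs by scanning a project's dependency manifests. As a minimal sketch, a `.github/dependabot.yml` file enabling weekly update checks might look like the following (the ecosystem and schedule are placeholder values to adapt to your project):

```yaml
# .github/dependabot.yml -- minimal example configuration
version: 2
updates:
  - package-ecosystem: "npm"   # ecosystem to scan (e.g. npm, maven, pip)
    directory: "/"             # location of the dependency manifest
    schedule:
      interval: "weekly"       # how often Dependabot checks for updates
```

With this in place, Dependabot opens pull requests when a dependency has a known vulnerability or a newer version.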

Front-End: Cache Strategies You Should Know

Caches are very useful software components that all engineers should know. Caching is a cross-cutting concern that applies across tech areas and architecture layers such as operating systems, data platforms, backend, frontend, and other components. In this article, we are going to describe what a cache is and explain specific use cases focusing on the frontend and client side.

What Is a Cache?

A cache can be defined, in basic terms, as an intermediate memory between the data consumer and the data producer that stores and serves data that will be accessed many times by the same or different consumers. It is a transparent layer for the data consumer: nothing about usage changes except improved performance. Usually, the reusability of the data provided by the producer is the key to taking advantage of a cache's benefits. Performance is the other reason to use a cache system, such as an in-memory database, to provide a high-performance solution with low latency, high throughput, and concurrency.
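This transparency between consumer and producer is easy to see with a memoizing cache. A minimal Python sketch (the simulated database call and its latency are made up for illustration):

```python
from functools import lru_cache
import time

@lru_cache(maxsize=128)
def fetch_user_profile(user_id):
    # Stand-in for an expensive call to the data producer (e.g. a database).
    time.sleep(0.05)
    return {"id": user_id, "name": f"user-{user_id}"}

fetch_user_profile(42)                    # first call: cache miss, slow path
fetch_user_profile(42)                    # second call: served from the cache
info = fetch_user_profile.cache_info()
print(info.hits, info.misses)             # 1 hit, 1 miss
```

The consumer calls the same function either way; the cache layer is invisible to it except for the speedup on repeated access.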

Configuring the Security Plug-In/Custom Security Providers for WebLogic Resource Protection

WebLogic Server is a Java-based application server, and it provides a platform for deploying and managing distributed applications and services. It is a part of the Oracle Fusion Middleware family of products and is designed to support large-scale, mission-critical applications.

WebLogic Server provides a Security Framework that includes a default Security Provider, which provides authentication, authorization, and auditing services to protect resources such as applications, EJBs, and web services. However, you can also use security plug-ins or custom security providers to extend the security framework to meet your specific security requirements. Here is a brief explanation of the security plug-ins and custom security providers in WebLogic Server:

Revolutionizing Drug Discovery with Generative AI

Generative AI refers to a class of artificial intelligence models that are capable of creating new data samples resembling the original data they were trained on. These models learn the underlying patterns and distributions of the data, enabling them to generate novel instances with similar properties. Some popular generative AI techniques include Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and transformer-based language models.

In the context of drug discovery, generative AI has emerged as a powerful tool in recent years, offering a more efficient and effective approach to identifying and optimizing new drug candidates. By leveraging advanced techniques like GANs and VAEs, researchers can explore vast chemical spaces, predict molecular properties, and accelerate the drug development process. In this article, we'll delve into the use of generative models in drug discovery, providing code snippets to demonstrate their implementation.
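As a taste of what "generating novel instances with similar properties" means, here is a deliberately tiny, illustrative Python sketch: a first-order character-level Markov chain over a handful of SMILES strings. This is not a GAN or VAE (those require a deep-learning framework and large training sets), but it shows the core generative idea of learning statistics from data and sampling new sequences:

```python
import random
from collections import defaultdict

# Tiny illustrative corpus of SMILES strings (ethanol, acetic acid, benzene).
# A production system would train a GAN/VAE on millions of structures.
corpus = ["CCO", "CC(=O)O", "c1ccccc1"]

# Learn character-transition frequencies (a first-order Markov chain).
transitions = defaultdict(list)
for smiles in corpus:
    padded = "^" + smiles + "$"          # start/end markers
    for a, b in zip(padded, padded[1:]):
        transitions[a].append(b)

def sample(max_len=20, rng=random.Random(0)):
    """Sample a new character sequence from the learned transitions."""
    out, ch = [], "^"
    while len(out) < max_len:
        ch = rng.choice(transitions[ch])
        if ch == "$":                    # end marker: stop sampling
            break
        out.append(ch)
    return "".join(out)

print(sample())
```

Every sampled string is built only from transitions seen in the corpus, which is the toy analogue of a generative model staying on the learned data distribution. A real pipeline would additionally check chemical validity of the generated structures, for example with a cheminformatics toolkit such as RDKit.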

Step-By-Step Tutorial: Installing Eclipse IDE

Eclipse IDE (Integrated Development Environment) is a popular and powerful tool for software development. It provides a wide range of features and plugins that make it a go-to choice for many developers. 

Below is a video showing the steps for installing Eclipse IDE.

Leveraging IoT for Water Resource Management

Water is the most precious resource on the planet. But unfortunately, in the last decade, water scarcity has consistently increased. As per UNICEF, at least four billion people face severe water scarcity for at least one month each year. On top of that, half of the world’s population could be living in water-scarce areas by 2025.

Though there are numerous causes for this, improper usage, distribution, and management lead to the most water wastage. We know that conventional water resource management practices like rainwater harvesting, rooftop irrigation, etc., save a lot of water. But they cannot detect a leaking water supply pipe in the middle of a city.

Designing and Developing WebSphere Cells, Clusters, and Nodes

WebSphere is a powerful application server that can be configured and managed using a hierarchical structure of cells, clusters, and nodes. Understanding these concepts is essential for deploying, managing, and scaling WebSphere-based applications.

A WebSphere cell is a logical grouping of one or more application servers that share a common administrative domain. Each cell has a unique name and is managed by a single administrative console. A cell can contain multiple clusters and nodes.

Prometheus AWS Exporter and Grafana

The main purpose of this article and use case is to scrape AWS CloudWatch metrics into the Prometheus time series and to visualize the metrics data in Grafana. Prometheus and Grafana are among the most powerful, robust open-source tools for collecting, monitoring, and visualizing the performance metrics of applications deployed in production. Beyond collecting metrics, these tools give greater visibility: we can set up critical alerts, live views, and custom dashboards. CloudWatch Exporter is an open-source tool that captures metrics as defined in a YAML configuration file.

Architecture

The CloudWatch Exporter will collect the metrics from AWS CloudWatch every 15 seconds (by default) and expose them as key-value pairs in the '/metrics' API response. The exporter's '/metrics' endpoint should then be added to the Prometheus configuration as a scrape job. Prometheus allows us to define the scraping frequency, so we can adjust the frequency of calls to CloudWatch to tune the cost.
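As a sketch, the two configuration pieces involved might look like this (the region, namespace, metric, and hostname are example values; 9106 is the CloudWatch Exporter's default port):

```yaml
# config.yml -- CloudWatch Exporter: which metrics to capture
region: us-east-1
metrics:
  - aws_namespace: AWS/EC2
    aws_metric_name: CPUUtilization
    aws_dimensions: [InstanceId]
    aws_statistics: [Average]

# prometheus.yml -- scrape job pointing at the exporter's /metrics endpoint
scrape_configs:
  - job_name: cloudwatch
    scrape_interval: 60s        # lower frequency = fewer CloudWatch API calls
    static_configs:
      - targets: ['cloudwatch-exporter:9106']
```

Raising `scrape_interval` is the main lever for trading metric freshness against CloudWatch API cost.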

Orange Pi Cluster With Docker Swarm and MariaDB

Building a cluster of single-board mini-computers is an excellent way to explore and learn about distributed computing. With the scarcity of Raspberry Pi boards, and the prices starting to get prohibitive for some projects, alternatives such as Orange Pi have gained popularity.

In this article, I’ll show you how to build a (surprisingly cheap) 4-node cluster packed with 16 cores and 4GB RAM to deploy a MariaDB replicated topology that includes three database servers and a database proxy, all running on a Docker Swarm cluster and automated with Ansible.
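To give a feel for the target topology, here is an illustrative Docker Swarm stack file. The service names, image tags, and ports are assumptions, and the MariaDB replication wiring (server IDs, replication users) is intentionally omitted since the article automates that with Ansible:

```yaml
# stack.yml -- illustrative sketch only; replication configuration omitted
version: "3.8"
services:
  db-primary:
    image: mariadb:10.11
    environment:
      MARIADB_ROOT_PASSWORD: changeme
    deploy:
      replicas: 1
  db-replica:
    image: mariadb:10.11
    environment:
      MARIADB_ROOT_PASSWORD: changeme
    deploy:
      replicas: 2                      # two replicas + one primary = three servers
  maxscale:
    image: mariadb/maxscale:latest     # a database proxy such as MariaDB MaxScale
    ports:
      - "4006:4006"                    # example listener port for the proxy
```

A stack like this would be deployed across the four Orange Pi nodes with `docker stack deploy -c stack.yml mariadb`.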

Dynamic Data Processing Using Serverless Java With Quarkus on AWS Lambda (Part 1)

With the growth of application modernization demands, monolithic applications have, over the past years, been refactored into cloud-native microservices and serverless functions with lighter, faster, and smaller application portfolios. This was not only about rewriting applications: the backend data stores were also redesigned for dynamic scalability, high performance, and flexibility in event-driven architectures. For example, traditional data structures in relational databases started to give way to a new approach that enables storing and retrieving key-value and document data structures using NoSQL databases.

However, faster modernization presents more challenges for Java developers in terms of steep learning curves for adopting new technologies while retaining their current skill sets and experience. For instance, Java developers may need to rewrite all existing Java applications in Golang or JavaScript for new serverless functions and learn new APIs or SDKs to process dynamic data records in the new modernized serverless applications.

How to Optimize CPU Performance Through Isolation and System Tuning

CPU isolation and efficient system management are critical for any application that requires low latency and high-performance computing. These measures are especially important for high-frequency trading systems, where split-second decisions on buying and selling stocks must be made. To achieve this level of performance, such systems require dedicated CPU cores that are free from interruption by other processes, together with wider system tuning.

In modern production environments, there are numerous hardware and software hooks that can be adjusted to improve latency and throughput. However, finding the optimal settings for a system can be challenging as it requires navigating a multidimensional search space. To accomplish this efficiently, it is necessary to understand the tuning landscape and to use tools and strategies that facilitate effective changes.
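One of those hooks is the process's CPU affinity mask. A minimal Python sketch of pinning a process to one core (Linux-only API; in a tuned low-latency deployment the chosen core would typically be one removed from the general scheduler with the `isolcpus=` kernel boot parameter):

```python
import os

# Inspect the cores this process is currently allowed to run on (Linux-only).
available = os.sched_getaffinity(0)

# Pin the process to a single core from that set. In a real low-latency
# setup this would be a core isolated via the isolcpus= kernel option.
target = min(available)
os.sched_setaffinity(0, {target})

print(os.sched_getaffinity(0))   # now restricted to the single target core
```

The same effect can be achieved externally with tools like `taskset`; the point is that the pinned process no longer migrates between cores or competes for them with unrelated work.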

What Are Events? API Calls

It is always interesting to note how even today's Cloud giants continue to draw a largely imaginary line between Synchronous and Asynchronous HTTP Requests, such as in the case of a typical REST API call. This line, we are asked to believe, very clearly separates Synchronous Requests – where the Request is made and the caller holds the line until receiving a Response from the relevant HTTP API Endpoint – from Asynchronous Requests – where the caller fires off their Request, (typically) gets a 202 Status Code in reply, and then waits for the actual API Response via some other channel; such as a Webhook, WebSocket, or using the 'HTTP Polling' Pattern described by Microsoft in its Asynchronous Request-Reply pattern Article.

We read in this same Article that: "Most APIs can respond quickly enough for responses to arrive back over the same connection… In some scenarios, however, the work done by [the] backend may be long-running, … [and] it isn't feasible to wait for the work to complete before responding to the Request. This situation is a potential problem for any synchronous request-reply pattern… Some architectures solve this problem using a message broker to separate the request and response stages… But this separation also brings additional complexity when the client requires success notification, as this step needs to become asynchronous."
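The asynchronous request-reply flow described in that quote can be simulated in-process. A hedged Python sketch (no real HTTP; the job store and the 202-style accepted response are stand-ins for a backend API):

```python
import threading
import time
import uuid

jobs = {}  # in-memory stand-in for the backend's job/status store

def submit(payload):
    """Accept the request and return immediately -- the analogue of a 202."""
    job_id = str(uuid.uuid4())
    jobs[job_id] = {"status": "pending", "result": None}

    def work():
        time.sleep(0.1)  # simulate the long-running backend work
        jobs[job_id] = {"status": "done", "result": payload.upper()}

    threading.Thread(target=work).start()
    return job_id  # the caller polls the status endpoint with this id

def poll(job_id):
    """The status endpoint the client polls until the work completes."""
    return jobs[job_id]

job = submit("hello")
while poll(job)["status"] != "done":   # the 'HTTP Polling' loop
    time.sleep(0.02)
print(poll(job)["result"])
```

The caller's connection is released immediately at `submit`, and the eventual result arrives through a separate channel, exactly the separation of request and response stages the pattern describes.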