Revolutionizing Real-Time Alerts With AI, NATS, and Streamlit

Imagine you have an AI-powered personal alerting chat assistant that interacts using up-to-date data. Whether it’s a big move in the stock market that affects your investments, a significant change in your shared SharePoint documents, or a discount on Amazon you were waiting for, the application is designed to keep you informed and alert you about any significant changes based on the criteria you set in advance using natural language.

In this post, we will learn how to build a full-stack, event-driven weather alert chat application in Python using some pretty cool tools: Streamlit, NATS, and OpenAI. The app collects real-time weather information, understands your alert criteria using AI, and delivers alerts to the user interface.
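To make the moving parts concrete, here is a minimal sketch of the event-consumption side, assuming a local NATS server and a hypothetical weather.alerts subject (both illustrative, not the article's exact setup):

```python
# Minimal NATS consumer sketch; the server URL and "weather.alerts" subject
# are assumptions for illustration only.
import asyncio
import json

import nats  # nats-py client


async def consume_weather_alerts():
    nc = await nats.connect("nats://localhost:4222")

    async def on_message(msg):
        event = json.loads(msg.data.decode())
        # In the full app, an LLM decides whether the event matches the
        # user's natural-language criteria before it reaches the Streamlit UI.
        print(f"Received weather event: {event}")

    await nc.subscribe("weather.alerts", cb=on_message)
    await asyncio.sleep(60)  # keep the subscription alive for a while
    await nc.drain()


if __name__ == "__main__":
    asyncio.run(consume_weather_alerts())
```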

How To Build a Google Meet AI Assistant App in 10 Minutes With Unbody and Appsmith

Effective communication and efficient meeting management are key to a team’s success in the modern workplace. Recognizing this, we will develop an AI-powered meeting assistant app that transforms Google Meet recordings into automatically generated meeting notes with key takeaways and action items. The blog post is tailored for every creator, from developers to no-coders, who is interested in the intersection of AI and productivity tools. It’s particularly useful for those with limited AI development experience who want to build AI applications using simple low-code tools like Unbody and Appsmith.

Introducing the AI-Powered Meetings Assistant App

Think about an app that connects to your Google Drive, where all your Google Meet video recordings are saved, automatically captures meeting audio transcriptions, and generates meeting notes with key points and action items in real time. You can fully engage in the conversation during the meeting without worrying about taking notes. If you are running late or can’t make the meeting, the app will still take notes for you. The app can make virtual meetings more productive: team leaders, project managers, developers, and anyone who regularly uses Google Meet can benefit from it.

Adapting API Strategies to the Dynamic AI Trend

In today's rapidly evolving technological landscape, Artificial Intelligence (AI) is getting a lot of attention. Everywhere you look on social media, there are new AI startups, prompt engineering tools, and Large Language Model (LLM) solutions. And it's not surprising, because AI feels almost like magic! For instance, ChatGPT really got everyone excited: it reached 100 million users within just two months of becoming publicly available, making it super popular, super fast.

Now, everyone's wondering: what does this AI wave mean for me, my work, and my products? More specifically, how does it impact those who are at the forefront of building digital products and applications using APIs? This article explores what the AI trend means for those of us building digital tools with APIs.

How To Use ChatGPT API in Python for Your Real-Time Data

OpenAI’s GPT has emerged as the foremost AI tool globally and is proficient at addressing queries based on its training data. However, it cannot answer questions about topics outside that training data:

  • Recent events after Sep 2021
  • Your non-public documents
  • Information from past conversations

This task gets even more complicated when you deal with real-time data that frequently changes. Moreover, you cannot feed extensive content to GPT, nor can it retain your data over extended periods. In this case, you need to build a custom LLM (Large Language Model) app that efficiently gives context to the answer process. This piece will walk you through the steps to develop such an application using the open-source LLM App library in Python. The source code is on GitHub (linked below in the section "Build a ChatGPT Python API for Sales").
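As a rough, illustrative sketch of what "giving context to the answer process" means (not the LLM App library's actual API), retrieved documents can be injected into the prompt before calling the model; retrieve_documents() below is a hypothetical placeholder for a real index lookup over your own data:

```python
# Illustrative retrieval-augmented prompt construction; retrieve_documents()
# is a hypothetical helper standing in for a real vector/index lookup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def retrieve_documents(question: str) -> list[str]:
    # Placeholder: in a real app this would query an index built from
    # your real-time or private data sources.
    return ["Q3 sales in the EMEA region grew 12% quarter over quarter."]


def answer_with_context(question: str) -> str:
    context = "\n".join(retrieve_documents(question))
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": f"Answer using only this context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```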

Real-Time Data Processing Pipeline With MongoDB, Kafka, Debezium, and RisingWave

Today, the demand for real-time data processing and analytics is higher than ever before. The modern data ecosystem requires tools and technologies that can not only capture, store, and process vast amounts of data but also deliver insights in real time. This article will cover the powerful combination of MongoDB, Kafka, Debezium, and RisingWave for analyzing real-time data: how they work together and the benefits of using these open-source technologies.

Understanding Debezium and RisingWave

Before we dive into the implementation details, it’s important to understand what these two tools are and what they do.

Monitor API Health Check With Prometheus

APISIX has a health check mechanism that proactively checks the health status of the upstream nodes in your system. APISIX also integrates with Prometheus through its plugin, which exposes health check metrics for upstream nodes (the multiple instances of a backend API service that APISIX manages) on the Prometheus metrics endpoint, typically at the URL path /apisix/prometheus/metrics.

In this article, we'll guide you on how to enable and monitor API health checks using APISIX and Prometheus.
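For a quick feel of what that endpoint exposes, here is a small Python sketch that fetches the metrics and keeps only the upstream-status series; the address, port, and metric name filter are assumptions about a default local setup rather than values from the article:

```python
# A quick, illustrative check of the APISIX Prometheus endpoint; the address
# and the "upstream_status" metric name are assumptions about a default setup.
import requests

METRICS_URL = "http://127.0.0.1:9091/apisix/prometheus/metrics"

resp = requests.get(METRICS_URL, timeout=5)
resp.raise_for_status()

for line in resp.text.splitlines():
    # Keep only the upstream health-related series, skipping comment lines.
    if not line.startswith("#") and "upstream_status" in line:
        print(line)
```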

LLMs (Large Language Models) For Better Developer Learning of Your Product

Developing a tech product is not just about coding and deployment; it’s also about the learning journey that goes into building and utilizing it. Especially if you have a developer-oriented product, it is about ensuring that developers understand your product at a deep level through the docs, tutorials, and how-to guides, improving both their own skills and the quality of the work they produce. Nowadays, AI can not only generate docs from code but also make it easy to find specific information or answer questions about your product through a chatbot, for a better developer experience. It is life-changing for project docs maintainers.

This article explores how LLMs (Large Language Models) and LLM apps can be leveraged for effective and efficient developer education, which can boost the utilization of your product.

Reliable Microservices Data Exchange With Streaming Database

Nowadays, a single product is usually built from multiple services, and client apps need to consume functionality from more than one of them. Microservices architecture has become a popular approach for building scalable and resilient applications. In a microservices-based system, multiple loosely coupled services work together to deliver the desired functionality. One of the key challenges in such systems is exchanging data between microservices reliably and efficiently. One pattern that can help address this challenge is the Outbox pattern.

In this article, we will explore how to implement the Outbox pattern with a streaming database, which can provide a reliable solution for data exchange between microservices.
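Here is a minimal sketch of the write side of the Outbox pattern: the business row and the outbox event are committed in the same transaction. The table names and connection string are illustrative; in the article's full setup, the outbox table is streamed onward (e.g. via CDC into a streaming database) rather than polled.

```python
# Outbox pattern write side: business change and outbox event commit together.
import json
import uuid

import psycopg2

conn = psycopg2.connect("dbname=shop user=postgres password=postgres host=localhost")


def create_order(customer_id: int, amount: float) -> None:
    with conn:  # commits both inserts atomically, or rolls both back
        with conn.cursor() as cur:
            cur.execute(
                "INSERT INTO orders (customer_id, amount) VALUES (%s, %s) RETURNING id",
                (customer_id, amount),
            )
            order_id = cur.fetchone()[0]
            cur.execute(
                "INSERT INTO outbox (id, aggregate_type, event_type, payload) "
                "VALUES (%s, %s, %s, %s)",
                (str(uuid.uuid4()), "order", "OrderCreated",
                 json.dumps({"order_id": order_id, "amount": amount})),
            )


create_order(customer_id=7, amount=42.50)
```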

API’s Role in Digital Government: 10 National Best Practices

As the digital revolution reshapes government operations worldwide, Application Programming Interfaces (APIs) have emerged as a critical tool in driving digital transformation. Through APIs, governments can ensure smoother interoperability between various systems, facilitate data sharing, and innovate public services. Here, we look at 10 best practices for using APIs in digital government based on national examples from around the globe.

“APIs are instrumental in public digital service provision for their connective nature,” according to the publication Application Programming Interfaces in Governments: Why, what, and How.

Visualize Real-Time Data With Python, Dash, and RisingWave

Real-time data is important for businesses to make quick decisions. Seeing this data visually can help make decisions even faster. We can create visual representations of data using various data apps or dashboards. Dash is an open-source Python library that provides a wide range of built-in components for creating interactive charts, graphs, tables, and other UI elements. RisingWave is a SQL-based streaming database for real-time data processing. This article will explain how to use Python, Dash, and RisingWave to make visualizations of real-time data.

How To Visualize Data in Real-Time

Real-time data is data that is generated and processed immediately, as it is collected from different data sources. Sources can be traditional databases such as Postgres or MySQL, or message brokers like Kafka. Real-time data visualization consists of a few steps: first we ingest the data, then we process it, and finally we show it in a dashboard.
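The sketch below shows the "show it in a dashboard" step: a bare-bones Dash app that re-queries the processed data every two seconds and redraws a chart. The metrics_mv view, its column names, and the connection settings (RisingWave's typical local defaults, which speak the Postgres wire protocol) are illustrative assumptions.

```python
# Minimal live-refreshing Dash chart; data source details are placeholders.
import pandas as pd
import plotly.express as px
import psycopg2
from dash import Dash, Input, Output, dcc, html

app = Dash(__name__)
app.layout = html.Div([
    dcc.Graph(id="live-chart"),
    dcc.Interval(id="tick", interval=2_000, n_intervals=0),  # refresh every 2 seconds
])


@app.callback(Output("live-chart", "figure"), Input("tick", "n_intervals"))
def refresh_chart(_):
    # Re-query the already-processed data on every tick and redraw the chart.
    conn = psycopg2.connect("host=localhost port=4566 dbname=dev user=root")
    df = pd.read_sql("SELECT window_start, avg_value FROM metrics_mv", conn)
    conn.close()
    return px.line(df, x="window_start", y="avg_value")


if __name__ == "__main__":
    app.run(debug=True)
```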

Managing AI-Powered Java App With API Management

In this article, we will explore how to integrate OpenAI's ChatGPT APIs with a Spring Boot application and manage the APIs using Apache APISIX, an open-source API gateway. This integration will allow us to leverage the power of ChatGPT in our Spring Boot application, while APISIX will provide a robust, scalable, and secure way to manage the APIs.

OpenAI ChatGPT APIs

OpenAI's ChatGPT API is a powerful tool that we can use to integrate the capabilities of the ChatGPT model into our own applications, or services. The API allows us to send a series of messages and receive an AI model-generated message in response via REST. It offers a bunch of APIs to create text responses in a chatbot, code completion, generate images, or answer questions in a conversational interface. In this tutorial, we will use chat completion API to generate responses to a prompt (basically we can ask anything). Before starting with the tutorial, you can explore the API to have an understanding of how to authenticate to the API using API keys, how API request parameters and response look like.

Query Real-Time Data With GraphQL and Streaming Database

In modern application development, efficiently querying and retrieving real-time data is crucial to building robust and performant systems. Materialized views can improve query performance, and when they are combined with GraphQL and a streaming database, we can define queries that leverage these materialized views for data that constantly changes.

For example, social media platforms like Twitter produce a massive volume of data every second. This data is valuable for analyzing trends and user behavior. In this article, we will explore how integrating GraphQL, materialized views, and streaming databases such as RisingWave can enable us to efficiently query tweets and discover the hottest hashtags in real time.
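As an illustrative sketch of the materialized-view side (the tweets source, column names, and connection settings are hypothetical): RisingWave speaks the Postgres wire protocol, so we can create a view that continuously counts hashtags, and a GraphQL resolver for a hotHashtags query would then simply read from it.

```python
# Create and query a continuously maintained hashtag count in RisingWave;
# table/column names are hypothetical examples.
import psycopg2

conn = psycopg2.connect("host=localhost port=4566 dbname=dev user=root")
conn.autocommit = True

with conn.cursor() as cur:
    # One-time setup: the view is kept up to date as new tweets stream in.
    cur.execute("""
        CREATE MATERIALIZED VIEW hot_hashtags AS
        SELECT hashtag, COUNT(*) AS mentions
        FROM tweets
        GROUP BY hashtag;
    """)
    # A GraphQL resolver for "hotHashtags" could run this read on each request:
    cur.execute(
        "SELECT hashtag, mentions FROM hot_hashtags ORDER BY mentions DESC LIMIT 10"
    )
    for hashtag, mentions in cur.fetchall():
        print(hashtag, mentions)
```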

Chaining API Requests With API Gateway

As the number of APIs that need to be integrated increases, managing the complexity of API interactions becomes increasingly challenging. By using an API gateway, we can create a sequence of API calls, which breaks down API workflows into smaller, more manageable steps. For example, on an online shopping website, when a customer searches for a product, the platform can send a request to the product search API and then send a request to the product details API to retrieve more information about the products. In this article, we will create a custom plugin for the Apache APISIX API Gateway to handle client requests that should be called in sequence.

What Is a Chaining API Request, and Why Do We Need It?

Chaining API requests (also called pipeline requests or sequential API calls) is a technique used in software development to manage the complexity of API interactions when software requires multiple API calls to complete a task. It is similar to batch request processing, where you group multiple API requests into a single request and send them to the server as a batch. While they may seem similar, a pipeline request involves sending a single request to the server that triggers a sequence of API requests executed in a defined order. Each API request in the sequence can modify the request and response data, and the response from one API request is passed as input to the next one in the sequence. Pipeline requests are useful when a client needs to run a sequence of dependent API requests in a specific order.
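To show the dependency being chained, here is a plain-Python illustration outside any gateway: the search response feeds the details request. The endpoints are hypothetical; in the article, a custom APISIX plugin performs this sequence server-side from a single client request.

```python
# Two dependent calls chained in order: search result -> details lookup.
import requests

BASE = "https://shop.example.com/api"  # hypothetical backend

search = requests.get(f"{BASE}/products/search", params={"q": "headphones"}, timeout=10)
search.raise_for_status()
first_product_id = search.json()["results"][0]["id"]

details = requests.get(f"{BASE}/products/{first_product_id}", timeout=10)
details.raise_for_status()
print(details.json())
```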

RBAC With API Gateway and Open Policy Agent (OPA)

With various access control models and implementation methods available, constructing an authorization system for backend service APIs can still be challenging. However, the ultimate goal is to ensure that the correct individual has appropriate access to the relevant resource. In this article, we will discuss how to enable the role-based access control (RBAC) authorization model for your API with the open-source API gateway Apache APISIX and Open Policy Agent (OPA).

What Is RBAC?

Role-based access control (RBAC) and attribute-based access control (ABAC) are two commonly used models for managing permissions and controlling access to resources in computer systems. RBAC assigns permissions to users based on their role within an organization. In RBAC, roles are defined based on the functions or responsibilities of users, and permissions are assigned to those roles. Users are then assigned to one or more roles, and they inherit the permissions associated with those roles. In the API context, for example, a developer role might have permission to create and update API resources, while an end-user role might only have permission to read or execute API resources.
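A toy sketch of that developer/end-user example makes the inheritance chain explicit: permissions attach to roles, and users inherit them through role assignment. The role and permission names are illustrative; in the article, this decision is delegated to OPA policies rather than application code.

```python
# Minimal RBAC illustration: user -> roles -> permissions.
ROLE_PERMISSIONS = {
    "developer": {"api:create", "api:update", "api:read"},
    "end_user": {"api:read", "api:execute"},
}

USER_ROLES = {
    "alice": ["developer"],
    "bob": ["end_user"],
}


def is_allowed(user: str, permission: str) -> bool:
    # A user is allowed if any of their roles grants the permission.
    return any(permission in ROLE_PERMISSIONS[role] for role in USER_ROLES.get(user, []))


print(is_allowed("alice", "api:create"))  # True
print(is_allowed("bob", "api:create"))    # False
```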

Batch Request Processing With API Gateway

Batch request processing is a powerful technique used in web development to improve the performance of APIs. It allows developers to group multiple API requests into a single HTTP request/response cycle. In other words, a single API request from a client can be turned into multiple API requests to a set of backend servers, and the responses are aggregated into a single response to the client. It can significantly reduce the number of round trips between the client and server.

In this article, we'll explore how to implement batch request processing in Apache APISIX and look at some use cases where it can be beneficial.
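As a rough sketch of the client side of batch processing, several logical API calls are packed into one HTTP request and one aggregated response comes back. The endpoint path and payload shape below are hypothetical placeholders, not APISIX's documented format; consult the gateway's batch plugin docs for the exact contract.

```python
# Hypothetical batch request: one HTTP round trip carrying three sub-requests.
import requests

batch_payload = {
    "pipeline": [
        {"method": "GET", "path": "/api/orders/42"},
        {"method": "GET", "path": "/api/customers/7"},
        {"method": "GET", "path": "/api/inventory/sku-123"},
    ]
}

resp = requests.post("http://127.0.0.1:9080/batch", json=batch_payload, timeout=10)
resp.raise_for_status()

# The aggregated response contains one entry per sub-request.
for sub_response in resp.json():
    print(sub_response.get("status"), sub_response.get("body"))
```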

API Gateway For ChatGPT Plugins

OpenAI has recently launched a new version of ChatGPT that allows plugins inside ChatGPT. These plugins can be added directly to the chatbot, providing it with access to a wide range of knowledge and information from third-party partners through their APIs. ChatGPT plugins extend its functionality and enhance its capabilities to access up-to-date information, such as researching travel costs or finding discounts, or to help you book flights and order food. You can also build your own plugin that allows ChatGPT to query your API intelligently.

Yes, that’s right! To make your data accessible through a ChatGPT custom plugin, ChatGPT requires you to build a new API, or use an existing one, that it can query and receive responses from. It then generates a user-friendly answer by combining the API data with its natural language capabilities. In this case, an API gateway can help improve security, usability, and efficiency. This post explores how an API gateway can benefit ChatGPT plugin developers who need to expose, secure, manage, and monitor their API endpoints.

How To Choose the Right Streaming Database

In today's world of real-time data processing and analytics, streaming databases have become an essential tool for businesses that want to stay ahead of the game. These databases are specifically designed to handle data that is generated continuously and at high volumes, making them perfect for use cases such as the Internet of Things (IoT), financial trading, and social media analytics. However, with so many options available in the market, choosing the right streaming database can be a daunting task.

This post helps you understand what SQL streaming and streaming databases are, when and why to use them, and discusses some key factors that you should consider when choosing the right streaming database for your business.

Infrastructure as Code (IaC) for Java-Based Apps on Azure

The Evolution of Java

Over the past several years, the Java ecosystem has evolved from monolithic Java EE applications running on application servers and the Spring Framework to modern, smaller-footprint Spring Boot, MicroProfile, and Jakarta EE microservices. Today, more Java developers are looking at how they can bring their existing Java applications to the cloud, or how to build new cloud-native applications. Many organizations have invested significantly in migrating mission-critical Java applications running on-premises to fully supported environments in the cloud.

In this article, let's take a closer look at Java at Microsoft and Azure to understand what Microsoft can offer to modernize existing Java-based apps or build new ones with the well-known practice of Infrastructure as Code (IaC).