Journey of AI to Generative AI and How It Works

In the last few years, cutting-edge technologies and services have drastically changed in direction, dynamics, and use cases. It is quite evident that the recent wave of global technology adoption across industries is dominated by Artificial Intelligence (AI) and its various flavors. AI is becoming increasingly woven into the fabric of our everyday lives, changing the way we live and work. This article discusses the basics of AI/ML, their usage, the evolution of Generative AI, Prompt Engineering, and LangChain.

What Are AI and ML?

AI is the capability of simulating human intelligence and thought processes such as learning and problem-solving. It can perform complex tasks that historically could only be done by humans. Through AI, a non-human system uses mathematical and logical approaches to simulate the reasoning that people use for learning new information and making decisions.

Top Trends in AI-Based Application Testing You Need To Know

Engineering managers understand better than most the relentless pace at which the world of AI is evolving. You're likely tasked with integrating this technology into your offerings and making sure it all functions seamlessly to advance your business.

Thankfully, with these AI advancements, new approaches to testing, automation, and quality assurance (QA) are also emerging, opening new doors to AI application testing.

Temporal Paradoxes: Multitasking at Its Finest

When computing started, it was relatively easy to reason about things as a single series of computations. It didn't take long, though, before we introduced the ability to compute more than one thing at a time. These days, we take for granted computers' ability to multitask. We know that it's because we have multiple cores, CPUs, and servers. Yet somehow, "single-threaded" things like JavaScript and Python are also able to "do more than one thing at a time."

How? There are two different concepts at play here, often at the same time, often confused, yet entirely distinct: Parallelism and Concurrency.
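The distinction can be made concrete with a short sketch (the function names here are illustrative, not from any of the articles): concurrency means a single thread interleaves tasks while each one waits, whereas parallelism means tasks run simultaneously on separate cores.

```python
import asyncio
import time

# Concurrency: one thread, two tasks interleaved by the event loop.
async def fetch(name, delay):
    await asyncio.sleep(delay)  # simulated I/O wait; other tasks run meanwhile
    return f"{name} done"

async def main():
    # Both "requests" overlap, so the total time is ~0.2s, not 0.4s.
    return await asyncio.gather(fetch("a", 0.2), fetch("b", 0.2))

start = time.perf_counter()
results = asyncio.run(main())
elapsed = time.perf_counter() - start
print(results)   # ['a done', 'b done']
print(elapsed)   # roughly 0.2, because the waits overlapped on one thread
```

Swap `asyncio` for `multiprocessing` and the same two tasks would run in parallel on separate cores; the point is that single-threaded JavaScript and Python get their "doing more than one thing at a time" from concurrency, not parallelism.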

Introduction to ESP32 for Beginners Using the Xedge32 Lua IDE

What Is the ESP32?

The ESP32 is an incredible microcontroller developed by Espressif Systems. Building on the legacy of its predecessor, the ESP8266, the ESP32 boasts dual-core processing capabilities, integrated Wi-Fi, and Bluetooth functionality. Its rich features and cost-effectiveness make it a go-to choice for creating Internet of Things (IoT) projects, home automation devices, wearables, and more.

What Is Xedge32?

Xedge32, built upon the Barracuda App Server C Code Library, offers a comprehensive range of IoT protocols known as the "north bridge." Xedge32 extends the Barracuda App Server's Lua APIs and interfaces seamlessly with the ESP32's GPIOs, termed the "south bridge." 

Coding Once, Thriving Everywhere: A Deep Dive Into .NET MAUI’s Cross-Platform Magic

Developed by Microsoft, .NET MAUI (Multi-platform App UI) is an open-source framework for building native mobile and desktop applications for multiple platforms, including Android, iOS, macOS, and Windows, from a single codebase. This is unlike Xamarin.Forms, where developers have to maintain a separate codebase for each targeted platform.

An Overview of .NET Framework

If you are aware of what .NET framework is and how it works, then you can skip this section and jump to “How It Works.”

Time-Travel Debugging Production Code

Normally, when we use debuggers, we set a breakpoint on a line of code, we run our code, execution pauses on our breakpoint, we look at values of variables, and maybe the call stack, and then we manually step forward through our code's execution. In time-travel debugging, also known as reverse debugging, we can step backward as well as forward. This is powerful because debugging is an exercise in figuring out what happened: traditional debuggers are good at telling you what your program is doing right now, whereas time-travel debuggers let you see what happened. You can wind back to any line of code that is executed and see the full program state at any point in your program’s history.

History and Current State

It all started with Smalltalk-76, developed in 1976 at Xerox PARC, which had the ability to retrospectively inspect checkpointed places in execution. Around 1980, MIT added a "retrograde motion" command to its DDT debugger, which gave a limited ability to move backward through execution. In a 1995 paper, MIT researchers presented ZStep 95, the first true reverse debugger, which recorded all operations as they were performed and supported stepping backward, reverting the system to the previous state. However, it was a research tool and not widely adopted outside academia.

How to Enable Duet AI in your Google Workspace

Whether you are looking to write emails in Gmail, create tables with custom data in Google Sheets, or design a presentation in Google Slides, Duet AI for Google Workspace can do the work for you in a few easy steps.

Duet AI - Google Workspace

How to Enable Duet AI

Duet AI is now available for Google Workspace, but you need to take the following steps to start using the AI capabilities of Duet AI in Gmail and other Google apps.

1. Purchase the Duet AI add-on

Open admin.google.com and sign in to your Google Workspace account as an administrator. Inside the dashboard, navigate to Billing > Get more services > Google Workspace add-ons.

Here, look for the Duet AI for Google Workspace Enterprise card and click the Start Free Trial link to subscribe to the Duet AI service. You can use the Duet AI add-on without payment for a period of 14 days.

Duet AI in Google Workspace Admin

2. Assign Duet AI licenses

Once you’ve successfully activated Duet AI, it’s time to share its benefits with your team. Go to Directory > Users, select one or more users, and click Assign Licenses. Select Duet AI for Google Workspace from the list of available subscriptions and click Assign.

Assign Licenses

Please note that Duet AI is not compatible with Google Workspace Business Starter edition. Additionally, it’s important to ensure that your Workspace users have English set as their preferred language in their Google account settings to access Duet AI.

You may visit the Workspace help center to learn more about Duet AI.

Deploy Kubernetes Resources in a Controlled and Orderly Manner

When deploying Kubernetes resources in a cluster, it is sometimes necessary to deploy them in a specific order. For example, a Custom Resource Definition (CRD) must exist before any custom resources of that type can be created.

Sveltos can help you solve this problem by allowing you to specify the order in which Kubernetes resources are deployed.
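The CRD example is the classic case. Applying both manifests below in a single batch can fail if the custom resource reaches the API server before its CRD is registered (the `Widget` type and names here are purely illustrative):

```yaml
# 1. The CRD must be registered first...
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  names:
    kind: Widget
    plural: widgets
    singular: widget
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
---
# 2. ...before any instance of it can be created.
apiVersion: example.com/v1
kind: Widget
metadata:
  name: my-widget
```

A tool that enforces deployment order removes the need for retry loops or manual two-phase applies in cases like this.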

The Ultimate Guide To Building Front-End Web Applications From Scratch

Today’s competitive environment has made it vital for businesses to optimize their user experiences and improve customer service to capture market segments. One of the best ways to capture attention in the market is with web applications that you can customize to communicate directly with consumers from their browsers, regardless of their device. Examples of web applications include social networking sites, educational products, online stores, photo and video services, text editors, games, and reservation systems.

Because web applications are more complex than standard informational websites, the user interacts with the organization more actively than they do on such sites. Moreover, web applications are becoming increasingly capable of replacing desktop software and sometimes even surpassing it. No doubt, the capabilities of web applications have already advanced vastly in the past few years.

Simplifying Your Kubernetes Infrastructure With CDK8s

Of late, I have started to pen down summaries of my talks for those not interested in sitting through a 45-minute video or staring at slides without getting much context. Here is one for AWS DevDay Hyderabad 2023 where I spoke about "Simplifying Your Kubernetes Infrastructure With CDK8s."

CDK for Kubernetes, or CDK8s, is an open-source CNCF project that helps represent Kubernetes resources and applications as code (not YAML!).
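To see what "resources as code" buys you, here is a minimal sketch of the idea using only the Python standard library; cdk8s itself provides typed constructs and a `synth` step that emits manifests, but the principle is the same: describe the resource in a real programming language, then generate the manifest.

```python
import json

# Toy "resources as code": a function, with defaults and reuse,
# instead of hand-maintained YAML.
def deployment(name, image, replicas=1):
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

manifest = deployment("web", "nginx:1.25", replicas=3)
print(json.dumps(manifest, indent=2))
```

With code you get loops, functions, type checks, and unit tests over your infrastructure definitions, which is exactly the gap CDK8s fills for Kubernetes.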

Creating Conversational Intelligence: Machine Learning’s Impact on Personalized Automated Texting

In the evolving digital landscape, where customer interactions are increasingly digital-first, automated texting has emerged as a pivotal channel for businesses to engage with their customers. The challenge, however, lies in delivering personalized experiences at scale. Enter conversational intelligence—a realm where machine learning (ML) plays a transformative role. This article delves into how ML shapes conversational intelligence, enabling automated texting to go beyond scripted responses and understand context, sentiment, and user intent more effectively.

Understanding Conversational Intelligence at Scale

In the realm of automated texting, understanding context, intent recognition, and sentiment analysis are paramount. Imagine a scenario where a user asks, "What's the weather like today?" While a simple query, it requires the chatbot to understand the user's intent—to obtain weather information—while also considering the context, such as the user's location. Additionally, gauging the sentiment is crucial; a user expressing frustration about a delayed delivery needs a different response than one inquiring about product availability.
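The three signals can be illustrated with a deliberately tiny, rule-based sketch. Production systems use trained ML models (e.g., transformer-based intent and sentiment classifiers), not keyword lists; this toy only shows what "intent," "context," and "sentiment" mean as outputs.

```python
# Toy analyzer: extracts intent, sentiment, and context from a message.
# Keyword rules stand in for what would be ML classifiers in practice.
def analyze(message, user_location):
    text = message.lower()
    intent = "weather_query" if "weather" in text else "other"
    sentiment = (
        "negative"
        if any(w in text for w in ("delayed", "frustrated", "angry"))
        else "neutral"
    )
    # Context: fill in what the user did not say explicitly.
    context = {"location": user_location} if intent == "weather_query" else {}
    return {"intent": intent, "sentiment": sentiment, "context": context}

print(analyze("What's the weather like today?", "Seattle"))
print(analyze("My delivery is delayed again!", "Seattle"))
```

The second message gets a different sentiment, which is precisely why the bot's response should differ from a neutral product-availability query.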

A Comprehensive Comparison of AWS Step Functions and AWS MWAA

Workflow automation tools play a pivotal role in today's dynamic digital landscape. They help streamline routine tasks, eradicate human errors, and increase productivity. With the help of workflow automation, organizations can automate processes, allowing teams to focus on strategic tasks. Whether it's data processing, application integration, or system monitoring, these tools provide scalable solutions to meet diverse needs.

Amazon Web Services (AWS) offers a plethora of services geared towards automating process workflows. AWS Step Functions and AWS Managed Workflow for Apache Airflow (MWAA) are two such prominent services. Step Functions is a serverless workflow service that allows developers to coordinate multiple AWS services into serverless workflows. On the other hand, MWAA is a managed orchestration service for Apache Airflow, which is an open-source platform used to programmatically author, schedule, and monitor workflows. These robust tools have revolutionized businesses across sectors by simplifying complex processes and enhancing operational efficiency. In this article, I will delve into a comprehensive comparison between these two powerful tools, exploring their features, cost implications, ease of use, integration capabilities, and more.
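For a flavor of the Step Functions side, a workflow is declared in Amazon States Language as a JSON state machine; the Lambda ARN below is a placeholder, and real workflows would add retry and error-handling fields:

```json
{
  "Comment": "A minimal two-step workflow",
  "StartAt": "ProcessData",
  "States": {
    "ProcessData": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:process-data",
      "Next": "Notify"
    },
    "Notify": {
      "Type": "Pass",
      "End": true
    }
  }
}
```

MWAA workflows, by contrast, are authored as Python DAGs in Apache Airflow, which is one of the core trade-offs the comparison explores.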

Send Your Logs to Loki

One of my current talks focuses on Observability in general and Distributed Tracing in particular, with an OpenTelemetry implementation. In the demo, I show how you can see the traces of a simple distributed system consisting of the Apache APISIX API Gateway, a Kotlin app with Spring Boot, a Python app with Flask, and a Rust app with Axum.

Earlier this year, I spoke and attended the Observability room at FOSDEM. One of the talks demoed the Grafana stack: Mimir for metrics, Tempo for traces, and Loki for logs. I was pleasantly surprised how one could move from one to the other. Thus, I wanted to achieve the same in my demo but via OpenTelemetry to avoid coupling to the Grafana stack.
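One way to wire this up is to have applications send logs over OTLP to an OpenTelemetry Collector, which then forwards them to Loki. The fragment below is a sketch assuming the Collector contrib distribution's Loki exporter; the endpoint hostname is an example:

```yaml
receivers:
  otlp:
    protocols:
      grpc:

exporters:
  loki:
    endpoint: http://loki:3100/loki/api/v1/push

service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [loki]
```

Because the apps speak OTLP rather than a Loki-specific protocol, swapping the backend later only means changing the exporter, which is the decoupling the demo is after.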

Introducing Klone: Open-Source Remote Debugging Tool for Kubernetes Services

At ZeroK, every time an issue occurred in our staging environment, we would manually reproduce the issue in the local development environment to debug it. For this, we'd manually set up mocks to emulate the behavior of dependencies or update the local DB. Additionally, keeping these mocks up to date was a pain, especially as the dependent services were being continuously updated with rapid development cycles.

Sometimes, the error would be caused specifically by the behavior of a dependent service, and the reproduction would be harder. In these instances, we'd dig through logs to find the specific response or coordinate with the owner of the dependent service to understand the reason, which would further delay debugging.

Prompt Engineering Is Not a Thing

The rise of large language models like OpenAI's GPT series has brought forth a whole new level of capability in natural language processing. As people experiment with these models, they realize that the quality of the prompt can make a big difference to the results, and some people call this “prompt engineering.” To be clear: there is no such thing. At best, it is “prompt trial and error.”

Prompt “engineering” assumes that by tweaking and perfecting input prompts, we can predict and control the outputs of these models with precision.