Building a RESTful Minimal API With .NET Core 7

.NET and ASP.NET Core are popular frameworks for creating powerful RESTful APIs. In this tutorial, we will use them to develop a simple Minimal API that simulates a credit score rating. Minimal APIs provide a streamlined approach to creating high-performing HTTP APIs with ASP.NET Core. They allow you to easily construct complete REST endpoints with minimal setup and code. Instead of relying on conventional scaffolding and controllers, you can fluently define API routes and actions to simplify the development process.

We will create an endpoint allowing a user to retrieve a credit score rating by sending a request to the API. We can also save and retrieve credit scores using POST and GET methods. However, it is essential to note that we will not be linking up to any existing backend systems to pull a credit score; instead, we will use a random number generator to generate the score and return it to the user. Although this API is relatively simple, it will demonstrate the basics of REST API development using .NET Core and ASP.NET. This tutorial will provide a hands-on introduction to building RESTful APIs with .NET Core 7 and the Minimal API approach.
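The tutorial itself builds this in C# with .NET Minimal APIs, but the simulated endpoint logic is simple enough to sketch in language-agnostic terms. Below is a minimal Python sketch of what the POST (generate and save a random score) and GET (retrieve a saved score) handlers will do; the `create_score`/`get_score` names and the 300-850 range are illustrative assumptions, not part of the tutorial.

```python
import random

# In-memory store standing in for a backend system; as in the tutorial,
# a random number is used in place of a real credit bureau lookup.
_scores = {}

def create_score(user_id):
    """POST handler logic: generate, save, and return a simulated score."""
    score = random.randint(300, 850)  # illustrative range assumption
    _scores[user_id] = score
    return score

def get_score(user_id):
    """GET handler logic: return a previously saved score, or None."""
    return _scores.get(user_id)
```

In the .NET version, these two functions become the lambdas you pass to the route definitions for the POST and GET endpoints.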

Safeguarding the Digital Realm: Ensuring Virtual Machine Security

In today’s technology-driven world, virtual machines (VMs) have become an integral part of computing environments. They enable efficient resource utilization, flexibility, and scalability, making them a preferred choice for businesses of all sizes. However, with increased reliance on VMs, the importance of virtual machine security has also escalated.

As organizations increasingly adopt virtualization technology, securing these environments becomes critical. VMs play a vital role in data centers, providing flexible and scalable solutions for hosting multiple operating systems and applications on a single physical server. That same ubiquity, however, has made them attractive targets for cybercriminals.

Jaeger and ScyllaDB Integration: High Performance at Scale

Jaeger has gained significant popularity in the software development community due to its powerful capabilities and ease of integration with various programming languages and frameworks. With the rise of microservices and cloud-native applications, Jaeger has become a crucial tool for developers and system administrators to gain insights into the performance and behavior of their applications.

How do you make Jaeger even more effective for monitoring and troubleshooting distributed applications, especially in high-traffic, demanding environments where a high-performance storage solution is critical? Use the best-performing Jaeger storage backend that you can find.

Generate Music Using Meta’s MusicGen On Colab

In the vast realm of artificial intelligence, deep learning has revolutionized numerous domains, including natural language processing, computer vision, and speech recognition. However, one fascinating area that has captivated researchers and music enthusiasts alike is the generation of music using artificial intelligence algorithms. MusicGen is a state-of-the-art controllable text-to-music model that seamlessly translates textual prompts into captivating musical compositions.

What Is MusicGen?

MusicGen is a remarkable model designed for music generation that offers simplicity and controllability. Unlike existing methods such as MusicLM, MusicGen stands out by eliminating the need for a self-supervised semantic representation. The model employs a single-stage auto-regressive Transformer architecture and is trained using a 32kHz EnCodec tokenizer. Notably, MusicGen generates all four codebooks in a single pass, setting it apart from conventional approaches. By introducing a slight delay between the codebooks, the model demonstrates the ability to predict them in parallel, resulting in a mere 50 auto-regressive steps per second of audio. This innovative approach optimizes the efficiency and speed of the music generation process.
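To make the delay idea concrete, here is a small Python illustration of the pattern described above (a conceptual sketch, not MusicGen's actual code): with a one-step delay between the four codebooks, a single autoregressive step emits one token per codebook in parallel, each token belonging to a slightly different audio frame.

```python
# With a one-step delay between the four codebooks, autoregressive step t
# emits codebook k's token for frame t - k, so all codebooks are predicted
# in parallel after a short warmup instead of in four sequential passes.
NUM_CODEBOOKS = 4

def delayed_schedule(num_frames):
    """Return, per autoregressive step, the (codebook, frame) pairs emitted."""
    steps = []
    for t in range(num_frames + NUM_CODEBOOKS - 1):  # warmup/drain adds 3 steps
        emitted = [(k, t - k) for k in range(NUM_CODEBOOKS) if 0 <= t - k < num_frames]
        steps.append(emitted)
    return steps
```

This is why generating one second of audio takes roughly one autoregressive step per audio frame rather than one step per codebook token.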

Automation Testing Roundtable: Industry Insights From QA Leaders

Automation testing is one of those technical fields where you never stop learning. Whether you’ve been doing it for a few months, a couple of years, or over a decade, there is always something new and exciting to find out. This is why this roundtable is a must-read for anyone working with automation testing or even considering it.

To get a comprehensive picture of the state of the automation industry and what it takes to be good at automation testing, we’ve talked to some of the finest QA minds. Here are the industry leaders we interviewed for this article:

Best Practices for Microservices: Building Scalable and Efficient Systems

Microservices architecture has revolutionized modern software development, offering unparalleled agility, scalability, and maintainability. However, effectively implementing microservices necessitates a deep understanding of best practices to harness their full potential while avoiding common pitfalls. In this comprehensive guide, we will delve into the key best practices for microservices, providing detailed insights into each aspect.

1. Defining the "Micro" in Microservices

Single Responsibility Principle (SRP)

Best Practice: Microservices should adhere to the Single Responsibility Principle (SRP), having a well-defined scope of responsibility that encapsulates all tasks relevant to a specific business domain.
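As a hypothetical illustration of that scope, an order service would own only order-related state and behavior, delegating payments or shipping to separate services; the class and method names below are invented for the example.

```python
# A service scoped to a single business domain: orders. It creates and
# looks up orders, and nothing else; payments and shipping would live
# behind other services' APIs, keeping this service's responsibility narrow.
class OrderService:
    def __init__(self):
        self._orders = {}
        self._next_id = 1

    def place_order(self, customer_id, items):
        """Create an order and return it; no payment or shipping logic here."""
        order = {"id": self._next_id, "customer": customer_id,
                 "items": items, "status": "placed"}
        self._orders[self._next_id] = order
        self._next_id += 1
        return order

    def get_order(self, order_id):
        """Look up an order by id, or return None."""
        return self._orders.get(order_id)
```

The test of SRP is whether a change in another domain, say a new payment provider, forces a change here; with this boundary, it does not.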

How to Use Contact Form to Grow Your Email List in WordPress

Want to learn how to use contact forms to grow your email list in WordPress?

Connecting contact forms to your WordPress website lets you capture valuable leads and expand your email subscriber list.

In this article, we will show you how to use contact forms to grow your email list in WordPress so you can get more customers.


Why Use Contact Forms in WordPress to Grow Your Email List

If you’re not using your WordPress site to build your email list, you’re leaving money on the table. Email is a great way to promote your products or services, build a loyal customer base, and even increase customer lifetime value.

Unlike other marketing channels (paid or organic social media), you own and control both the content and your list of subscribers. On social media platforms, by contrast, if anything happens to your account or even the platform itself, you’ll lose everything.

With email marketing, you can customize your email newsletter any way you want without being subjected to the unforgiving social media algorithm changes.

But why use contact forms to build your email list?

Contact forms on your WordPress site provide a secure and easy way for visitors to ask questions, book your services, or send in feedback.

They also add legitimacy to your website, since many people see contact forms as a trust factor. The idea that visitors can reach you directly makes your business more trustworthy.

Besides that, you can capture their initial interest by allowing them to join your email newsletter. This allows you to stay in touch with them via email, which can lead to future revenue as you send targeted offers and promotions to an engaged audience.

All you have to do is add a signup box at the bottom of the contact form, which offers a subtle way to enhance your lead generation strategy and grow your list.

How to Create a Contact Form with WPForms in WordPress and Collect Email Subscribers

Adding a contact form in WordPress is easy and doesn’t require any coding! Follow these steps, and your form will be ready in no time.

1. Pick the Best Contact Form Plugin

The first thing you’ll need is to install a contact form plugin for WordPress. With so many contact form plugins on the market, it can be hard to choose the right one.

We recommend WPForms because it’s the most beginner-friendly and feature-rich contact form plugin available. With its easy drag-and-drop interface, you can have your form live in minutes.

WPForms homepage

First, you will want to install and activate the free WPForms Lite plugin. For more details, you can see our step-by-step guide on how to install a WordPress plugin.

You can use this WPForms Coupon to get 50% off on any WPForms plan. The paid plan gives you advanced features such as fancy fields, conditional logic, user journeys, multi-page forms, and the ability to install other addons.

2. Create a New Contact Form

Once WPForms is activated, go to WPForms ≫ Add New in your WordPress dashboard.

Add new form in WPForms

You’ll be taken to the WPForms drag-and-drop form builder. In the ‘Setup’ tab, you’ll select the template you want to use for your contact form.

With hundreds of templates available, you can choose a form for just about any occasion.

Find the Simple Contact Form template and click on ‘Use Template.’

WPforms contact form templates

3. Add Email Signup Checkbox to Your Contact Forms

Once you have created your form, the next step is to add an email subscription box to the same form.

Under the Fields column, drag the ‘Checkboxes’ box to where you want to insert the signup option in the contact form.

You will notice that there are three checkboxes. Click on the field to open its settings.

Checkboxes in WPForms

In the ‘Field Options’ tab, you will need to delete two checkboxes, since we only need one checkbox for the email signup.

Simply click on the minus icons from the ‘Second Choice’ and ‘Third Choice’ checkboxes to remove them.

Removing checkboxes in WPForms

Then, just change the field label to something that aligns with your intent, such as ‘Sign up for our email list.’

Under the ‘Choices’ label, the remaining checkbox text should let visitors confirm and give consent to submitting their contact information, for example, ‘Yes please!’

Signup email box in WPForms

4. Connect Your Email List to WPForms

Next, you’ll need to connect your email marketing service. WPForms has many integration addons for the top email marketing platforms, including Constant Contact, Drip, Mailchimp, and more.

Note: WPForms Lite supports Constant Contact automatically, meaning you can get started growing your list for free!

That being said, if you want to connect to other email marketing services, you’ll need to upgrade to the WPForms Pro version.

Go to the ‘Marketing’ column in the form builder and find your email service provider. Then, simply click on ‘Add New Connection.’

Connecting to Constant Contact in WPForms

You’ll be asked to name this connection.

Give it an appropriate name so that you can keep track of it, and then click ‘OK.’

Constant contact connection

From here, you can connect your Constant Contact account to WPForms.

On the page displayed, you’ll need to register WPForms with Constant Contact by clicking on ‘Click here to register with Constant Contact.’

Register Constant Contact in WPForms

After clicking the link, a window will open, and you’ll need to log into your Constant Contact account.

When you’re logged in, click the orange ‘Allow’ button to give WPForms access.

Allow access to Constant Contact from WPForms

Next, you’ll be given a Constant Contact authorization code.

Copy the code so you can enter it into WPForms.

Copy authorization code for Constant Contact

Paste this code into the ‘Constant Contact Authorization Code’ field back in WPForms.

This will allow WPForms to fetch your email service account and pull in data from Constant Contact.

Paste Constant Contact authorization code

From there, you’ll want to enter an account nickname below. It’s just for internal reference and won’t be visible to your site’s visitors.

Once you’ve filled in both fields, click on the ‘Connect’ button to continue.

Connect Constant Contact to WPForms

Once the connection is complete, you’ll see a checkmark next to the Constant Contact tab.

This shows that the connection is working and verified.

Constant Contact connection verified

WPForms will ask you which account and list you’d like to use for this contact form. When you select a list, it’ll add new email subscribers to the list of your choice.

Make sure to choose the appropriate account and list.

Choose email list from Constant Contact

Next, you want to add the list of fields that you plan on capturing from the contact form.

For example, if you intend to capture their Full Name and Email, select those fields from the appropriate dropdown menus.

Constant Contact list fields

Scroll to the bottom of the list fields box and click on ‘Enable Conditional Logic.’ This means that the signup option only takes effect once the user has completed a specific action, such as providing their email address.

Make sure to choose the required field users must complete for the signup to go through. For instance, you most likely want their email address to be required, while their name and the email list signup remain optional.

Enable conditional logic for WPForms

5. Embed the Contact Form Into a Page

Now, you’re ready to add the contact form to a post or page on your WordPress website.

Scroll up and click on the ‘Embed’ button located on the top right of the screen.

Embed contact form to contact page

Assuming you already have a contact page, you’ll click on the ‘Select Existing Page’ button.

If you don’t have a contact page, then you’ll choose the ‘Create New Page’ button.

Embed in a page contact form

You’ll be asked to choose the page you want to add your contact form to.

Once you’ve selected your form from the dropdown menu, click on ‘Let’s Go!’

Embed contact form to your contact page

You’ll be sent to your WordPress page with the WPForms form embedded inside. Customize the page to fit your needs.

When you’re ready, hit the ‘Publish’ or ‘Update’ button to make your page live.

Publish contact page in WordPress

Congratulations, you’ve successfully created and published your contact form. With the email signup box in your form, you’ll be able to collect subscribers to help grow your list as you receive new inquiries.

If you want to learn more about creating contact forms, see our detailed instructions on how to easily create a contact form in WordPress.

Best Practices for Designing Your Contact Form

To maximize your success and get the most out of your contact forms, you’ll want to follow these best practices.

Make Your Form GDPR Compliant

GDPR, or General Data Protection Regulation, is a data protection and privacy regulation designed to give consumers greater control over their personal data.

Compliance is required if you plan on collecting personal information from anyone living in the European Union.

Fortunately, you can easily create GDPR compliant forms in WordPress with WPForms.

Just head over to WPForms >> Settings in your WordPress admin area. Then, under the ‘General’ tab, you’ll find the ‘GDPR’ section.

General settings in WPForms

Scroll down to that section and check the ‘GDPR Enhancements’ box.

Then check the ‘Disable User Cookies’ box if you want to remove user tracking cookies. You can also tick the ‘Disable User Details’ option so WPForms doesn’t collect user IP addresses.

GDPR in WPForms

Configure Form Notifications

It’s a good idea to set your form notifications properly.

A form notification is an email that goes out to the user once they submit a message and subscribe to your newsletter from the contact form.

Just head over to the Settings column in the WPForms builder and select Notifications. Make sure to toggle the ‘Enable Notifications’ button on.

Then, fill in the fields based on your intent. You can update the subject line, the name, and the email.

Enable notifications

Scroll down to configure the email message.

Once you’ve written your email message, click the ‘Save’ button up top.

Confirmation email message

After users submit the form and subscribe to your list, you can use the opportunity to redirect them to other pages and get even more conversions.

For instance, you can direct them to a thank you page along with other high-converting articles.

Track Your Results

Make sure you’re tracking your WordPress form so you can see the number of views and conversions it gets.

WPForms has a built-in user journey feature that shows which pages users visit before they land on your form.

If you’d like even more in-depth tracking, we recommend using MonsterInsights.

Turn on CAPTCHA for Spam Protection

Form spam is a big problem that every website deals with. Countless spammers try to submit phishing links or harvest your direct email address for attacks.

The WPForms CAPTCHA feature helps prevent bots from submitting your contact form.

You can read our guide on how to add CAPTCHA in WordPress to learn more about how it works and how to include it in your contact forms.

Limit the Number of Fields

Long forms are boring and can deter users from completing the form.

If you want to increase form submissions and maximize conversion rates, make sure to limit your contact form to under five fields.

How to Grow Your Email List With Other Forms

Building a WordPress contact form is just one way to add new subscribers to your email list. There are dozens of other list-building strategies besides adding an email optin when someone sends you a message.

Here are several ways to do so using various types of forms.

Pop-up Sign-up Form

Instantly grab the attention of visitors with signup forms that pop up after a certain amount of time. You can add a pop-up signup form to any webpage and choose when you want it to trigger.

We recommend creating Exit-Intent pop-ups with OptinMonster. These forms are less intrusive since they only appear when a user is about to leave your website. This can result in a less disruptive user experience than pop-ups that appear immediately upon arrival, which can annoy or deter visitors.

You can even make your pop-ups more interactive and animated with a slide-out contact form. These forms capture visitors’ attention and allow them to quickly fill out the form without leaving the current page.

Sidebar Sign-up Form

Placing a signup form in the sidebar makes it easily accessible to visitors on every page of your website. The added convenience can increase the odds of users subscribing to your newsletter.

Inline or After Post Sign-up Form

Readers are on your website for a reason. The less you interrupt them, the more likely they are to stick around and convert.

Placing a sign-up form after a blog post is less intrusive and allows you to tailor your call to action (CTA) to the content the reader has just consumed. You can place relevant offers in front of visitors to boost conversions since the CTA aligns with their interests.

If you want to boost conversions and turn readers into paying customers, read our other tutorials, such as our guide on how to create an email newsletter the right way or our expert guide on easy ways to grow your email list fast.

If you liked this article, then please subscribe to our YouTube Channel for WordPress video tutorials. You can also find us on Twitter and Facebook.

The post How to Use Contact Form to Grow Your Email List in WordPress first appeared on WPBeginner.

Centralized Control Plane for SaaS Infra: Part 1

For a couple of years, my journey has revolved around building control planes for data infrastructure startups. As an engineer, I have been fortunate to gain invaluable insights into the challenges and intricacies of developing successful SaaS products. Drawing from that firsthand experience, this series of blog posts delves into the lessons learned and shares my goals at DataInfra, where I am building a centralized control plane for SaaS infrastructure. Join me as we explore the critical aspects and key considerations of constructing effective control planes in the dynamic and competitive SaaS industry.

Introduction

Before we delve into the content, let's address the ambiguity surrounding the term "Control Plane for SaaS Infrastructure." In this context, the control plane is the centralized system that governs and orchestrates the deployment, configuration, and operation of applications and services on data planes. Data planes can live within the SaaS provider's network or on the customer's premises. By SaaS infra, I specifically mean the service provisioning layer in a SaaS product.
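One way to picture that governing role is as a reconciliation loop: the control plane compares the desired state it holds centrally against what each data plane reports, then issues corrective actions. The Python sketch below is a simplified illustration of the concept, not DataInfra's implementation; all names are invented for the example.

```python
# Core control-plane idea: converge observed state toward desired state.
# `desired` and `observed` map service names to their configuration specs.
def reconcile(desired, observed):
    actions = []
    for name, spec in desired.items():
        if name not in observed:
            actions.append(("deploy", name, spec))   # missing on the data plane
        elif observed[name] != spec:
            actions.append(("update", name, spec))   # drifted from desired config
    for name in observed:
        if name not in desired:
            actions.append(("remove", name))         # no longer requested
    return actions
```

A real control plane runs a loop like this continuously per data plane, which is what lets it manage fleets both inside the provider's network and on customer premises.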

WebRTC 102: Understanding SDP Internals

As a WebRTC developer, you've probably heard the term "SDP" thrown around quite a bit, but what exactly is SDP and why is it important in WebRTC? In this article, we'll explore SDP — its meaning and how it works in WebRTC, and offer tips and best practices for working with it.

Let’s dive in!
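As a preview of what we will be dissecting, here is what a heavily trimmed SDP offer for a single Opus audio track might look like; every field value below is made up for illustration:

```
v=0
o=- 20518 0 IN IP4 203.0.113.1
s=-
t=0 0
m=audio 9 UDP/TLS/RTP/SAVPF 111
c=IN IP4 0.0.0.0
a=rtpmap:111 opus/48000/2
a=sendrecv
```

Each `m=` line opens a media section, and the `a=` attribute lines that follow refine it; these are the fields we will unpack as we go.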

Jenkins Test Cases Template: Top 140+ Jenkins Test Cases

Jenkins is a popular open-source CI/CD tool that helps automate various aspects of software development, including building, testing, and deploying applications. Jenkins is highly extensible, with over 1,000 available plugins that integrate it with various third-party tools and technologies.

Consider a scenario where you're working on a large software project with multiple developers. Testing each and every change manually can be time-consuming and prone to human error. This is where Jenkins test cases can come in handy.

How To Use Serverless Architecture

Serverless architecture is becoming increasingly popular for fintech developers and CTOs looking to simplify their tech stack. The technology offers many benefits, including reduced server management complexity and lower costs due to its pay-as-you-go model.  

But how exactly do you implement serverless technology? In this article, I provide a comprehensive, step-by-step guide to using serverless architecture, with practical tips and real-world use cases.

Apache SeaTunnel, Milvus, and OpenAI Improve Accuracy and Efficiency of Book Title Similarity Search

Currently, existing book search solutions (such as those used in public libraries) rely heavily on keyword matching rather than a semantic understanding of book titles. As a result, search results may not meet our needs well and can even be vastly different from what we expect. Keyword matching alone cannot achieve semantic understanding and, therefore, cannot grasp the searcher’s true intent.

So, is there a better way to conduct book searches more accurately and efficiently? The answer is yes! In this article, I will introduce how to combine the use of Apache SeaTunnel, Milvus, and OpenAI for similarity search to achieve a semantic understanding of the entire book title and make search results more accurate.
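Before walking through the pipeline, it helps to see the core idea in isolation: titles are embedded as vectors (by OpenAI, and stored in Milvus in the setup this article describes), and search ranks titles by cosine similarity to the query's vector. The toy Python below uses hand-made two-dimensional vectors purely for illustration; real embeddings have hundreds of dimensions.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, catalog, k=2):
    """Rank (title, vector) pairs by similarity to the query embedding."""
    ranked = sorted(catalog, key=lambda tv: cosine(query_vec, tv[1]), reverse=True)
    return [title for title, _ in ranked[:k]]
```

Milvus performs exactly this ranking step, but at scale and with approximate nearest-neighbor indexes instead of a full sort.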

Building a Slack Chatbot With OpenAI API, NodeJs, and FL0

The advent of OpenAI's API has empowered countless developers to create sophisticated chatbots without breaking a sweat.

We've noticed that there's a considerable amount of curiosity within the developer community regarding the workings and features of FL0. This gave us the idea to build a simple chatbot using the GPT API.

Aperture in Action: How We Solved PostgreSQL Performance Challenges

Even thirty years after its inception, PostgreSQL continues to gain traction, thriving in an environment of rapidly evolving open-source projects. While some technologies appear and vanish swiftly, others, like the PostgreSQL database, prove their longevity, illustrating that they can withstand the test of time. It has become the preferred choice of many organizations, used for everything from general data storage to an asteroid-tracking database, with companies running PostgreSQL clusters that hold petabytes of data.

Operating PostgreSQL at scale in a production environment can be challenging. Companies have experienced downtime and performance problems, resulting in financial losses and diminished trust, especially when outages extend beyond a few hours. A case in point is the GitLab database outage in January 2017. While many factors contributed to that outage, overload played a significant role: their published timeline shows that the overload occurring at the time took hours to bring under control.

A High-Level Overview Of Large Language Model Concepts, Use Cases, And Tools

Even though a simple online search turns up countless tutorials on using Artificial Intelligence (AI) for everything from generative art to making technical documentation easier to use, there’s still plenty of mystery around it. What goes inside an AI-powered tool like ChatGPT? How does Notion’s AI feature know how to summarize an article for me on the fly? Or how are a bunch of sites suddenly popping up that can aggregate news and auto-publish a slew of “new” articles from it?

It all can seem like a black box of mysterious, arcane technology that requires an advanced computer science degree to understand. What I want to show you, though, is how we can peek inside that box and see how everything is wired up.

Specifically, this article is about large language models (LLMs) and how they “imbue” AI-powered tools with intelligence for answering queries in diverse contexts. I have previously written tutorials on how to use an LLM to transcribe and evaluate the expressed sentiment of audio files. But I want to take a step back and look at another way around it that better demonstrates — and visualizes — how data flows through an AI-powered tool.

We will discuss LLM use cases, look at several new tools that abstract the process of modeling AI with LLM with visual workflows, and get our hands on one of them to see how it all works.

Large Language Models Overview

Forgoing technical terms, LLMs are models trained on vast sets of text data. When we integrate an LLM into an AI system, we enable the system to leverage the language knowledge and capabilities the LLM developed through its training. You might think of it as dumping a lifetime of knowledge into an empty brain, assigning that brain to a job, and putting it to work.

“Knowledge” is a convoluted term as it can be subjective and qualitative. We sometimes describe people as “book smart” or “street smart,” and they are both types of knowledge that are useful in different contexts. This is what artificial “intelligence” is created upon. AI is fed with data, and that is what it uses to frame its understanding of the world, whether it is text data for “speaking” back to us or visual data for generating “art” on demand.

Use Cases

As you may imagine (or have already experienced), the use cases of LLMs in AI are many and along a wide spectrum. And we’re only in the early days of figuring out what to make with LLMs and how to use them in our work. A few of the most common use cases include the following.

  • Chatbot
    LLMs play a crucial role in building chatbots for customer support, troubleshooting, and interactions, thereby ensuring smooth communications with users and delivering valuable assistance. Salesforce is a good example of a company offering this sort of service.
  • Sentiment Analysis
    LLMs can analyze text for emotions. Organizations use this to collect data, summarize feedback, and quickly identify areas for improvement. Grammarly’s “tone detector” is one such example, where AI is used to evaluate sentiment conveyed in content.
  • Content Moderation
    Content moderation is an important aspect of social media platforms, and LLMs come in handy. They can spot and remove offensive content, including hate speech, harassment, or inappropriate photos and videos, which is exactly what Hubspot’s AI-powered content moderation feature does.
  • Translation
    Thanks to impressive advancements in language models, translation has become highly accurate. One noteworthy example is Meta AI’s latest model, SeamlessM4T, which represents a big step forward in speech-to-speech and speech-to-text technology.
  • Email Filters
    LLMs can be used to automatically detect and block unwanted spam messages, keeping your inbox clean. When trained on large datasets of known spam emails, the models learn to identify suspicious links, phrases, and sender details. This allows them to distinguish legitimate messages from those trying to scam users or market illegal or fraudulent goods and services. Google has offered AI-based spam protection since 2019.
  • Writing Assistance
    Grammarly is the ultimate example of an AI-powered service that uses an LLM to “learn” how you write in order to make writing suggestions. But this extends to other services as well, including Gmail’s “Smart Reply” feature. The same is true of Notion’s AI feature, which can summarize a page of content or meeting notes. Hemingway’s app recently shipped a beta AI integration that corrects writing on the spot.
  • Code and Development
    This is the one that has many developers worried about AI coming after their jobs. It hit the commercial mainstream with GitHub Copilot, a service that performs automatic code completion. Same with Amazon’s CodeWhisperer. Then again, AI can be used to help sharpen development skills, which is the case of MDN’s AI Help feature.

Again, these are still the early days of LLM. We’re already beginning to see language models integrated into our lives, whether it’s in our writing, email, or customer service, among many other services that seem to pop up every week. This is an evolving space.

Types Of Models

There are all kinds of AI models tailored for different applications. You can scroll through Sapling’s large list of the most prominent commercial and open-source LLMs to get an idea of all the diverse models that are available and what they are used for. Each model is the context in which AI views the world.

Let’s look at some real-world examples of how LLMs are used for different use cases.

Natural Conversation
Chatbots need to master the art of conversation. Models like Anthropic’s Claude are trained on massive collections of conversational data to chat naturally on any topic. As a developer, you can tap into Claude’s conversational skills through an API to create interactive assistants.

Emotions
Developers can leverage powerful pre-trained models like Falcon for sentiment analysis. By fine-tuning Falcon on datasets with emotional labels, it can learn to accurately detect the sentiment in any text provided.

Translation
Meta AI released SeamlessM4T, an LLM trained on huge translated speech and text datasets. This multilingual model is groundbreaking because it translates speech from one language into another without an intermediary step between input and output. In other words, SeamlessM4T enables real-time voice conversations across languages.

Content Moderation
As a developer, you can integrate powerful moderation capabilities using OpenAI’s API, which includes an LLM trained thoroughly on flagging toxic content for the purpose of community moderation.

Spam Filtering
Some LLMs are used to develop AI programs capable of text classification tasks, such as spotting spam emails. As an email user, the simple act of flagging certain messages as spam further informs AI about what constitutes an unwanted email. After seeing plenty of examples, AI is capable of establishing patterns that allow it to block spam before it hits the inbox.
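The flag-and-learn loop described here can be reduced to a toy word-count classifier. This is nowhere near an LLM-based filter, just a minimal sketch of how user flags turn into patterns that block future mail; every name below is invented for the example.

```python
from collections import Counter

class TinySpamFilter:
    """Toy text classifier: learns word counts from user-flagged messages."""

    def __init__(self):
        self.spam_words = Counter()
        self.ham_words = Counter()

    def flag(self, text, is_spam):
        """Record a user's spam/not-spam judgment of a message."""
        target = self.spam_words if is_spam else self.ham_words
        target.update(text.lower().split())

    def is_spam(self, text):
        """Classify new mail by whether its words look more spammy than not."""
        words = text.lower().split()
        spam_score = sum(self.spam_words[w] for w in words)
        ham_score = sum(self.ham_words[w] for w in words)
        return spam_score > ham_score
```

Each flagged message shifts the word counts, which is the same feedback loop, writ very small, that production filters rely on.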

Not All Language Models Are Large

While we’re on the topic, it’s worth mentioning that not all language models are “large.” There are plenty of models with smaller sets of data that may not go as deep as GPT-4 but are well-suited for personal or niche applications.

For example, check out the chat feature that Luke Wroblewski added to his site. He’s using a smaller language model, so the app at least knows how to form sentences, but it is primarily trained on Luke’s archive of blog posts. Typing a prompt into the chat returns responses that read very much like Luke’s writing. Better yet, Luke’s virtual persona will admit when a topic is outside the scope of its knowledge. An LLM would give the assistant too much general information and would likely try to answer any question, regardless of scope. Members of the University of Edinburgh and the Allen Institute for AI published a paper in January 2023 (PDF) advocating the use of specialized language models for more narrowly targeted tasks.

Low-Code Tools For LLM Development

So far, we’ve covered what an LLM is, common examples of how it can be used, and how different models influence the AI tools that integrate them. Let’s discuss that last bit about integration.

Many technologies come with a steep learning curve. That’s especially true of emerging tools that introduce new technical concepts, as I would argue is the case with AI in general. While AI is not a new term and has been studied and developed in various forms over decades, its entrance into the mainstream is certainly new, sparking plenty of buzz in the front-end development community, and many of us are scrambling to wrap our minds around it.

Thankfully, new resources can help abstract all of this for us. They can power an AI project you might be working on, but more importantly, they are useful for learning the concepts of LLM by removing advanced technical barriers. You might think of them as “low” and “no” code tools, like WordPress.com vs. self-hosted WordPress or a visual React editor that is integrated with your IDE.

Low-code platforms make it easier to leverage large language models without needing to handle all the coding and infrastructure yourself. Here are some top options:

Chainlit

Chainlit is an open-source Python package for building ChatGPT-style chat interfaces with minimal code.

LLMStack

LLMStack is another low-code platform for building AI apps and chatbots that leverage large language models. Multiple models can be chained together into “pipelines” for channeling data. LLMStack supports standalone app development and also provides hosting, so an app can be integrated into sites and products via API or connected to platforms like Slack or Discord.

LLMStack is also what powers Promptly, a cloud version of the app with freemium subscription pricing that includes a free tier.

FlowiseAI

FlowiseAI is an open-source, low-code tool for building LLM-powered apps with a drag-and-drop visual editor. It’s the tool we will use for the hands-on portion of this article, so we’ll look at it in much more detail shortly.

Stack AI

Stack AI is another no-code offering for developing AI apps integrated with LLMs. It is much like FlowiseAI, particularly the drag-and-drop interface that visualizes connections between apps and APIs. One thing I particularly like about Stack AI is how it incorporates “data loaders” to fetch data from other platforms, like Slack or a Notion database.

I also like that Stack AI provides a wider range of LLM offerings. That said, it will cost you. While Stack AI offers a free pricing tier, it is restricted to a single project with only 100 runs per month. Bumping up to the first paid tier will set you back $199 per month, which I suppose goes toward the cost of accessing a wider range of LLM sources. For example, FlowiseAI works with any LLM in the Hugging Face community. So does Stack AI, but it also gives you access to commercial LLM offerings, like Anthropic’s Claude models and Google’s PaLM, as well as additional open-source offerings from Replicate.

Voiceflow

Voiceflow is a no-code platform focused on designing, prototyping, and launching conversational chat and voice assistants.

Install FlowiseAI

First things first, we need to get FlowiseAI up and running. FlowiseAI is an open-source application that can be installed from the command line.

You can install it with the following command:

npm install -g flowise

Once installed, start up Flowise with this command:

npx flowise start

From here, you can access FlowiseAI in your browser at localhost:3000.

It’s possible to serve FlowiseAI so that you can access it online and provide access to others, which is well-covered in the documentation.
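Once a chatflow is saved, Flowise also exposes it over a REST prediction endpoint, which is handy for testing from code. Below is a minimal Python sketch, using only the standard library, that builds such a request; the chatflow ID is a hypothetical placeholder you would replace with the ID copied from your own dashboard.

```python
import json
import urllib.request

# Assumes a locally running Flowise instance; the chatflow ID below is a
# placeholder — copy the real one from your FlowiseAI dashboard.
FLOWISE_URL = "http://localhost:3000/api/v1/prediction/"
CHATFLOW_ID = "your-chatflow-id"

def build_request(question: str) -> urllib.request.Request:
    """Build a POST request for Flowise's prediction endpoint."""
    payload = json.dumps({"question": question}).encode("utf-8")
    return urllib.request.Request(
        FLOWISE_URL + CHATFLOW_ID,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Sending the request (requires Flowise running locally):
# with urllib.request.urlopen(build_request("Hello!")) as resp:
#     print(json.loads(resp.read()))
```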

Setting Up Retrievers

Retrievers are templates that the multi-prompt chain will query.

Different retrievers provide different templates that query different things. In this case, we want to select the Prompt Retriever. Unlike other retriever types, which fetch documents such as PDF, TXT, or CSV files, the Prompt Retriever does not store or fetch documents at all; it stores a named prompt template that the Multi-Prompt Chain can query.

Let’s take the first step toward creating our career assistant by adding a Prompt Retriever to the FlowiseAI canvas. The “canvas” is the visual editing interface we’re using to cobble the app’s components together and see how everything connects.

Adding the Prompt Retriever requires us to first navigate to the Chatflow screen, which is actually the initial page when first accessing FlowiseAI following installation. Click the “Add New” button located in the top-right corner of the page. This opens up the canvas, which is initially empty.

The “Plus” (+) button is what we want to click to open up the library of items we can add to the canvas. Expand the Retrievers tab, then drag and drop the Prompt Retriever to the canvas.

The Prompt Retriever takes three inputs:

  1. Name: The name of the stored prompt;
  2. Description: A brief description of the prompt (i.e., its purpose);
  3. Prompt system message: The initial prompt message that provides context and instructions to the system.

Our career assistant will provide career suggestions, tool recommendations, salary information, and cities with matching jobs. We can start by configuring the Prompt Retriever for career suggestions. Here is placeholder content you can use if you are following along:

  • Name: Career Suggestion;
  • Description: Suggests careers based on skills and experience;
  • Prompt system message: You are a career advisor who helps users identify a career direction and upskilling opportunities. Be clear and concise in your recommendations.

Be sure to repeat this step three more times to create each of the following:

  • Tool recommendations,
  • Salary information,
  • Locations.
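For reference, here are the four Prompt Retriever configurations written out as plain dictionaries. Note that FlowiseAI is configured through its visual editor, not through code, and the descriptions and system messages for the last three retrievers are illustrative placeholders you should adapt.

```python
# The four Prompt Retriever configurations used in this tutorial.
# Only the first entry comes from the article; the other three are
# placeholder suggestions.
retrievers = [
    {
        "name": "Career Suggestion",
        "description": "Suggests careers based on skills and experience",
        "system_message": "You are a career advisor who helps users "
                          "identify a career direction and upskilling "
                          "opportunities. Be clear and concise in your "
                          "recommendations.",
    },
    {
        "name": "Tool Recommendation",
        "description": "Recommends tools for a given career path",
        "system_message": "You recommend tools that help users succeed "
                          "in their chosen career. Be clear and concise.",
    },
    {
        "name": "Salary Information",
        "description": "Provides typical salary ranges for a career",
        "system_message": "You provide typical salary information for a "
                          "given career and experience level.",
    },
    {
        "name": "Locations",
        "description": "Suggests cities with matching job openings",
        "system_message": "You suggest cities where jobs matching the "
                          "user's career interests are available.",
    },
]
```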

Adding A Multi-Prompt Chain

A Multi-Prompt Chain is a class consisting of two or more prompts that are connected to establish a conversation-like interaction between the user and the career assistant.

The idea is that we combine the four prompts we’ve already added to the canvas and connect them to the proper tools (i.e., chat models) so that the career assistant can prompt the user for information and collect that information in order to process it and return the generated career advice. It’s sort of like a normal system prompt but with a conversational interaction.

The Multi-Prompt Chain node can be found in the “Chains” section of the same inserter we used to place the Prompt Retriever on the canvas.

Once the Multi-Prompt Chain node is added to the canvas, connect it to the prompt retrievers. This enables the chain to receive user responses and employ the most appropriate language model to generate responses.

To connect, click the tiny dot next to the “Prompt Retriever” label on the Multi-Prompt Chain and drag it to the “Prompt Retriever” dot on each Prompt Retriever to draw a line between the chain and each prompt retriever.

Integrating Chat Models

This is where we start interacting with LLMs. In this case, we will integrate Anthropic’s Claude chat model. Claude is a powerful LLM designed for tasks related to complex reasoning, creativity, thoughtful dialogue, coding, and detailed content creation. You can get a feel for Claude by registering for access to interact with it, similar to how you’ve played around with OpenAI’s ChatGPT.

From the inserter, open “Chat Models” and drag the ChatAnthropic option onto the canvas.

Once the ChatAnthropic chat model has been added to the canvas, connect its node to the Multi-Prompt Chain’s “Language Model” node to establish a connection.

It’s worth noting at this point that Claude requires an API key in order to access it. Sign up on the Anthropic website to create a new API key. Once you have one, provide it to the Multi-Prompt Chain in the “Connect Credential” field.
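FlowiseAI handles the Claude connection for you once the credential is set, but for a sense of what happens under the hood, here is a rough standard-library sketch of a raw call to Anthropic’s Messages API. The model name below is only an example; check Anthropic’s documentation for the current list.

```python
import json
import urllib.request

API_URL = "https://api.anthropic.com/v1/messages"

def build_claude_request(api_key: str, prompt: str,
                         model: str = "claude-3-haiku-20240307"):
    """Build a POST request for Anthropic's Messages API (sketch only)."""
    payload = json.dumps({
        "model": model,
        "max_tokens": 256,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "x-api-key": api_key,      # your Anthropic API key
            "anthropic-version": "2023-06-01",
            "content-type": "application/json",
        },
        method="POST",
    )

# Sending the request (requires a valid API key and network access):
# with urllib.request.urlopen(build_claude_request(key, "Hi")) as resp:
#     print(json.loads(resp.read()))
```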

Adding A Conversational Agent

The Agent component in FlowiseAI allows our assistant to do more tasks, like accessing the internet and sending emails.

It connects external services and APIs, making the assistant more versatile. For this project, we will use a Conversational Agent, which can be found in the inserter under “Agent” components.

Once the Conversational Agent has been added to the canvas, connect it to the Chat Model to “train” the model on how to respond to user queries.

Integrating Web Search Capabilities

The Conversational Agent requires additional tools and memory. For example, we want to enable the assistant to perform Google searches to obtain information it can use to generate career advice. The Serp API node can do that for us and is located under “Tools” in the inserter.

Like Claude, Serp API requires an API key to be added to the node. Register with the Serp API site to create an API key. Once the API is configured, connect Serp API to the Conversational Agent’s “Allowed Tools” node.
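Again, Flowise wires this service up for you, but roughly speaking, the Conversational Agent is issuing search queries of the following shape behind the scenes. This is only a sketch of a raw Serp API search URL; parameter names beyond `q` and `api_key` may vary by engine.

```python
from urllib.parse import urlencode

def serpapi_url(api_key: str, query: str) -> str:
    """Build a Serp API Google-search URL (illustrative sketch)."""
    params = {
        "engine": "google",  # which search engine to query
        "q": query,          # the search terms
        "api_key": api_key,  # your Serp API key
    }
    return "https://serpapi.com/search?" + urlencode(params)
```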

Building In Memory

The Memory component enables the career assistant to retain conversation information.

This way, the app remembers the conversation and can reference it during the interaction or even to inform future interactions.

There are different types of memory, of course. Several of the options in FlowiseAI require additional configurations, so for the sake of simplicity, we are going to add the Buffer Memory node to the canvas. It is the most general type of memory provided by LangChain, taking the raw input of the past conversation and storing it in a history parameter for reference.
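Conceptually, buffer memory is simple: keep the raw back-and-forth of the conversation in a history that can be replayed into the next prompt. LangChain’s actual Buffer Memory is more featureful; the tiny class below (all names invented) only illustrates the idea.

```python
class BufferMemory:
    """Minimal sketch of buffer-style conversation memory."""

    def __init__(self):
        self.history = []  # raw past turns, in order

    def save(self, user_input, ai_output):
        # Store each exchange as two turns: the human's and the AI's.
        self.history.append(("Human", user_input))
        self.history.append(("AI", ai_output))

    def load(self):
        # Flatten the stored turns into one context string that can be
        # prepended to the next prompt.
        return "\n".join(f"{role}: {text}" for role, text in self.history)
```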

Buffer Memory connects to the Conversational Agent’s “Memory” node.

The Final Workflow

At this point, our workflow looks something like this:

  • Four prompt retrievers that provide the prompt templates for the app to converse with the user.
  • A multi-prompt chain connected to each of the four prompt retrievers that chooses the appropriate tools and language models based on the user interaction.
  • The Claude language model connected to the multi-prompt chain to “train” the app.
  • A conversational agent connected to the Claude language model to allow the app to perform additional tasks, such as Google web searches.
  • Serp API connected to the conversational agent to perform bespoke web searches.
  • Buffer memory connected to the conversational agent to store, i.e., “remember,” conversations.

If you haven’t done so already, this is a great time to save the project and give it a name like “Career Assistant.”

Final Demo

Watch the following video for a quick demonstration of the final workflow we created together in FlowiseAI. The prompts lag a little bit, but you should get the idea of how all of the components we connected are working together to provide responses.

Conclusion

As we wrap up this article, I hope that you’re more familiar with the concepts, use cases, and tools of large language models. LLMs are a key component of AI because they are the “brains” of the application, providing the lens through which the app understands how to interact with and respond to human input.

We looked at a wide variety of use cases for LLMs in an AI context, from chatbots and language translations to writing assistance and summarizing large blocks of text. Then, we demonstrated how LLMs fit into an AI application by using FlowiseAI to create a visual workflow. That workflow not only provided a visual of how an LLM, like Claude, informs a conversation but also how it relies on additional tools, such as APIs, for performing tasks as well as memory for storing conversations.

The career assistant tool we developed together in FlowiseAI was a detailed visual look inside the black box of AI, providing us with a map of the components that feed the app and how they all work together.

Now that you know the role that LLMs play in AI, what sort of models would you use? Is there a particular app idea you have where a specific language model would be used to train it?

References