Why Choose FastAPI over Flask?

To help you quickly get started with Milvus, the open-source vector database, we released another affiliated open-source project, Milvus Bootcamp, on GitHub. Milvus Bootcamp not only provides scripts and data for benchmark tests, but also includes projects that use Milvus to build MVPs (minimum viable products), such as a reverse image search system, a video analysis system, a QA chatbot, and a recommender system. In Milvus Bootcamp you can learn how to apply vector similarity search in a world full of unstructured data and get some hands-on experience with Milvus.

We provide both front-end and back-end services for the projects in Milvus Bootcamp. However, we recently decided to change the web framework from Flask to FastAPI. This article explains our motivation behind the change by clarifying why we chose FastAPI over Flask.

Web Frameworks for Python

A web framework is a collection of packages or modules: a software architecture for web development that lets you write web applications or services without handling low-level details such as protocols, sockets, or process/thread management. Using a web framework can significantly reduce the workload of developing web applications, as you can simply plug your code into the framework with no extra attention needed for concerns such as data caching, database access, and data security verification. For more information about what a web framework for Python is, see Web Frameworks.

There are various types of Python web frameworks. The mainstream ones include Django, Flask, Tornado, and FastAPI.
  • Flask

Flask is a lightweight micro-framework for Python, with a simple, easy-to-use core on top of which you can develop your own web applications. The Flask core is also extensible, so Flask supports on-demand extension of different functions to meet your personalized needs during web application development. That is to say, with its library of various plug-ins, Flask lets you develop powerful websites.

Flask has the following characteristics:


  1. Flask is a microframework that does not rely on specific tools or third-party libraries to provide shared functionality. Flask does not include a database abstraction layer or form validation. However, Flask is highly extensible and supports adding application functionality in a way similar to implementations within Flask itself. Relevant extensions include object-relational mappers, form validation, upload handling, open authentication technologies, and several common tools designed for web frameworks.
  2. Flask is a web application framework based on WSGI (Web Server Gateway Interface). WSGI is a simple interface connecting a web server with a web application or framework defined for the Python language.
  3. Flask builds on two core libraries, Werkzeug and Jinja2. Werkzeug is a WSGI toolkit that implements request and response objects and practical utility functions, which allows you to build web frameworks on top of it. Jinja2 is a popular full-featured templating engine for Python, with full Unicode support and an optional but widely adopted integrated sandboxed execution environment.
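
As a minimal sketch of these pieces in action (the route and message are illustrative), a Flask application looks like this:

from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello, Flask!"

if __name__ == "__main__":
    app.run()  # Werkzeug's development server, port 5000 by default
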
  • FastAPI

FastAPI is a modern Python web application framework whose performance is on par with frameworks in Go and NodeJS. The core of FastAPI is based on Starlette and Pydantic. Starlette is a lightweight ASGI (Asynchronous Server Gateway Interface) framework/toolkit for building high-performance asyncio services. Pydantic is a library for data validation, serialization, and documentation based on Python type hints.
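
As a minimal sketch (the endpoint and the Pydantic model are illustrative), a FastAPI application looks like this:

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    price: float

@app.post("/items")
async def create_item(item: Item):
    # The request body is validated against the Item model automatically.
    return item

It is served with an ASGI server, for example: uvicorn main:app --reload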

FastAPI has the following characteristics:


  1. FastAPI is built on ASGI rather than WSGI, so views can be declared with the async keyword and run concurrently on an asyncio event loop.
  2. FastAPI uses Pydantic models, driven by standard Python type hints, to validate, serialize, and document request and response data.
  3. FastAPI automatically generates interactive API documentation (Swagger UI and ReDoc) from your application's OpenAPI schema.
How to Build a GraphQL API for Text Analytics with Python, Flask and Fauna

GraphQL is a query language and server-side runtime for building APIs. It can also be thought of as the syntax you write to describe the kind of data you want from an API. What this means for you as a backend developer is that with GraphQL, you can expose a single endpoint on your server to handle GraphQL queries from client applications, as opposed to the many endpoints you'd need to create with REST to handle specific kinds of requests and in turn serve data from those endpoints.

If a client needs new data that is not already provisioned, you'd need to create new endpoints for it and update the API docs. GraphQL makes it possible to send queries to a single endpoint; these queries are then passed on to the server, handled by the server's predefined resolver functions, and the requested information is returned over the network.

Running the Flask server

Flask is a minimalist framework for building Python servers. I always use Flask to expose my GraphQL APIs that serve my machine learning models; requests from client applications are then forwarded to them by the GraphQL gateway. Overall, a microservice architecture allows us to use the best technology for each job, and it allows us to use advanced patterns like schema federation.

In this article, we will start small with an implementation of the so-called Levenshtein distance. We will use the well-known NLTK library and expose the Levenshtein distance functionality through a GraphQL API. I assume that you are familiar with basic GraphQL concepts like building GraphQL mutations.

Note: We will be working with a free and open-source example repository.

The projects use Pipenv for managing Python dependencies. From the project folder, we can create our virtual environment with this:
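
Assuming the standard Pipenv workflow, that command is:

pipenv shell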

…and install dependencies from Pipfile.
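
With Pipenv, installing everything listed in the Pipfile is:

pipenv install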

We usually define a couple of script aliases in our Pipfile to ease our development workflow.
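
For example, the aliases might look like this in the Pipfile (the script name and module path are assumptions):

[scripts]
dev = "python app.py"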

This allows us to run our dev environment easily with a command alias, as follows:
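
Given the [scripts] entry above, that is:

pipenv run dev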

The Flask server should then be exposed by default at port 5000. You can immediately move on to GraphQL Playground, which serves as an IDE with live documentation and query execution for GraphQL servers. GraphQL Playground uses so-called GraphQL introspection to fetch information about our GraphQL types. The following code initializes our Flask server:
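
A minimal sketch of that initialization, using the Flask-GraphQL extension (the module and variable names are assumptions):

from flask import Flask
from flask_graphql import GraphQLView

from schema import schema  # the Graphene schema defined later in this article

app = Flask(__name__)

# Expose a single /graphql endpoint with the in-browser IDE enabled.
app.add_url_rule(
    "/graphql",
    view_func=GraphQLView.as_view("graphql", schema=schema, graphiql=True),
)

if __name__ == "__main__":
    app.run(debug=True)  # port 5000 by default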

It is good practice to use a WSGI server when running in a production environment. Therefore, we have to set up a script alias for gunicorn:
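
Alongside the dev alias in the [scripts] section of the Pipfile (the app:app module path is an assumption):

start = "gunicorn app:app"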

Levenshtein distance (edit distance)

The Levenshtein distance, also known as edit distance, is a string metric. It is defined as the minimum number of single-character edits needed to change one character sequence $a$ into another one $b$. If we denote the lengths of the sequences by $|a|$ and $|b|$ respectively, we get the following:

$$
\operatorname{lev}_{a,b}(i,j) =
\begin{cases}
\max(i,j) & \text{if } \min(i,j) = 0,\\
\min \begin{cases}
\operatorname{lev}_{a,b}(i-1,j) + 1\\
\operatorname{lev}_{a,b}(i,j-1) + 1\\
\operatorname{lev}_{a,b}(i-1,j-1) + 1_{(a_i \neq b_j)}
\end{cases} & \text{otherwise,}
\end{cases}
$$

where $\operatorname{lev}_{a,b}(i,j)$ is the distance between the first $i$ characters of $a$ and the first $j$ characters of $b$, and $1_{(a_i \neq b_j)}$ is the indicator function, equal to $0$ when $a_i = b_j$ and to $1$ otherwise. For more on the theoretical background, feel free to check out the Wiki.

In practice, let's say that someone misspelt 'machine learning' and wrote 'machinlt lerning'. We would need to make the following edits:

Edit | Edit type    | Word state
0    |              | Machinlt lerning
1    | Substitution | Machinet lerning
2    | Deletion     | Machine lerning
3    | Insertion    | Machine learning

For these two strings, we get a Levenshtein distance equal to 3. The Levenshtein distance has many applications, such as spell checkers, correction systems for optical character recognition, and similarity calculations.

Building a GraphQL server with Graphene in Python

We will build the following schema in our article:
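
A sketch of the schema in SDL (the type and field names are reconstructions based on the variables used later in this article):

type Query {
  healthcheck: String!
}

input LevenshteinInput {
  s1: String!
  s2: String!
}

type Levenshtein {
  result: Int
}

type Mutation {
  levenshtein(input: LevenshteinInput!): Levenshtein
}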

Each GraphQL schema is required to have at least one query. We usually define our first query to health-check our microservice. The query can be called like this:

query {
  healthcheck
}

However, the main function of our schema is to enable us to calculate the Levenshtein distance. We will use variables to pass dynamic parameters in the following GraphQL document:
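
A reconstruction of that document, consistent with the schema sketched above:

mutation ($input: LevenshteinInput!) {
  levenshtein(input: $input) {
    result
  }
}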

We have defined our schema so far in SDL format. In the Python ecosystem, however, we do not have libraries like graphql-tools, so we need to define our schema with a code-first approach. The schema is defined as follows using the Graphene library:
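
A sketch of that code-first definition (class and resolver names are assumptions; the NLTK call is discussed in more detail below):

import graphene
from nltk.metrics.distance import edit_distance


class Query(graphene.ObjectType):
    healthcheck = graphene.String()

    def resolve_healthcheck(root, info):
        return "ok"


class Levenshtein(graphene.Mutation):
    class Arguments:
        input = LevenshteinInput(required=True)  # input type defined just below

    result = graphene.Int()

    def mutate(root, info, input):
        # Delegate the distance calculation to NLTK.
        return Levenshtein(result=edit_distance(input.s1, input.s2))


class Mutation(graphene.ObjectType):
    levenshtein = Levenshtein.Field()


schema = graphene.Schema(query=Query, mutation=Mutation)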

We have followed the best practices for overall schema and mutations. Our input object type is written in Graphene as follows:
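
Again a sketch, matching the SDL above:

class LevenshteinInput(graphene.InputObjectType):
    s1 = graphene.String(required=True)
    s2 = graphene.String(required=True)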

We can then execute our mutation in the GraphQL Playground with the following variables:

{
  "input": {
    "s1": "test1",
    "s2": "test2"
  }
}

We obtain the Levenshtein distance between our two input strings; for our simple example of the strings test1 and test2, we get 1. We can leverage the well-known NLTK library for natural language processing (NLP). The following code is executed from the resolver:
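
The core of that resolver can be sketched with NLTK's edit_distance function, which implements the Levenshtein metric:

from nltk.metrics.distance import edit_distance

distance = edit_distance("test1", "test2")
print(distance)  # 1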

It is also straightforward to implement the Levenshtein distance ourselves using, for example, an iterative matrix, but I would suggest not reinventing the wheel and using the default NLTK functions.

Serverless GraphQL APIs with Fauna

First off, some introductions before we jump right in. It's only fair that I give Fauna a proper introduction, as it is about to make our lives a whole lot easier.

Fauna is a serverless database service that handles all optimization and maintenance tasks so that developers don't have to bother with them and can focus on developing their apps and shipping to market faster.

Again, serverless doesn't actually mean "no servers". To put it simply, serverless means you can get things working without necessarily having to set things up from scratch. Some apps that use serverless concepts don't have a backend service written from scratch; they employ cloud functions, which are technically scripts written on cloud platforms to handle necessary tasks like login, registration, and serving data.

Where does Fauna fit into all of this? When we build servers, we need to provision them with a database, which is usually a running instance of that database. With serverless technology like Fauna, we can shift that workload to the cloud and focus on actually writing our auth systems and implementing business logic for our app. Fauna also manages things like maintenance and scaling, which are usual causes for concern in systems that use conventional databases.

If you are interested in more info about Fauna and its features, check the Fauna docs. Let's get started with building our GraphQL API the serverless way with Fauna.

Requirements

  1. Fauna account: That's all you need for this session, so click here to go to the sign-up page.

Creating a Fauna Database

Log in to your Fauna account once you have created it. Once on the dashboard, you should see a button to create a new database; click on it and you should see a little form asking for the name of the database, resembling the one below:

I call mine "graphqlbyexample", but you can call yours anything you wish. Please ignore the pre-populate with demo data option; we don't need that for this demo. Click "Save" and you should be brought to a new screen, as shown below:

Adding a GraphQL Schema to Fauna

  1. In order to get our GraphQL server up and running, Fauna allows us to upload our own GraphQL schema. On the page we are currently on, you should see a GraphQL option; select it and it will prompt you to upload a schema file. This file contains raw GraphQL schema and is saved with either the .gql or .graphql file extension. Let's create our schema and upload it to Fauna to spin up our server.
  2. Create a new file anywhere you like. I'm creating it in the same directory as our previous app, because it has no impact on it. I'm calling it schema.gql, and we will add the following to it:
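
A minimal sketch of such a schema (field names are assumptions based on the fields referenced later in this article):

type User {
  name: String!
  email: String!
  password: String!
}

type Note {
  title: String!
  content: String!
  author: User!
}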

Here, we simply define our data types corresponding to our two tables (Note and User). Save this and go back to that page to upload the schema.gql we just created. Once that is done, Fauna processes it and takes us to a new page: our GraphQL API playground.

We have literally created a GraphQL server by simply uploading that really simple schema to Fauna. To highlight some of the really cool features that Fauna has, observe:

  1. Fauna automatically generates collections for us. If you notice, we did not create any collection (a collection translates to a table, if you are only familiar with relational databases). Fauna is a NoSQL database; collections are technically the same as tables, and documents correspond to rows in tables. If we go to the collections option and click on it, we will see the collections that were auto-generated on our behalf, courtesy of the schema file we uploaded.
  2. Fauna automatically creates indexes on our behalf: head over to the indexes option and see what indexes have been created for the API. Fauna is a document-oriented database and does not have the primary and foreign keys that relational databases use for search and indexing; instead, we create indexes in Fauna to help with data retrieval.
  3. Fauna automatically generates GraphQL queries and mutations, as well as API docs, on our behalf: this is one of my personal favorites, and I can't seem to get over just how efficiently Fauna does this. Fauna is able to intelligently generate some queries that it thinks you might want in your newly created API. Head back over to the GraphQL option and click on the "Docs" tab to open up the docs in the playground.

As you can see, two queries and a handful of mutations have already been auto-generated (even though we did not add them to our schema file); you can click on each one in the docs to see the details.

Testing our server

Let's test out some of these queries and mutations from the playground; we will also use our server outside of the playground (by the way, it is a fully functional GraphQL server).

Testing from the Playground

  1. We will start off by creating a new user with the predefined createUser mutation, as follows:
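
A sketch of that mutation (the user's details are placeholders):

mutation {
  createUser(data: {
    name: "John Doe"
    email: "john@example.com"
    password: "pass1234"
  }) {
    _id
    name
  }
}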

If we go to the collections option and choose User, we should see our newly created entry (document, aka row) in our User collection.

  2. Let's create a new note and associate it with a user as the author via its document Ref ID, a special ID that Fauna generates for every document for the sake of references like this, much like a key in relational tables. To find the ID of the user we just created, simply navigate to the collection, and from the list of documents you should see the option (a copy icon) to copy the Ref ID:

Once you have this, you can create a new note and associate it as follows:
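
A sketch using Fauna's connect syntax for relations (the Ref ID below is a placeholder for the one you copied):

mutation {
  createNote(data: {
    title: "My first note"
    content: "Hello, Fauna!"
    author: { connect: "267089545651421720" }
  }) {
    _id
    title
  }
}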

  3. Let's make a query this time, to get data from the database. Currently, we can fetch a user by ID or fetch a note by its ID. Let's see that in action:
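
Using one of Fauna's auto-generated queries (the ID is a placeholder):

query {
  findUserByID(id: "267089545651421720") {
    name
    email
  }
}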

You may well have been thinking it: what if we wanted to fetch the info of all users? Currently we can't do that, because Fauna did not generate that query automatically for us, but we can update our schema. So let's add our custom query to the schema.gql file, as follows. Note that this is an update to the file, so don't clear everything out; just add this to it:
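
The addition would look something like this:

type Query {
  allUsers: [User!]
}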

Once you have added this, save the file and click on the update schema option in the playground to upload the file again. It should take a few seconds to update; once it's done, we will be able to use our newly created query, as follows:
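
A sketch of that query; note that Fauna paginates results, so the users come wrapped in a data field:

query {
  allUsers {
    data {
      name
      email
    }
  }
}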

Don't forget that, as opposed to having all the info about users served (namely: name, email, password), we can choose which fields we want, because it's GraphQL (and not just that, it's Fauna's GraphQL), so feel free to specify more fields if you want.

Testing from outside the playground – using Python (requests library)

Now that we've seen that our API works from the playground, let's see how we can actually use it from an application outside the playground environment, using Python's requests library. If you don't have it installed, kindly install it using pip as follows:

pip install requests
  • Before we write any code, we need to get our API key from Fauna, which is what will allow us to communicate with our API from outside the playground. Head over to Security on your dashboard, and on the Keys tab select the option to create a new key; it should bring up a form like this:

Leave the database option as the current one, change the role of the key from Admin to Server, and then save. It'll generate a new key secret that you must copy and save somewhere safe, most likely as an environment variable.

  • For this, I'm going to create a simple script to demonstrate. Add a new file, call it whatever you wish (I'm calling mine test.py), to your current working directory or anywhere you wish. In this file we'll add the following:
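
A sketch of the script's first half (the environment variable name and the user ID are placeholders):

import os

import requests

# Fauna's GraphQL endpoint, as shown in the GraphQL Playground.
url = "https://graphql.fauna.com/graphql"

# One of Fauna's auto-generated queries: fetch a user by ID.
query = """
query {
  findUserByID(id: "267089545651421720") {
    name
    email
  }
}
"""

# The key secret we saved earlier, loaded from an environment variable.
token = os.environ.get("FAUNA_SECRET")
headers = {"Authorization": f"Bearer {token}"}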

Here we add a couple of imports, including the requests library, which we use to send the requests, as well as the os module, used here to load our environment variables; this is where I stored the Fauna secret key we got from the previous step.

Note the URL where the request is to be sent; it is taken from the Fauna GraphQL Playground.

Next, we create the query to be sent. This example shows a simple fetch query that finds a user by ID (one of the automatically generated queries from Fauna). We then retrieve the key from the environment variable and store it in a variable called token, and we create a dictionary to represent our headers. This is, after all, an HTTP request, so we can set headers here; in fact, we have to, because Fauna will look for our secret key in the headers of our request.

  • The concluding part of the code shows how we use the requests library to create the request, as follows:
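
Continuing the sketch:

# Send the query to Fauna and inspect the result.
response = requests.post(url, json={"query": query}, headers=headers)

if response.status_code == 200:
    print(response.json())
else:
    print(f"Request failed: {response.status_code} {response.text}")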

We create a request object and check whether the request went through via its status_code, printing the response from the server if it did and an error message otherwise. Let's run test.py and see what it returns.

Conclusion

In this article, we covered creating GraphQL servers from scratch and looked at creating servers right from Fauna without having to do much work. We also saw some of the awesome, cool perks that come with using the serverless system that Fauna provides, and we went on to see how we could test our servers and validate that they work.

Hopefully, this was worth your time and taught you a thing or two about GraphQL, serverless, Fauna, Flask, and text analytics. To learn more about Fauna, you can also sign up for a free account and try it out yourself!


By: Adesina Abdrulrahman


The post How to Build a GraphQL API for Text Analytics with Python, Flask and Fauna appeared first on CSS-Tricks.


Learn How to Deploy a Django Application

In this post, we'll go over the steps to deploy a Django application on a production server. I am using an AWS EC2 server with an Ubuntu 20.04 instance and Python 3.8. The steps are the same for most versions of Ubuntu and Python; however, the syntax might differ based on the versions you are using.

Steps

  1. Install Apache2.
  2. List out the project's folder and file paths.
  3. Collect static files.
  4. Migrate the database.
  5. Change the permission and ownership of the database files and other folders.
  6. Make changes in the Apache config file.
  7. Enable the site.
  8. Install WSGI mod in Apache2.
  9. Restart the Apache Server.

Step 1: Install Apache 2

The following are the commands to install the Apache 2 server on the Ubuntu instance:
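
The standard Ubuntu package installation:

sudo apt update
sudo apt install apache2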

Django Async: What’s New and What’s Next?

Starting with Django 3.1, the latest version that dropped a couple of weeks ago, Django now supports a fully asynchronous request path. This is exciting for everyone who's been waiting on the edge of their seats ever since Andrew Godwin's DEP 0009 was approved by the Django Technical Board in July 2019. Read on to learn what this release means if you have a Django application in production and are looking to add async support. At DeepSource, we're working on adding more Django issues to our Python analyzer, which will also include async-specific bug risks and anti-patterns.

Support for Asynchronous Views and Middleware

In Django 3.1, async features are now supported across the request-response cycle. This means you can define fully asynchronous views using the async keyword:
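
A minimal sketch of such a view (the names are illustrative):

import asyncio

from django.http import JsonResponse

async def healthcheck(request):
    # Asynchronous I/O can be awaited directly inside the view.
    await asyncio.sleep(0.1)
    return JsonResponse({"status": "ok"})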

Tornado Web Server on Elastic Beanstalk

Using Tornado Web server for Python on Elastic Beanstalk isn't quite straightforward. There isn't much information on the internet, either. The objective of this article is to be a guide and reference.

Tornado Web Server uses non-blocking IO to support thousands of connections in parallel. It is different from most Python web frameworks. It is not based on WSGI, and it is typically run with only one thread per process. Tornado is integrated with the standard library asyncio module and shares the same event loop (by default since Tornado 5.0). In general, libraries designed for use with asyncio can be mixed freely with Tornado.
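
For reference, a minimal Tornado application looks like this (the handler and port are illustrative):

import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Hello from Tornado")

if __name__ == "__main__":
    app = tornado.web.Application([(r"/", MainHandler)])
    app.listen(8888)  # one single-threaded, non-blocking event loop
    tornado.ioloop.IOLoop.current().start()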

How to Set Up Django With Postgres, Nginx, and Gunicorn on Ubuntu 16.04

Introduction

Django is a free, open-source, high-level Python web framework that encourages rapid development and clean, pragmatic design. It follows an MVC-style (Model-View-Template) architecture and is maintained by the Django Software Foundation. Django is a strong web framework that can help you get your application online as quickly as possible. The primary goal of Django is to ease the creation of complex, database-driven websites. Django supports four major database backends: PostgreSQL, MySQL, SQLite, and Oracle.

You can run Django in conjunction with Apache or Nginx using WSGI, with Gunicorn, or with Cherokee using a Python module.
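
For example, serving a Django project with Gunicorn typically looks like this (myproject is a placeholder for your project name):

gunicorn --bind 0.0.0.0:8000 myproject.wsgi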