Building an E-Commerce API Using NestJS, SQLite, and TypeORM

Introduction

NestJS is a cutting-edge Node.js framework for developing server-side applications that are efficient, dependable, and scalable. It is simple to integrate with NoSQL and SQL databases such as MongoDB, Yugabyte, SQLite, Postgres, MySQL, and others, and it supports popular object-relational mappers such as TypeORM, Sequelize, and Mongoose.

In this tutorial, we'll create an e-commerce application with SQLite and TypeORM. We'll also look at Arctype, a powerful SQL client and database management tool.

7 Must-Haves For Ultimate AWS Security

AWS makes our lives easier in many ways. But, as often happens, in an attempt to address every possible need, it has ended up with too many features to keep an eye on. Newcomers or small teams without a dedicated AWS admin may get lost or spend too much time managing and configuring it.

In our new series, we want to help everyone set up an AWS account completely from scratch.
We will be sending you to the AWS docs quite often. Our goal for this blog series is to gather all the useful links in one place and to point you to details that may have escaped your attention before.

Intrusion Detection vs. Intrusion Prevention: The Beginner’s Guide to IPS and IDS

I think you'll agree with me when I say that the words "database" and "intrusion" are not words you want to hear in the same sentence. Databases house critical information that needs to be kept private for our businesses and our customers - we can't have criminals exploiting gaps in our security and accessing this information.

Fortunately, we can take steps to stop intruders from getting in and to catch them if they manage to sneak in anyway. Let's learn about these methods and how we can put them to good use to protect our valuable data. We'll explore two systems - the IPS and the IDS - and take a look at how they compare and what you should think about when implementing them.

Querying SQL Databases With PySpark

SQL is a powerful language that provides a deep understanding of what can and cannot be done with data. SQL excels at bringing order to disorganized, large data sets and helps you discover how distinct data sets are related. Spark is an open-source analytics engine for processing large amounts of data (what you might call "big data").

It allows us to take full advantage of distributed computing when carrying out time-intensive operations on lots of data, or even when building ML models. PySpark is a Python application programming interface that allows us to use Apache Spark in Python. Querying SQL databases with PySpark thus lets us take advantage of Spark's implicit data parallelism and fault tolerance from a Python interface. This gives us the ability to process large quantities of data quickly.

Choose the Right Model: Comparing Relational, Document, and Graph Databases

What exactly is a database model? A database model is nothing more than the structure a database developer has chosen to store information. A database model also spells out the relationships between different parts of the dataset, and any limitations that may restrict read or write access.

Individual databases are designed based on the rules and concepts of the broader data model the designers adopt. Developers often use these models to strategically design databases for larger-scale real-world projects such as:

How to Use WebSockets with AWS Serverless

In this guide, we are going to see how we can use WebSockets with the AWS serverless framework and Node.js. By the end of this guide, we will have an application where we can create a custom chat room that other users can join to chat with each other. I made the procedure very simple to follow, and at the end of this post, you will also get a link to the GitHub repository for the code.

Project Setup

The first step is to set up the project: create a new folder, then install the required dependencies by running the commands below in the root of the project folder.

Using Cursors and Loops in MySQL

If you've ever wanted to learn how to write a MySQL cursor or a MySQL loop, you've come to the right place. Let's iterate!

Consider loops in general programming. They help you execute a specific sequence of instructions repeatedly until a particular condition breaks the loop. MySQL also provides a way to execute instructions on individual rows using cursors. Cursors in MySQL will execute a set of instructions on rows returned from SQL queries.

Properties of MySQL Cursors

  • Non-Scrollable: You can only iterate through rows in one direction. You can't skip a row; you can't jump to a row; you can't go back to a row.
  • Read-only: You can't update or delete rows using cursors.
  • Asensitive: MySQL cursors point to the underlying data, which makes them run faster than insensitive cursors. Insensitive cursors point to a snapshot of the underlying data, which makes them slower than asensitive cursors.

Creating a MySQL Cursor

To create a MySQL cursor, you'll need to work with the DECLARE, OPEN, FETCH, and CLOSE statements.
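
To give a feel for how those statements fit together, here is a minimal sketch; the orders table, its total column, and the procedure name are hypothetical:

    DELIMITER $$

    CREATE PROCEDURE sum_order_totals(OUT grand_total DECIMAL(10,2))
    BEGIN
        DECLARE done INT DEFAULT FALSE;
        DECLARE current_total DECIMAL(10,2);
        -- DECLARE the cursor over the query whose rows we want to walk
        DECLARE order_cursor CURSOR FOR SELECT total FROM orders;
        -- Fires when FETCH runs out of rows, so the loop can exit
        DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;

        SET grand_total = 0;
        OPEN order_cursor;
        read_loop: LOOP
            FETCH order_cursor INTO current_total;
            IF done THEN
                LEAVE read_loop;
            END IF;
            SET grand_total = grand_total + current_total;
        END LOOP;
        CLOSE order_cursor;
    END$$

    DELIMITER ;

Calling CALL sum_order_totals(@total); and then SELECT @total; walks every row and accumulates the totals one FETCH at a time.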

Analyzing Scans in PostgreSQL

Introduction and Data Setup

Before we dive in, it is vital to understand the basic building blocks of PostgreSQL query plans. This has been covered in a separate blog post, and I highly encourage readers to go through it first.

There are several node types in PostgreSQL query plans; scan nodes are the focus of this post.
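
As a quick, hedged illustration of the scan nodes we will analyze (the users table and index here are hypothetical):

    -- A predicate on an unindexed column typically yields a Seq Scan node
    EXPLAIN SELECT * FROM users WHERE email = 'a@example.com';

    -- After indexing the column, the planner may pick an Index Scan instead
    CREATE INDEX idx_users_email ON users (email);
    EXPLAIN SELECT * FROM users WHERE email = 'a@example.com';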

Temporary Tables in MySQL: A High-level Overview

Anyone who has done substantial work with MySQL has probably noticed how big data affects MySQL databases — most likely through partitioning nuances or a couple of index-related quirks. However, another important feature MySQL offers for big data purposes is the ability to create temporary tables. In this blog post, we are going to go into more detail on this subject.

What Are Temporary Tables?

In MySQL, a temporary table is a special type of table that (you guessed it) holds temporary data. These kinds of tables are usually created automatically and typically come into play only when certain kinds of problems arise - for example, when ALTER TABLE statements are run on vast sets of data.
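
As a rough sketch of the manual variant (the orders table and its columns are hypothetical), a temporary table is created much like an ordinary one but lives only for the current session:

    -- Visible only to this session; dropped automatically on disconnect
    CREATE TEMPORARY TABLE top_customers AS
        SELECT customer_id, SUM(amount) AS total_spent
        FROM orders
        GROUP BY customer_id
        ORDER BY total_spent DESC
        LIMIT 100;

    SELECT * FROM top_customers;

Because the table is session-scoped, two connections can each create their own top_customers without clashing.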

Why You Need To Backup Your Postgres Database and How To Do It

Do you remember the last time you worked hard on an essay or paper, only to lose it all when Word or your computer suddenly crashed? Hours of work are gone because you didn't hit the save button often enough. That sort of frustration could drive even the bravest of souls to tears.

You might be able to afford this kind of data loss once in a while, but now imagine it's an entire database with terabytes of information—not just one document. Especially if that information is in a production environment, the damage could be catastrophic, costing millions of dollars. Not even mature organizations can afford these mistakes, and database backups prevent precisely this kind of situation.

Probing Text Data Using PostgreSQL Full-Text Search

Introduction

Recap

In the previous article, we saw how we could use fuzzy search in PostgreSQL. This article is more about using the power of full-text indexes to search through large text data. This sets them apart from other PostgreSQL index types, which work with alphanumeric values or smaller textual content.
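
To preview the mechanics before preparing the data, here is a hedged sketch; the articles table and its body column are hypothetical:

    -- Build a tsvector from the document and match it against a tsquery
    SELECT title
    FROM articles
    WHERE to_tsvector('english', body) @@ to_tsquery('english', 'database & index');

    -- A GIN index over the same expression keeps these lookups fast
    CREATE INDEX idx_articles_body_fts
        ON articles USING GIN (to_tsvector('english', body));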

Dataset Preparation

To understand how full-text search works, we are going to utilize the same data set we saw in the PostgreSQL Fuzzy Search blog -

Optimizing MySQL and MariaDB for TEXT: A Guide

Introduction

If you frequently find yourself working with relational database instances, especially MariaDB or MySQL, you probably already know the nuances of a couple of the data types they offer. Some of MySQL's data types are suited for numbers; others fit variable-length values, some of which are also good fits for text-based values.

What Data Types Are Available in MySQL?

Before explaining how to tune MySQL instances for specific (in this case, TEXT) data types, we must go over some of the data types offered by MySQL so you understand how everything works in the first place. When it comes to data types, MySQL offers a few categories to choose from:

The Basics of MySQL Query Caching

Introduction

Queries are ubiquitous in the life of every MySQL database administrator, and even of the database-savvy developer. As we have already stated in some of our previous blog posts, queries are simply tasks composed of smaller tasks. To optimize their performance, we should make those smaller tasks execute quicker or not execute at all. First, we must examine how MySQL performs its queries. We have already covered the basics of what makes queries slow in MySQL, and we came down to the fact that we need to profile our queries - the query cache was one of the first things that MySQL looked at, remember?

What Is the Query Cache?

The MySQL query cache, though deprecated in MySQL 5.7 (and removed in 8.0), stores statements that have previously been run, along with their results, in memory: in other words, the query cache usually stores SELECT statements and their result sets in the memory of a database. Therefore, if we run a query and then run precisely the same query again after a while, the results will be returned faster because they will be retrieved from memory rather than from disk.
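
On a MySQL 5.7 or earlier server, a quick way to check whether the cache is available and how it is behaving looks roughly like this:

    -- Is the server built with query cache support? (MySQL 5.7 and earlier)
    SHOW VARIABLES LIKE 'have_query_cache';

    -- Current configuration (size and limits) and runtime hit statistics
    SHOW VARIABLES LIKE 'query_cache%';
    SHOW STATUS LIKE 'Qcache%';

A growing Qcache_hits counter indicates that repeated SELECT statements really are being served from memory.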

Slow Query Basics: Why Are Queries Slow?

Introduction

If you've ever worked with databases, you have probably encountered queries that seem to take longer to execute than they should. Queries can become slow for various reasons, ranging from improper index usage to bugs in the storage engine itself. In most cases, however, queries become slow because developers or MySQL database administrators neglect to monitor them and keep an eye on their performance. In this article, we will figure out how to avoid that.

What Is a Query?

To understand what makes queries slow and improve their performance, we have to start from the very bottom. First, we have to understand what a query fundamentally is - a simple question, huh? Yet many developers, and even very experienced database administrators, would fail to answer it.