Image Classification Using SingleStore DB, Keras, and TensorFlow

Abstract

Image classification can have many practical, valuable and life-saving benefits. MNIST and, more recently, Fashion MNIST are often considered the "Hello World" of image classification. This article will use Fashion MNIST and store the images in a SingleStore DB database. We'll also build an image classification model using Keras and TensorFlow and store the prediction results in SingleStore DB. Finally, we'll build a quick visual front-end to our database system using Streamlit, enabling us to retrieve an image and determine whether the model identified it correctly.
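Before the predictions reach the database, each softmax output vector has to be reduced to a label and a confidence value. Below is a minimal sketch of that step; the class names are the standard Fashion MNIST labels, while the image id and the row layout are illustrative assumptions about the predictions table.

```python
# Post-process one model prediction into a row suitable for insertion.
# CLASS_NAMES are the standard Fashion MNIST class labels.
CLASS_NAMES = [
    "T-shirt/top", "Trouser", "Pullover", "Dress", "Coat",
    "Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot",
]

def prediction_row(image_id, probabilities):
    """Turn one softmax output vector into an (id, label, confidence) row."""
    best = max(range(len(probabilities)), key=probabilities.__getitem__)
    return (image_id, CLASS_NAMES[best], probabilities[best])

# Example: a fabricated softmax output that peaks at index 9 ("Ankle boot").
probs = [0.01] * 9 + [0.91]
row = prediction_row(42, probs)
print(row)  # (42, 'Ankle boot', 0.91)
```

Rows in this shape can then be bulk-inserted with any MySQL-compatible client.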

The SQL scripts, Python code and notebook files used in this article are available on GitHub. The notebook files are available in DBC, HTML and IPython formats.

SingleStore DB Loves R

Abstract

The R programming language is very popular with many Data Scientists. At the time of writing, R ranks as the 11th most popular programming language according to TIOBE.

R provides many compelling data manipulation capabilities, enabling data to be sliced and diced with ease. Data are often read into and written out of R programs using files, but R can also work with database systems. In this article, we'll see two quick examples of how Data Scientists can use R from Spark with SingleStore DB.

Using SingleStore DB to Map Crimes and Visualize Hot Routes

Abstract

Many great visualization techniques, such as kernel density mapping, can help us map and analyze crime concentrations. However, it may sometimes be more beneficial to visualize crime concentrations along a linear network, such as a bus route or a subway/underground line. Law enforcement agencies could use this to target resources at particular hot spots. To map these hot spots, we can use hot routes. Hot routes enable crimes to be mapped along sections of a linear network using thematic mapping (colour and line width).

An excellent tutorial illustrates hot routes, using R, for the Bakerloo Line on the London Underground. We'll adapt the example described in that tutorial to Python and apply it to SingleStore DB. In a previous article, we saw one approach to modelling and storing the London Underground network in SingleStore DB.
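The core step of the hot routes technique is to snap each crime location to its nearest segment of the linear network and count crimes per segment; line width and colour are then scaled from those counts. The sketch below shows that step with fabricated coordinates; in the article, the segments come from the London Underground network stored in SingleStore DB.

```python
import math

def point_segment_distance(p, a, b):
    """Euclidean distance from point p to the line segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment, clamping to its endpoints.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def count_per_segment(crimes, segments):
    """Assign each crime to its nearest segment and tally the counts."""
    counts = [0] * len(segments)
    for p in crimes:
        nearest = min(range(len(segments)),
                      key=lambda i: point_segment_distance(p, *segments[i]))
        counts[nearest] += 1
    return counts

segments = [((0, 0), (1, 0)), ((1, 0), (2, 0))]   # two sections of a line
crimes = [(0.2, 0.1), (1.8, -0.1), (1.9, 0.05)]   # fabricated crime locations
print(count_per_segment(crimes, segments))  # [1, 2]
```

The per-segment counts are what drive the thematic mapping of each section.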

Using SingleStoreDB for Full-Text Index and Search

Abstract

Continuing our exploration of the multi-model capabilities of SingleStoreDB, we'll discuss SingleStoreDB's support for Full-Text Index and Search in this article.

Using the example of medical journal articles from the SingleStore self-paced training course on Full-Text Index and Search, we'll store the text from journal articles and then perform a variety of queries using the full-text capabilities of SingleStoreDB.
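SingleStoreDB's full-text queries use the `MATCH ... AGAINST` form. The helper below assembles a relevance-ranked search of that shape; the table name (`articles`) and column (`abstract`) are assumptions standing in for the medical journal schema used in the training course.

```python
def full_text_query(table, column, terms):
    """Build a full-text search over `column`, ranked by relevance score."""
    expr = " ".join(terms)
    return (
        f"SELECT id, MATCH({column}) AGAINST('{expr}') AS score "
        f"FROM {table} "
        f"WHERE MATCH({column}) AGAINST('{expr}') "
        f"ORDER BY score DESC;"
    )

print(full_text_query("articles", "abstract", ["cancer", "treatment"]))
```

The `AGAINST` expression also accepts operators such as wildcards, which the article explores with further queries.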

Using SingleStoreDB as a JSON Document Database

Abstract

Continuing our series on the multi-model capabilities of SingleStoreDB, we'll discuss SingleStoreDB's support for JSON data in this article.

We'll build a small inventory system to model an online store that sells various electronic equipment. This example is derived from an excellent tutorial available on DigitalOcean. We'll apply that tutorial to SingleStoreDB, and we'll see that it is effortless to store, retrieve and query JSON data using SingleStoreDB. We'll also build a quick visual front-end to our inventory system using Laravel and PHP.
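A sketch of the kind of document the inventory system stores is shown below. The product structure, the fabricated brand, and the table/column names (`products`, `doc`) are illustrative assumptions; SingleStoreDB can hold such documents in a JSON column and reach nested fields with built-ins such as `JSON_EXTRACT_STRING`.

```python
import json

# An illustrative inventory document for the online electronics store.
product = {
    "name": "laptop",
    "brand": "Acme",          # fabricated brand
    "price": 999.99,
    "specs": {"ram_gb": 16, "storage": {"type": "ssd", "size_gb": 512}},
    "in_stock": True,
}

# Parameterised insert into an assumed JSON column.
insert_sql = "INSERT INTO products (doc) VALUES (%s);"
params = (json.dumps(product),)

# Querying a nested field with a keypath (illustrative):
query_sql = "SELECT JSON_EXTRACT_STRING(doc, 'specs', 'storage', 'type') FROM products;"

# Round-trip check that the document serializes cleanly.
assert json.loads(params[0])["specs"]["storage"]["size_gb"] == 512
print(query_sql)
```

Because the documents round-trip through standard JSON, any MySQL-compatible client can insert and query them.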

Using SingleStore as a Kafka Producer

Abstract

In a previous article series, we looked at how SingleStore Pipelines could be used to ingest data from a Kafka cluster running on the Confluent Cloud. We can also push data from SingleStore to a Kafka cluster. This is very easy to achieve and, in this article, we'll see how.

Introduction

SingleStore Pipelines are a compelling feature that can ingest data at scale. However, there may be situations where we would like to push data from SingleStore to an external source. Let's see how we can do this with Apache Kafka™. For ease of use, we'll develop mainly in the cloud using the SingleStore Managed Service and the Developer Duck plan on CloudKarafka.
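The push itself is done with SingleStore's `SELECT ... INTO KAFKA` statement. The helper below only assembles such a statement; the table, columns, broker host, and topic name are placeholder assumptions (CloudKarafka supplies the real broker addresses and topic names).

```python
def select_into_kafka(columns, table, broker, topic):
    """Assemble a SELECT ... INTO KAFKA statement for the given target."""
    cols = ", ".join(columns)
    return f"SELECT {cols} FROM {table} INTO KAFKA '{broker}/{topic}';"

stmt = select_into_kafka(["sensor_id", "temperature"],
                         "temperatures",
                         "my-broker.example.com:9092",
                         "my-topic")
print(stmt)
```

Running the generated statement from a SingleStore client sends each result row to the Kafka topic.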

How To Use SingleStore Pipelines With Kafka, Part 3 of 3

Abstract

This article is the third and final part of our Pipelines series. We'll look at replacing the Consumer part of our Producer-Consumer application with a compelling SingleStore feature called Pipelines.

The SQL scripts, Java code, and notebook files used in this article series are available on GitHub. The notebook files are available in DBC, HTML, and IPython formats.

How To Use SingleStore Pipelines With Kafka, Part 2 of 3

Abstract

In this second part of our Pipelines series, we'll write some Java code to simulate our sensors sending temperature readings. We'll then store these readings in a SingleStore database. A Producer application will generate and send the temperature readings to the Confluent Cloud. A Consumer will then read these values from the Confluent Cloud and connect to SingleStore using JDBC, where the sensor readings will be stored in our temperatures table.
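The Producer in the article is written in Java; the Python sketch below shows the same idea of simulating sensors that each emit a temperature reading. The sensor count, value range, and field layout are illustrative assumptions.

```python
import random

def generate_readings(num_sensors, seed=None):
    """Simulate one temperature reading per sensor."""
    rng = random.Random(seed)
    return [
        {"sensor_id": i, "temperature": round(rng.uniform(-30.0, 45.0), 2)}
        for i in range(num_sensors)
    ]

readings = generate_readings(5, seed=1)
print(readings)
```

Each reading would be serialized and sent to the Kafka topic, then written to the temperatures table by the Consumer.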

The SQL scripts, Java code, and notebook files used in this article series are available on GitHub. The notebook files are available in DBC, HTML, and IPython formats.

How To Use SingleStore Pipelines With Kafka, Part 1 of 3

Abstract

In this article series, we'll look at a compelling feature of SingleStore called Pipelines. This enables vast quantities of data to be ingested, in parallel, into a SingleStore database. We'll also see an example of how we can use this feature in conjunction with Apache Kafka™. This first article will focus on uploading some data into SingleStore using Spark. In a previous article, we noted that Spark was great for ETL with SingleStore. We'll also perform some analysis of the data. In the example application, we will simulate some sensors distributed globally that generate temperature readings, and these readings will be ingested into SingleStore through the Confluent Cloud. We'll implement a Producer-Consumer model using Java and JDBC, and then simplify this using SingleStore Pipelines.
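Ingesting from Kafka with Pipelines comes down to a `CREATE PIPELINE ... AS LOAD DATA KAFKA` statement followed by `START PIPELINE`. The helper below assembles statements of that shape; the pipeline name, broker address, topic, and table are placeholder assumptions.

```python
def kafka_pipeline(name, broker, topic, table):
    """Assemble the CREATE and START statements for a Kafka pipeline."""
    return [
        f"CREATE PIPELINE {name} AS "
        f"LOAD DATA KAFKA '{broker}/{topic}' "
        f"INTO TABLE {table};",
        f"START PIPELINE {name};",
    ]

stmts = kafka_pipeline("temperatures_pipeline",
                       "my-broker.example.com:9092",
                       "temperatures-topic",
                       "temperatures")
for stmt in stmts:
    print(stmt)
```

Once started, the pipeline ingests messages from the topic into the table in parallel, replacing the hand-written Consumer.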

The SQL scripts, Java code, and notebook files used in this article series are available on GitHub. The notebook files are available in DBC, HTML, and IPython formats.

An Example of Pushdown Using SingleStore and Spark

Abstract

In this article series, we'll look at an example of query pushdown when using the SingleStore Spark Connector. This first article will load some weather data into SingleStore using Databricks Community Edition (CE).

The notebook files used in this article series are available on GitHub in DBC, HTML, and IPython formats.