Getting Started With Apache Iceberg

With the amount of data produced on a daily basis continuing to rise, so too does the number of data points that companies collect. Apache Iceberg was developed as an open table format to help manage and query these large analytical datasets.

This Refcard introduces you to Apache Iceberg: it walks through the history of the project's inception, dives into key methods and techniques, and provides hands-on examples to help you get started and get involved with the Iceberg community.

Part 2 – How to Hive on GCP Using Google Dataproc and Cloud Storage

In part 1 of this series, we saw how to create a Google Dataproc cluster, create external tables in Hive that point to data stored on Cloud Storage, and perform exploratory data analysis in a staging environment. As part of this analysis, we found that our sample datasets had around:

  • ~11% non-conforming records in the Green Taxi Y2019 dataset
  • ~33% non-conforming records in the Yellow Taxi Y2019 dataset

Identifying non-conforming records is an important step in exploratory data analysis, as such records can lead to wrong or faulty interpretation of results. So, as the next step, we will create a new environment, i.e., new external tables in Hive that hold only the valid data required for deep-dive analysis, eliminating the non-conforming records, as sketched below.
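A minimal sketch of that step, assuming a PySpark session on the Dataproc cluster with Hive support enabled; the table names, columns, validity rules, and Cloud Storage path are hypothetical placeholders rather than the article's actual schema:

from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("green-taxi-cleanup")
    .enableHiveSupport()
    .getOrCreate()
)

# New external table for the "clean" environment, backed by Cloud Storage.
spark.sql("""
    CREATE EXTERNAL TABLE IF NOT EXISTS green_taxi_2019_clean (
        pickup_datetime   TIMESTAMP,
        dropoff_datetime  TIMESTAMP,
        passenger_count   INT,
        trip_distance     DOUBLE,
        total_amount      DOUBLE
    )
    STORED AS PARQUET
    LOCATION 'gs://my-staging-bucket/clean/green_taxi_2019/'
""")

# Keep only rows that pass basic validity checks; the non-conforming
# records identified during exploratory analysis are filtered out here.
spark.sql("""
    INSERT OVERWRITE TABLE green_taxi_2019_clean
    SELECT pickup_datetime, dropoff_datetime, passenger_count,
           trip_distance, total_amount
    FROM   green_taxi_2019_staging
    WHERE  trip_distance > 0
      AND  total_amount > 0
      AND  dropoff_datetime > pickup_datetime
""")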

How to Hive on GCP Using Google Dataproc and Cloud Storage: Part 1

Google Cloud Dataproc is a managed Spark and Hadoop service that lets you take advantage of open-source data tools for batch processing, querying, streaming, and machine learning. This includes the Hadoop ecosystem: HDFS, the MapReduce processing framework, and a number of applications built on top of Hadoop, such as Hive, Mahout, Pig, Spark, and Hue. Hive provides a SQL-like interface to query data stored in the various databases and file systems that integrate with Hadoop. Queries submitted via Hive are converted into MapReduce jobs that access the stored data; the results are then aggregated and returned to the user or application.

For this exercise, we will be using New York City's yellow and green taxi trip data accumulated for the year 2019. Yellow Taxis are the only vehicles licensed to pick up street-hailing passengers anywhere in NYC, while Green Taxis provide street-hail and prearranged service in northern Manhattan (above E 96th St and W 110th St) and in the outer boroughs. The dataset is available at the city's data portal.
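To make the workflow concrete, here is a hedged sketch of submitting a Hive query to a Dataproc cluster from Python, assuming the google-cloud-dataproc client library (v2+); the project, region, cluster name, bucket path, and table definition are placeholders, not the article's actual setup:

from google.cloud import dataproc_v1

REGION = "us-central1"  # placeholder region
job_client = dataproc_v1.JobControllerClient(
    client_options={"api_endpoint": f"{REGION}-dataproc.googleapis.com:443"}
)

# External table over raw CSVs in Cloud Storage, plus a quick sanity query.
queries = [
    """
    CREATE EXTERNAL TABLE IF NOT EXISTS yellow_taxi_2019 (
        pickup_datetime STRING,
        dropoff_datetime STRING,
        passenger_count INT,
        trip_distance DOUBLE,
        total_amount DOUBLE
    )
    ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
    LOCATION 'gs://my-staging-bucket/raw/yellow_taxi_2019/'
    """,
    "SELECT COUNT(*) FROM yellow_taxi_2019",
]

job = {
    "placement": {"cluster_name": "my-dataproc-cluster"},  # placeholder
    "hive_job": {"query_list": {"queries": queries}},
}

operation = job_client.submit_job_as_operation(
    request={"project_id": "my-project", "region": REGION, "job": job}
)
print(operation.result().driver_output_resource_uri)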

5 Essential Diagnostic Views to Fix Hive Queries

A perpetual debate rages about the effectiveness of the modern-day data analyst in a distributed computing environment. Analysts are used to SQL returning answers to their questions in short order, and the typical RDBMS user is often unable to comprehend the root cause when queries don't return results for hours. Opinions are divided, despite broad acceptance that query engines such as Hive and Spark are complex even for the best engineers. At Acceldata, we see full table scans run on multi-terabyte tables just to get a row count, which is, to say the least, taboo in the Hadoop world. What results is a frustrating conversation between cluster admins and data users, one that lacks supporting evidence because that evidence is hard to collect. Yet data needs to be converted into insights to drive business decisions, and, more importantly, the value in Big Data needs to be unlocked without delay.

From here, we start at the point where the Hadoop admin/engineer is ready to unravel the scores of available metrics and interpret why a query performs poorly and takes resources away from the rest of the cluster.

When Small Files Crush Big Data — How to Manage Small Files in Your Data Lake

Big Data faces an ironic small file problem that hampers productivity and wastes valuable resources.

If not managed well, it slows down the performance of your data systems and leaves you with stale analytics. This kind of defeats the purpose, doesn't it? HDFS stores small files inefficiently, leading to wasteful Namenode memory utilization and excess RPC calls, degraded block-scanning throughput, and reduced application-layer performance. If you are a big data administrator on any modern data lake, you will invariably come face to face with the problem of small files. Distributed file systems are great, but let's face it: the more files you split your data across, the greater the overhead of reading them. So the idea is to optimize file sizes to best serve your use case while also actively optimizing your data lake.
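One common mitigation is a periodic compaction job that rewrites many small files into fewer, larger ones. Here is a rough PySpark sketch of the idea; the paths, file format, and output file count are placeholders and would need to be tuned to your actual data volume and block size:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("small-file-compaction").getOrCreate()

SOURCE = "hdfs:///data/events/2019/12/"            # directory of many small files
TARGET = "hdfs:///data/events_compacted/2019/12/"  # compacted output

# Read the small files and rewrite them as a handful of larger files.
# coalesce() avoids a full shuffle; pick the partition count so that each
# output file lands near the HDFS block size (e.g., 128-256 MB).
(
    spark.read.parquet(SOURCE)
         .coalesce(8)                # placeholder; derive from data volume
         .write.mode("overwrite")
         .parquet(TARGET)
)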

Deep Dive Into Join Execution in Apache Spark

Join operations are often used in a typical data analytics flow to correlate two data sets. Apache Spark, being a unified analytics engine, provides a solid foundation for executing a wide variety of join scenarios.

At a very high level, a join operates on two input data sets: each data record belonging to one input is matched against the data records belonging to the other, according to a given join condition. On finding a match (or a non-match, depending on the type of join), the operation can output either an individual matched record from one of the two data sets or a joined record, where the joined record represents the combination of the matched records from both data sets.
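As a tiny illustration of that matching behavior in PySpark (the data and column names are made up for this example):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("join-demo").getOrCreate()

orders = spark.createDataFrame(
    [(1, "laptop"), (2, "phone"), (3, "monitor")],
    ["customer_id", "product"],
)
customers = spark.createDataFrame(
    [(1, "Alice"), (2, "Bob"), (4, "Dana")],
    ["customer_id", "name"],
)

# Inner join: only records whose customer_id matches on both sides survive,
# and each output row is the "joined record" combining the two inputs.
orders.join(customers, on="customer_id", how="inner").show()

# Left anti join: outputs individual records from `orders` that found
# no match in `customers` (the non-match case mentioned above).
orders.join(customers, on="customer_id", how="left_anti").show()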

Create a Scale-Out Hive Cluster With a Distributed, MySQL-Compatible Database

Hive Metastore supports various backend databases, among which MySQL is the most commonly used. However, in real-world scenarios, MySQL's shortcoming is obvious: as Hive metadata grows, a standalone MySQL instance can no longer keep up, and clustering individual MySQL databases drastically increases operational complexity. In scenarios with huge amounts of metadata (for example, a single table with more than 10 million or even 100 million rows), MySQL is not a good choice.

We had this problem, and our migration story proves that TiDB, an open-source distributed Hybrid Transactional/Analytical Processing (HTAP) database, is a perfect solution in these scenarios.
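Because TiDB speaks the MySQL wire protocol, clients that already talk to a MySQL-backed metastore can connect to it unchanged. A hedged sketch from Python: the host, credentials, and schema name are placeholders, and the TBLS table assumes a standard Hive metastore schema (TiDB's default SQL port is 4000):

import pymysql

# Hypothetical connection details for a TiDB cluster hosting the metastore.
conn = pymysql.connect(
    host="tidb.internal.example.com",
    port=4000,                  # TiDB's default MySQL-protocol port
    user="hive",
    password="secret",
    database="hive_metastore",  # placeholder metastore schema name
)
try:
    with conn.cursor() as cur:
        # Count the tables registered in the metastore, exactly as one
        # would against a MySQL backend.
        cur.execute("SELECT COUNT(*) FROM TBLS")
        print(cur.fetchone())
finally:
    conn.close()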

Introducing Wormhole: Fast Dockerized Presto and Alluxio Setups

Just like a real wormhole, this tool is all about speed.

This blog introduces Wormhole, an open-source Dockerized solution for deploying Presto and Alluxio clusters for blazing-fast analytics on file systems and object stores (we use S3, GCS, and OSS). When it comes to analytics, people are generally hands-on with writing SQL queries and love to analyze data that resides in a warehouse (e.g., a MySQL database). But as data grows, these stores start to struggle, and there arises a need to get results in the same or a shorter time frame. This can be solved by distributed computing, and Presto is designed for exactly that. When attached to Alluxio, it works even faster. That's what Wormhole is all about.
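Once the Wormhole containers are up, queries go to the Presto coordinator as usual. Here is a hedged sketch of doing that from Python with PyHive; the host, port, catalog, schema, and table name are placeholders, not part of the Wormhole project itself:

from pyhive import presto

# Connect to the Presto coordinator exposed by the Docker setup
# (placeholder host/port) and query a table in the Hive catalog.
conn = presto.connect(
    host="localhost",
    port=8080,
    catalog="hive",
    schema="default",
)
cur = conn.cursor()
cur.execute("SELECT COUNT(*) FROM events")   # hypothetical table
print(cur.fetchall())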


Here is the high-level architecture diagram of the solution:

Data Orchestration: What Is It, and Why Is It Important?

I first heard the term "data orchestration" earlier this year at a technical meetup in the San Francisco Bay Area. The presenter was Bin Fan, founding engineer and PMC maintainer of the Alluxio open source project.

Bin explained that data orchestration is a relatively new term. A data orchestration platform, he said, "brings your data closer to compute across clusters, regions, clouds, and countries." 

One Challenge With 10 Solutions

The technologies we use for data analytics have evolved a lot recently. Good old relational database systems are becoming less popular every day. Now, we have to find our way through several new technologies that can handle big (and streaming) data, preferably in distributed environments.

Python is all the rage now, but of course there are lots of alternatives as well. SQL will always shine, and some other oldies-but-goldies, which we should never underestimate, are still out there.

Getting Started With EMR Hive on Alluxio in 10 Minutes

Find out what the buzz around working with Hive and Alluxio is all about.

This tutorial describes the steps to set up an EMR cluster with Alluxio as a distributed caching layer for Hive and to run sample queries that access data in S3 through Alluxio.


Prerequisites

  • Install the AWS command line tool on your local laptop. If you are running Linux or macOS, it is as simple as running pip install awscli.
  • Create an EC2 key pair from the EC2 console if you don’t have an existing one.

Step 1: Create an EMR Cluster

First, let's create an EMR cluster with Hive as its built-in application and Alluxio as an additional application installed through bootstrap scripts. The cluster creation command submits a request to create such a cluster with one master and two worker instances running on EC2. Remember to replace “alluxio-aws-east” with your AWS keypair name, and “m4.xlarge” with the EC2 instance type you’d like to use. Check out this page for more details on this bootstrap script.
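As a rough, hedged illustration of the same request (not the article's exact command), an equivalent cluster can also be requested from Python with boto3; the cluster name, release label, region, instance counts, and bootstrap script path below are placeholders:

import boto3

emr = boto3.client("emr", region_name="us-east-1")

response = emr.run_job_flow(
    Name="emr-hive-alluxio",                 # placeholder cluster name
    ReleaseLabel="emr-5.29.0",               # placeholder EMR release
    Applications=[{"Name": "Hive"}],
    Instances={
        "InstanceGroups": [
            {"InstanceRole": "MASTER", "InstanceType": "m4.xlarge", "InstanceCount": 1},
            {"InstanceRole": "CORE", "InstanceType": "m4.xlarge", "InstanceCount": 2},
        ],
        "Ec2KeyName": "alluxio-aws-east",    # replace with your key pair
        "KeepJobFlowAliveWhenNoSteps": True,
    },
    BootstrapActions=[
        {
            "Name": "Install Alluxio",
            "ScriptBootstrapAction": {
                # Placeholder path to the Alluxio bootstrap script.
                "Path": "s3://my-bucket/alluxio-emr-bootstrap.sh",
            },
        }
    ],
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
print(response["JobFlowId"])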

Running Alluxio-Presto Sandbox in Docker

The Alluxio-Presto sandbox is a Docker application featuring installations of MySQL, Hadoop, Hive, Presto, and Alluxio. The sandbox lets you easily dive into an interactive environment where you can explore Alluxio, run queries with Presto, and see the performance benefits of using Alluxio in a big data software stack.

In this guide, we’ll be using Presto and Alluxio to showcase how Alluxio can improve Presto’s query performance by caching our data locally so that it can be accessed at memory speed!