Five Tips to Speed Up Your Skewed Joins in Apache Spark

Joins are one of the most fundamental transformations in a typical data processing routine. A Join operator makes it possible to correlate, enrich, and filter across two input datasets. The two input datasets are generally classified as a left dataset and a right dataset based on their placement relative to the Join clause/operator.

Fundamentally, a Join works on a conditional statement that includes a boolean expression comparing a left key, derived from a record of the left dataset, with a right key, derived from a record of the right dataset. The left and right keys are generally called ‘Join Keys’. The boolean expression is evaluated against each pair of records across the two input datasets. Based on the boolean output of that evaluation, the conditional statement includes a selection clause that selects either one of the records of the pair or a single record combining the two.
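
As a quick illustration (a minimal sketch, not taken from the original article), the snippet below expresses such a conditional Join with Spark's Java Dataset API. The dataset names, columns, and values ('employees', 'departments', 'deptId', 'id') are made up for the example, and the 'left_outer' join type is chosen only to show how records are selected on a non-match.

```java
import java.util.Arrays;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Encoders;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

import scala.Tuple2;

public class BasicJoinSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("joinConditionSketch").master("local[*]").getOrCreate();

        // Left dataset: (name, deptId) -- 'deptId' acts as the left Join key
        Dataset<Row> employees = spark.createDataset(
                Arrays.asList(new Tuple2<>("Asha", 10), new Tuple2<>("Ben", 20)),
                Encoders.tuple(Encoders.STRING(), Encoders.INT())).toDF("name", "deptId");

        // Right dataset: (id, deptName) -- 'id' acts as the right Join key
        Dataset<Row> departments = spark.createDataset(
                Arrays.asList(new Tuple2<>(10, "Sales"), new Tuple2<>(30, "HR")),
                Encoders.tuple(Encoders.INT(), Encoders.STRING())).toDF("id", "deptName");

        // Boolean join condition comparing the left key with the right key;
        // the join type decides what is selected for matches and non-matches.
        Dataset<Row> joined = employees.join(
                departments,
                employees.col("deptId").equalTo(departments.col("id")),
                "left_outer");   // keeps unmatched left records, padding the right side with nulls

        joined.show();
        spark.stop();
    }
}
```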

‘mapPartitions’ in Apache Spark: 5 Key Benefits

'mapPartitions' is the only narrow transformation provided by the Apache Spark framework for partition-wise processing, i.e., processing each data partition as a whole. All other narrow transformations, such as map and flatMap, process partitions record by record. If used judiciously, 'mapPartitions' can improve the performance and efficiency of the underlying Spark Job manifold.

'mapPartitions' provides an iterator over the partition data to the computing function and expects an iterator over a new data collection as the return value from that function. Below is the 'mapPartitions' API applicable to a Dataset of type <T>, expecting a functional interface of type 'MapPartitionsFunction' to process each data partition as a whole, along with an Encoder of type <U>, <U> representing the data type of the returned Dataset.
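
The signature shown in the comment reflects the Java-facing Dataset API; the usage sketch around it is illustrative, with the input values and the upper-casing logic being assumptions made for this example.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import org.apache.spark.api.java.function.MapPartitionsFunction;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Encoders;
import org.apache.spark.sql.SparkSession;

public class MapPartitionsSketch {

    // Signature on Dataset<T> (Java API):
    //   <U> Dataset<U> mapPartitions(MapPartitionsFunction<T, U> func, Encoder<U> encoder)

    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("mapPartitionsSketch").master("local[*]").getOrCreate();

        Dataset<String> input = spark.createDataset(
                Arrays.asList("alpha", "beta", "gamma", "delta"), Encoders.STRING());

        // The computing function receives an Iterator over the whole partition
        // and must return an Iterator over the transformed records.
        Dataset<String> upper = input.mapPartitions(
                (MapPartitionsFunction<String, String>) partition -> {
                    List<String> out = new ArrayList<>();  // per-partition buffer
                    while (partition.hasNext()) {
                        out.add(partition.next().toUpperCase());
                    }
                    return out.iterator();
                },
                Encoders.STRING());  // Encoder<U> describing the returned Dataset's type

        upper.show();
        spark.stop();
    }
}
```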

Identify and Resolve Stragglers in Your Spark Application

Stragglers are detrimental to the overall performance of Spark applications and lead to resource wastage on the underlying cluster. Therefore, it is important to identify potential stragglers in your Spark Job, find the root cause behind them, and apply the required fixes or put preventive measures in place.

What Is a Straggler in a Spark Application?

A straggler refers to a very slow-running Task belonging to a particular stage of a Spark application (every stage in Spark is composed of one or more Tasks, each computing a single partition out of the total partitions designated for the stage). A straggler Task takes exceptionally long to complete compared to the median or average time taken by the other tasks of the same stage. There could be multiple stragglers in a Spark Job, present either in the same stage or across multiple stages.
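
As a rough illustration of how such tasks could be spotted programmatically, here is a minimal sketch of a custom SparkListener that records per-stage task durations and compares them against the stage median. The class name, the median-based threshold, and the reporting format are assumptions for this example, not part of the original article; only the listener callback itself is Spark's public scheduler API.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.spark.scheduler.SparkListener;
import org.apache.spark.scheduler.SparkListenerTaskEnd;

// Records finished task durations per stage so that tasks running far beyond the
// stage median can be flagged as potential stragglers. Register it with
// spark.sparkContext().addSparkListener(new StragglerSpottingListener()).
public class StragglerSpottingListener extends SparkListener {

    private final Map<Integer, List<Long>> durationsByStage = new ConcurrentHashMap<>();

    @Override
    public void onTaskEnd(SparkListenerTaskEnd taskEnd) {
        durationsByStage
                .computeIfAbsent(taskEnd.stageId(),
                        id -> Collections.synchronizedList(new ArrayList<>()))
                .add(taskEnd.taskInfo().duration());
    }

    /** Prints, per stage, how many tasks ran longer than 'factor' times the stage median. */
    public void report(double factor) {
        durationsByStage.forEach((stageId, durations) -> {
            List<Long> sorted = new ArrayList<>(durations);
            Collections.sort(sorted);
            long median = sorted.get(sorted.size() / 2);
            long stragglers = sorted.stream().filter(d -> d > factor * median).count();
            System.out.printf("Stage %d: median task time %d ms, %d potential straggler(s)%n",
                    stageId, median, stragglers);
        });
    }
}
```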

Deep Dive Into Join Execution in Apache Spark

Join operations are often used in a typical data analytics flow to correlate two data sets. Apache Spark, being a unified analytics engine, also provides a solid foundation for executing a wide variety of Join scenarios.

At a very high level, a Join operates on two input data sets by matching each data record of one input data set against the data records of the other input data set. On finding a match or a non-match (as per a given condition), the Join operation can output either an individual matched record from one of the two data sets or a joined record. The joined record represents the combination of the matched individual records from both data sets.
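
To make the execution side concrete, the sketch below joins two small, made-up Datasets ('orders' and 'customers') and prints the physical plan via explain(), which reveals the join strategy Spark selected (e.g., BroadcastHashJoin or SortMergeJoin); the broadcast() hint at the end is an optional nudge toward a broadcast hash join. The data and column names are illustrative assumptions.

```java
import java.util.Arrays;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Encoders;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.functions;

import scala.Tuple2;

public class JoinExecutionSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("joinExecutionSketch").master("local[*]").getOrCreate();

        Dataset<Row> orders = spark.createDataset(
                Arrays.asList(new Tuple2<>(1, 101), new Tuple2<>(2, 102)),
                Encoders.tuple(Encoders.INT(), Encoders.INT())).toDF("orderId", "custId");

        Dataset<Row> customers = spark.createDataset(
                Arrays.asList(new Tuple2<>(101, "Asha"), new Tuple2<>(102, "Ben")),
                Encoders.tuple(Encoders.INT(), Encoders.STRING())).toDF("id", "name");

        // Each record of 'orders' is matched against records of 'customers' per the condition;
        // matched pairs are emitted as combined (joined) records.
        Dataset<Row> joined = orders.join(
                customers, orders.col("custId").equalTo(customers.col("id")), "inner");

        // The physical plan shows which join strategy Spark chose for this query.
        joined.explain();

        // A broadcast hint on the smaller input nudges Spark toward a broadcast hash join.
        orders.join(functions.broadcast(customers),
                orders.col("custId").equalTo(customers.col("id"))).explain();

        spark.stop();
    }
}
```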

Guide to Partitions Calculation for Processing Data Files in Apache Spark

The majority of Spark applications source the input data for their execution pipeline from a set of data files (in various formats). To facilitate reading data from files, Spark provides dedicated APIs for both raw RDDs and Datasets. These APIs abstract the reading process from data files into an input RDD or Dataset with a definite number of partitions. Users can then perform various transformations/actions on these input RDDs/Datasets.

Each partition of an input raw RDD or Dataset is mapped to one or more data files; the mapping covers either a part of a file or an entire file. During the execution of a Spark Job with an input RDD/Dataset in its pipeline, each partition of the input RDD/Dataset is computed by reading the data as per its mapping to the data file(s). The computed partition data is then fed to the dependent RDDs/Datasets further down the execution pipeline.
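
A minimal sketch of both read paths is shown below; the file path is hypothetical, and the 'spark.sql.files.maxPartitionBytes' setting is included only to show one knob that influences how file data is packed into Dataset partitions. getNumPartitions() reports how many partitions the resulting RDD/Dataset was given.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class FilePartitionsSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("filePartitionsSketch")
                .master("local[*]")
                // Upper bound on the bytes packed into one file-based partition (default 128 MB)
                .config("spark.sql.files.maxPartitionBytes", 128L * 1024 * 1024)
                .getOrCreate();

        // Dataset API: each partition maps to one or more chunks of the matched files.
        Dataset<Row> df = spark.read().option("header", "true").csv("/data/input/*.csv");
        System.out.println("Dataset partitions: " + df.rdd().getNumPartitions());

        // Raw RDD API: the second argument is only a suggested minimum number of partitions.
        int rddPartitions = spark.sparkContext()
                .textFile("/data/input/*.csv", 4)
                .getNumPartitions();
        System.out.println("RDD partitions: " + rddPartitions);

        spark.stop();
    }
}
```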