On Some Aspects of Big Data Processing in Apache Spark, Part 4: Versatile JSON and YAML Parsers

In my previous post, I presented design patterns for programming Spark applications in a modular, maintainable, and serializable way. This time, I demonstrate how to configure versatile JSON and YAML parsers for use in Spark applications.

A Spark application typically needs to ingest JSON data, transform it, and save the result to a data source. YAML data, on the other hand, is needed primarily to configure Spark jobs. In both cases, the data must be parsed according to a predefined template. In a Java Spark application, these templates are POJOs. How can a single parser method be programmed to process a wide class of such POJO templates, with data taken from either local or distributed file systems?
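
To make the goal concrete, here is a minimal sketch of such a generic parser, assuming Jackson (with the jackson-dataformat-yaml module) and the Hadoop FileSystem API. The class name `PojoParser`, the method `parse`, and the file-extension check are illustrative assumptions, not the implementation developed in this series.

```java
import java.io.InputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.dataformat.yaml.YAMLFactory;

// Hypothetical helper: one generic method that deserializes JSON or YAML
// content into any POJO template class.
public final class PojoParser {

    // Jackson handles both formats: a plain ObjectMapper for JSON,
    // and an ObjectMapper built on a YAMLFactory for YAML.
    private static final ObjectMapper JSON_MAPPER = new ObjectMapper();
    private static final ObjectMapper YAML_MAPPER = new ObjectMapper(new YAMLFactory());

    private PojoParser() {}

    // Parses a file from a local or distributed file system (any scheme the
    // Hadoop FileSystem API resolves: file://, hdfs://, s3a://, ...)
    // into an instance of the given POJO class.
    public static <T> T parse(String uri, Class<T> pojoClass, Configuration hadoopConf)
            throws Exception {
        Path path = new Path(uri);
        FileSystem fs = path.getFileSystem(hadoopConf);
        // Assumption for this sketch: pick the mapper by file extension.
        ObjectMapper mapper = uri.endsWith(".yaml") || uri.endsWith(".yml")
                ? YAML_MAPPER
                : JSON_MAPPER;
        try (InputStream in = fs.open(path)) {
            return mapper.readValue(in, pojoClass);
        }
    }
}
```

A caller could then deserialize a job configuration with something like `PojoParser.parse("hdfs:///configs/job.yaml", JobConfig.class, spark.sparkContext().hadoopConfiguration())`, where `JobConfig` stands in for any POJO template. The rest of this post works out how to make such a parser versatile enough for a wide class of templates.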