Scala Interview Questions

Let me guess: you have a Scala interview scheduled, and you are here to find out what's in store for you. That is probably why you opened this blog, so without further ado, let's get started.

What Is Scala?

Scala is a high-level, statically typed language that supports object-oriented as well as functional programming concepts. Scala compiles to JVM bytecode, so it interoperates seamlessly with Java.

Tracking Changes in MongoDB With Scala and Akka

Need for Real-Time Consistent Data

Many different databases are used at Adform, each tailored to specific requirements, but common to all of these use cases is the necessity of a consistent interchange of data between the data stores. It is a tedious task to keep the origin of that data and its copies consistent manually, not to mention that after enough rounds of duplication the origin may no longer be the source of truth. The need for each system to have its own copy of the data is also dictated by loose coupling and performance: it would not be practical to be constantly impacted by every change made in the source system. The answer here is an event-based architecture, which keeps every change consistent and makes it possible to restore the sequence of changes related to particular entities. For those reasons, the decision was made to use the publisher/subscriber model. MongoDB's change streams saved the day, finally letting us say farewell to much more complex oplog tailing.

Change Streams

As of version 3.6, MongoDB offers a change data capture implementation called change streams. It allows us to follow every modification made to an entire database or to a chosen set of collections. Previous versions already offered a solution to this problem by means of the oplog (operation log) mechanism, but tailing it directly had serious drawbacks: heavy traffic caused by iterating over all changes to all collections, and the lack of a reliable API for resuming tracking after an interruption. Change streams solve these issues by hiding the oplog's nooks and crannies behind a refined API that interoperates with reactive streams implementations.
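As a sketch of what consuming a change stream can look like from Scala, assuming the official mongo-scala-driver and a replica set to connect to (the URI, database, and collection names here are illustrative):

```scala
import org.mongodb.scala.{Document, MongoClient}
import org.mongodb.scala.model.changestream.ChangeStreamDocument

// Change streams require a replica set; the connection URI is illustrative
val client     = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
val collection = client.getDatabase("mydb").getCollection("entities")

// watch() returns a ChangeStreamObservable; subscribing to it delivers
// every insert/update/replace/delete made to the collection
collection.watch[Document]().subscribe { (change: ChangeStreamDocument[Document]) =>
  println(s"${change.getOperationType}: ${change.getFullDocument}")
}
```

The change events also carry a resume token, which is what lets a consumer pick up where it left off after an interruption.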

Reactive Systems: Actor Model and Akka.NET

All the world's a stage,
And all the men and women merely players.
-Shakespeare  

This quote from William Shakespeare's pastoral comedy As You Like It can be very helpful when describing the Actor Model. In this post, we will see what the Actor Model is and how you can use it to build concurrent, distributed, and resilient applications.
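To make the "players" metaphor concrete, here is a minimal classic Akka actor in Scala (the post itself targets Akka.NET, but the model is the same; the names are illustrative):

```scala
import akka.actor.{Actor, ActorSystem, Props}

// Each actor is a "player": it has a mailbox and processes one message at a time
class Greeter extends Actor {
  def receive: Receive = {
    case name: String => println(s"Hello, $name!")
  }
}

val system  = ActorSystem("theatre")
val greeter = system.actorOf(Props[Greeter], "greeter")

// Messages are sent asynchronously; the caller never blocks and no locks are involved
greeter ! "World"
```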

Leaning Towards Reactive Architecture

Why Reactive Architecture?

Reactive Architecture aims to provide software that remains responsive in all situations. Reactive Systems build user confidence by ensuring that the application is available whenever users need it, under all conditions.

What Is the Goal of Reactive Architecture?

  • Be responsive to interactions with its users
  • Handle failure and remain available during outages
  • Thrive under varying load conditions
  • Be able to send, receive, and route messages in varying network conditions

What Is the Reactive Manifesto?

The Reactive Manifesto is a document authored by Jonas Bonér, Dave Farley, Roland Kuhn, and Martin Thompson. It was created in response to companies trying to cope with changes in the software landscape: multiple groups and companies had independently developed similar patterns for solving similar problems, so individual aspects of Reactive Systems had already been recognized by these groups. The Manifesto attempts to bring all of these common ideas together into a single unified set of principles, known as the Reactive Principles.

Achieve Concurrency With Akka Actors

Java comes with a built-in multi-threading model based on shared data and locks. To use this model, you decide what data will be shared by multiple threads and mark the sections of code that access the shared data as "synchronized".

It also provides a locking mechanism to ensure that only one thread can access the shared data at a time. Lock operations remove the possibility of race conditions but simultaneously introduce the possibility of deadlocks. In Scala, you can still use Java threads, but the Actor Model is the preferred approach to concurrency.
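To illustrate the shared-data-and-locks model described above, here is a minimal Scala counter protected by synchronized (plain JVM code, no Akka involved):

```scala
// A counter shared by many threads; synchronized serializes access,
// preventing the race where two threads read the same stale value
class SafeCounter {
  private var count = 0
  def increment(): Unit = this.synchronized { count += 1 }
  def value: Int = this.synchronized { count }
}

val counter = new SafeCounter
val threads = (1 to 4).map { _ =>
  new Thread(() => (1 to 1000).foreach(_ => counter.increment()))
}
threads.foreach(_.start())
threads.foreach(_.join())
println(counter.value) // 4000; without synchronized this could be less
```

With actors, the counter's state would instead live inside a single actor that processes increment messages one at a time, so no lock is needed at all.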

Backpressure in Akka Streams

Whenever we come across the words "Reactive Streams", two things come to mind: the first is asynchronous stream processing, and the second is non-blocking backpressure. In this blog, we are going to learn about the latter.

Understanding Back Pressure

According to the English dictionary, back pressure means "resistance or force opposing the desired flow of fluid through pipes". To define it in our software context, replace the flow of fluid with the flow of data: "resistance or force opposing the desired flow of data through software".
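A small Akka Streams sketch of this idea, assuming the akka-stream dependency (the stage names are from the Akka Streams API; the rates are illustrative). The consumer only signals demand for two elements per second, and that demand signal flowing upstream is the backpressure that slows the producer down instead of overflowing the consumer:

```scala
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Sink, Source}
import scala.concurrent.duration._

implicit val system = ActorSystem("backpressure-demo")
implicit val materializer = ActorMaterializer()

// The source could emit as fast as possible, but throttle only demands
// 2 elements per second from upstream -- the producer is paced, not flooded
Source(1 to 10)
  .throttle(2, 1.second)
  .runWith(Sink.foreach(n => println(s"consumed $n")))
```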

Sending HTTP Requests in 5 Minutes With Scala and Akka HTTP

This article is for the Scala programmer who wants to run one-off HTTP requests quickly. The thinking style assumed is, "I don't want to care too much. I'll give you a payload; you just give me a future containing your response". With minimal boilerplate, we'll do exactly that with Akka HTTP in 5 minutes.

You can find this article in video form on YouTube or embedded below.

The Rock the JVM blog is built with me typing my posts in plain text with minimal Markdown formatting and then generating uniform HTML from it with a simple Scala parser (I hate typing HTML). For syntax highlighting, I use an online highlighter, which happens to have a REST endpoint.

Naturally, I don't want to do it by hand, so as my HTML is generated, the highlighted syntax is automatically retrieved via Akka HTTP as a client, with very little code. My HTML generator currently has less than 100 lines of code in total.

In this article, I'm going to get you started with the simplest Akka HTTP client API in 5 minutes.

The Tiny Setup

First, you need to add the Akka libraries. Create an SBT project in your dev environment (I recommend IntelliJ). Then, add this to the build.sbt file. (If you've never used SBT before, the build.sbt file describes all the libraries that the project needs, which the IDE will download automatically)
Scala

val akkaVersion = "2.5.26"
val akkaHttpVersion = "10.1.11"

libraryDependencies ++= Seq(
  // akka streams
  "com.typesafe.akka" %% "akka-stream" % akkaVersion,
  // akka http
  "com.typesafe.akka" %% "akka-http" % akkaHttpVersion,
  "com.typesafe.akka" %% "akka-http-spray-json" % akkaHttpVersion
)



Then, in a Scala application, I'm going to write a piece of small boilerplate because Akka HTTP needs an actor system to run:
Scala

import akka.actor.ActorSystem
import akka.stream.ActorMaterializer

implicit val system = ActorSystem()
implicit val materializer = ActorMaterializer()
import system.dispatcher



Sending HTTP Requests

Now, we'll need to start sending HTTP requests. I'll use the exact HTTP API I'm using for the blog: http://markup.su/highlighter/api. The API says it needs GET or POST requests to /api/highlighter, with the parameters "language", "theme", and "source" in a request with content type application/x-www-form-urlencoded.

So, let me create a piece of Scala code:
Scala

val source =
"""
  |object SimpleApp {
  |  val aField = 2
  |
  |  def aMethod(x: Int) = x + 1
  |
  |  def main(args: Array[String]) = {
  |    println(aMethod(aField))
  |  }
  |}
""".stripMargin



and then let me create an HTTP request for it:
Scala

import java.net.URLEncoder
import akka.http.scaladsl.model._

val request = HttpRequest(
  method = HttpMethods.POST,
  uri = "http://markup.su/api/highlighter",
  entity = HttpEntity(
    ContentTypes.`application/x-www-form-urlencoded`,
    s"source=${URLEncoder.encode(source.trim, "UTF-8")}&language=Scala&theme=Sunburst"
  )
)



where I've named the arguments in the call for easy reading. In Akka HTTP, an HttpRequest contains the HTTP method (POST in our case), the URI, and a payload in the form of an HttpEntity. We specify the content type per the description in the API (notice the backticks around the name of the field) and the actual string we want to send, as described by the API. In practice, you can send other payloads, like JSON; I'll show you how to auto-convert your data types to JSON in another article.

Then, we actually need to send our request:
Scala

import akka.http.scaladsl.Http
import scala.concurrent.duration._

def simpleRequest() = {
  val responseFuture = Http().singleRequest(request)
  responseFuture
    .flatMap(_.entity.toStrict(2.seconds))
    .map(_.data.utf8String)
    .foreach(println)
}



The Akka HTTP client call is simple: just call the singleRequest method. You obtain a Future containing an HTTP response, which we can then unpack. We use its entity (= its payload) and convert it to a strict entity, meaning that we take its whole content in memory. We then take its data, which is a sequence of bytes, and convert that to a string. And we're done.

Hide it All

We can create a very nice method which hides this all away:
Scala

import scala.concurrent.Future

def highlightCode(myCode: String): Future[String] = {
  val responseFuture = Http().singleRequest(
    HttpRequest(
      method = HttpMethods.POST,
      uri = "http://markup.su/api/highlighter",
      entity = HttpEntity(
        ContentTypes.`application/x-www-form-urlencoded`,
        s"source=${URLEncoder.encode(myCode.trim, "UTF-8")}&language=Scala&theme=Sunburst"
      )
    )
  )

  responseFuture
    .flatMap(_.entity.toStrict(2.seconds))
    .map(_.data.utf8String)
}



And then you can go on with your day: pass a string, expect a future containing an HTML highlighting. All done!

If you want to practice sending HTTP requests as an exercise, you can use https://jsonplaceholder.typicode.com/ for dummy APIs; the same principles apply.
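For instance, a one-off GET against that placeholder API can follow the same pattern (the /todos/1 endpoint comes from their public docs; the rest mirrors the code in this article):

```scala
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.HttpRequest
import akka.stream.ActorMaterializer
import scala.concurrent.duration._

implicit val system = ActorSystem()
implicit val materializer = ActorMaterializer()
import system.dispatcher

// A GET is just an HttpRequest with the default method and a URI
Http()
  .singleRequest(HttpRequest(uri = "https://jsonplaceholder.typicode.com/todos/1"))
  .flatMap(_.entity.toStrict(2.seconds))
  .map(_.data.utf8String)
  .foreach(println) // prints a small JSON document describing a todo item
```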

An Introduction to Akka Clustering

This article was first published on the Knoldus blog.

Akka cluster provides a fault-tolerant, decentralized, peer-to-peer cluster membership service with no single point of failure or single point of bottleneck. It does this using a gossip protocol and an automatic failure detector.
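As a taste of that membership service, a node can subscribe to cluster membership events. A minimal sketch, assuming the akka-cluster module is on the classpath (the listener class name is illustrative; the event types are from the Akka Cluster API):

```scala
import akka.actor.{Actor, ActorLogging}
import akka.cluster.Cluster
import akka.cluster.ClusterEvent.{MemberRemoved, MemberUp}

// Logs members joining and leaving, as observed through gossip
class ClusterListener extends Actor with ActorLogging {
  val cluster = Cluster(context.system)

  override def preStart(): Unit =
    cluster.subscribe(self, classOf[MemberUp], classOf[MemberRemoved])
  override def postStop(): Unit = cluster.unsubscribe(self)

  def receive: Receive = {
    case MemberUp(member)         => log.info(s"Member up: ${member.address}")
    case MemberRemoved(member, _) => log.info(s"Member removed: ${member.address}")
  }
}
```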

Java Is Not Dying (Yet)

I’m a Java/Scala/Groovy guy; that’s not a mystery. Some people, especially the youngsters, roll their eyes when they learn about it. Old fashioned, ineffective when compared to modern languages, and doomed — that’s basically what I hear all the time. But is that really the case?

Old Fashioned

Yes, the base constructs and constraints that Java offers are old fashioned and rely on how code was written 20 years ago. Programming languages are not created in a sterilized bubble. Developers’ habits and needs are central to language design, and while inventors try to infuse innovation into it, it’d be dumb to ignore how people will actually use it. The first milestone for Java dates back to 1995, and while some of its foundational blocks looked visionary for at least 10 years, it is pretty obvious that 24 years later… not so much.

Akka Persistence: Making Actor Stateful

Akka is a toolkit for designing scalable, resilient systems that span processor cores and networks. Akka allows you to focus on meeting business needs instead of writing low-level code to provide reliable behavior, fault tolerance, and high performance.

An Akka actor can have state, but it is lost when the actor is shut down or crashes. Fortunately, we can persist actor state using Akka Persistence, one of the Akka extensions.
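A minimal sketch of a stateful, persistent counter, assuming the akka-persistence module and a configured journal (the class, message, and event names are illustrative):

```scala
import akka.persistence.PersistentActor

case object Increment
case class Incremented(amount: Int)

// State is rebuilt after a restart by replaying the persisted events
class CounterActor extends PersistentActor {
  override def persistenceId: String = "counter-1"

  var count = 0

  override def receiveCommand: Receive = {
    case Increment =>
      persist(Incremented(1)) { event =>
        count += event.amount // update state only after the event is stored
      }
  }

  override def receiveRecover: Receive = {
    case Incremented(amount) => count += amount // replayed on recovery
  }
}
```

The key design point is that the actor persists events (what happened), not snapshots of state; the current count is always derivable by replaying them.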