SKP’s Algorithms and Data Structures #9: Java Problem: Monkeys in the Garden

[Question/Problem Statement is the Property of Techgig] 

Monkeys in the Garden [www.techgig.com]

In a garden, trees are arranged in a circular fashion with an equal distance between two adjacent trees. The heights of the trees may vary. Two monkeys lived in that garden and were very close to each other. One day they quarreled due to some misunderstanding. Neither of them was ready to leave the garden, but each of them wants that, if the other wants to meet him, it should take the maximum possible time to reach him, given that they both live in the same garden.

SKP’s Algorithms and Data Structures #8: Java Problem: Simple Inheritance (OOPs)

[Question/Problem Statement is the Property of Techgig]
 
Java Inheritance / Simple OOPs [www.techgig.com]
Create Two Classes:

BaseClass
The Rectangle class should have two data fields, width and height, of type int. The class should have a display() method to print the width and height of the rectangle separated by a space.

DerivedClass
The RectangleArea class is Derived from the Rectangle class, i.e., it is the Sub-Class of the Rectangle class. The class should have a read_input() method to Read the Values of width and height of the Rectangle. The RectangleArea class should also Override the display() Method to Print the Area (width*height) of the Rectangle.

Input Format
The First and Only Line of Input contains two space-separated Integers denoting the width and height of the Rectangle.

Constraints
1 <= width,height <= 10^3

Output Format
The Output Should Consist of Exactly Two Lines.
In the First Line, Print the Width and Height of the Rectangle Separated by Space.
In the Second Line, Print the Area of the Rectangle.


[Explanation of the Solution]
This is the Simplest of all OOPs Questions! A Demonstration of Inheritance and Overriding (and, Very Loosely, the Liskov Substitution Principle from SOLID).


[Source Code, Sumith Puri (c) 2021 — Free to Use and Distribute]
Java
 




/*
 * Techgig Core Java Basics Problem - Get Simple OOPs Right!
 * Author: Sumith Puri [I Bleed Java!]; GitHub: @sumithpuri;
 */

import java.io.*;
import java.util.*;

class Rectangle {

  private int width;
  private int height;

  public void display() {
    System.out.println(width + " " + height);
  }

  public int getWidth() {
    return width;
  }

  public void setWidth(int width) {
    this.width = width;
  }

  public int getHeight() {
    return height;
  }

  public void setHeight(int height) {
    this.height = height;
  }
}

class RectangleArea extends Rectangle {

  public void read_input() {
    Scanner scanner = new Scanner(System.in);
    setWidth(scanner.nextInt());
    setHeight(scanner.nextInt());
  }

  @Override
  public void display() {
    super.display();
    System.out.println(getWidth() * getHeight());
  }
}

public class CandidateCode {

  public static void main(String args[]) throws Exception {
    RectangleArea rectangleArea = new RectangleArea();
    rectangleArea.read_input();
    rectangleArea.display();
  }
}



SKP’s Algorithms and Data Structures #7: Functional Programming and Java Lambdas

[Question/Problem Statement is the Property of Techgig]

Java Advanced — Lambda Expressions [www.techgig.com] 
Write the Following Methods that Return a Lambda Expression Performing a Specified Action: PerformOperation isOdd(): The Lambda Expression must return if a Number is Odd or if it is Even. PerformOperation isPrime(): The Lambda Expression must return if a Number is Prime or if it is Composite. PerformOperation isPalindrome(): The Lambda Expression must return if a Number is a Palindrome or if it is Not.

Input Format
Input is as Shown in the Format Below
Input
3
1 3
2 7
3 7777

Constraints
NA

Output Format
Output is as Shown in the Format Below
Output
ODD
PRIME
PALINDROME


[Explanation of the Solution]
This is a Good Question to Refresh Java 8 Lambdas. In my Solution, I Implemented the Functional Interfaces within my main() Method and Assigned them to Local Reference Variables.
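A minimal sketch of that approach, assuming the harness's PerformOperation functional interface is equivalent to java.util.function.Function<Integer, String> (which stands in for it here so the sketch compiles stand-alone; the class name LambdaSketch is illustrative):

```java
import java.util.function.Function;

public class LambdaSketch {

    // Each method returns a lambda; the lambda does the actual check.
    static Function<Integer, String> isOdd() {
        return n -> (n % 2 != 0) ? "ODD" : "EVEN";
    }

    static Function<Integer, String> isPrime() {
        return n -> {
            if (n < 2) return "COMPOSITE";
            for (int i = 2; i * i <= n; i++) {
                if (n % i == 0) return "COMPOSITE";
            }
            return "PRIME";
        };
    }

    static Function<Integer, String> isPalindrome() {
        return n -> {
            String s = Integer.toString(n);
            String r = new StringBuilder(s).reverse().toString();
            return s.equals(r) ? "PALINDROME" : "NOT PALINDROME";
        };
    }

    public static void main(String[] args) {
        // Mirrors the sample input above: 3, 7, 7777.
        System.out.println(isOdd().apply(3));           // ODD
        System.out.println(isPrime().apply(7));         // PRIME
        System.out.println(isPalindrome().apply(7777)); // PALINDROME
    }
}
```

The point of the exercise is that the method body is a single lambda expression, not a named class implementing the interface.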


SKP’s Algorithms and Data Structures #6: Java Problem: Active Traders

[Question/Problem Statement is the Property of HackerRank]

Algorithms/Data Structures — [Problem Solving] 
An Institutional Broker wants to Review their Book of Customers to see which are Most Active. Given a List of Trades by Customer Name, Determine which Customers Account for At Least 5% of the Total Number of Trades. Order the List Alphabetically Ascending by Name.


Example
n = 23
customers = {"Bigcorp", "Bigcorp", "Acme", "Bigcorp", "Zork", "Zork", "Abe", "Bigcorp", "Acme", "Bigcorp", "Bigcorp", "Zork", "Bigcorp", "Zork", "Zork", "Bigcorp", "Acme", "Bigcorp", "Acme", "Bigcorp", "Acme", "Littlecorp", "Nadircorp"}

Bigcorp had 10 Trades out of 23, which is 43.48% of the Total Trades.
Both Acme and Zork had 5 Trades Each, which is 21.74% of the Total Trades.
Littlecorp, Nadircorp, and Abe had 1 Trade Each, which is 4.35% of the Total Trades.

So the Answer is ["Acme","Bigcorp","Zork"] (In Alphabetical Order) Because Only These Three Companies Placed at Least 5% of the Trades.


Function Description

Complete the Function mostActive in the Editor Below.

mostActive
has the following parameter:
String customers[n]: An Array of Customer Names
(Actual Question Says String Array, But Signature is List of Strings)
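A possible sketch of mostActive, using the List-of-Strings signature mentioned above (the class name MostActiveSketch is illustrative, not from the original):

```java
import java.util.*;
import java.util.stream.*;

public class MostActiveSketch {

    // Customers with at least 5% of all trades, in ascending alphabetical order.
    static List<String> mostActive(List<String> customers) {
        int n = customers.size();
        // Count trades per customer name.
        Map<String, Long> counts = customers.stream()
                .collect(Collectors.groupingBy(c -> c, Collectors.counting()));
        // Keep names at or above the 5% threshold, sorted ascending.
        return counts.entrySet().stream()
                .filter(e -> e.getValue() * 100.0 / n >= 5.0)
                .map(Map.Entry::getKey)
                .sorted()
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // The example from the problem statement (n = 23).
        List<String> trades = Arrays.asList(
                "Bigcorp", "Bigcorp", "Acme", "Bigcorp", "Zork", "Zork", "Abe",
                "Bigcorp", "Acme", "Bigcorp", "Bigcorp", "Zork", "Bigcorp", "Zork",
                "Zork", "Bigcorp", "Acme", "Bigcorp", "Acme", "Bigcorp", "Acme",
                "Littlecorp", "Nadircorp");
        System.out.println(mostActive(trades)); // prints [Acme, Bigcorp, Zork]
    }
}
```

A single counting pass plus a filter-and-sort keeps this at O(n + k log k) for n trades and k distinct names.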

SKP’s Algorithms and Data Structures #5: Java Problem: Changes in Usernames

[Question/Problem Statement is the Adapted from HackerRank]

Algorithms/Data Structures — [Problem Solving] 
There is a Specific Need for Changes in a List of Usernames. For Each Username in a Given List, Determine if the Username can be Modified so that it Moves Ahead in a Dictionary. The Allowed Modification is that Alphabets can Change Positions in the Given Username.

Example
usernames[] = {"Aba", "Cat"}
 
"Aba" can be Changed to only "Baa" — Hence, It can Never Find a Place Ahead in the Dictionary. Hence, Output will be "NO". "Cat" can be Changed to "Act", "Atc", "Tca", "Tac", "Cta" and Definitely "Act" will Find a Place Before "Cat" in the Dictionary. Hence, Output will be "YES".

[Function Description]
Complete the function possibleChanges in the Editor Below.
 
possibleChanges has the Following Parameters:
String usernames[n]: An Array of User Names
 
Returns String[n]: An Array with "YES" or "NO" Based on Feasibility
(Actual Question Says String Array, But Signature is List of Strings)


Constraints
• [No Special Constraints Exist, But Cannot Recall Exactly]


Input Format 
The First Line Contains an Integer, n, the Number of Elements in usernames.
Each of the n Subsequent Lines (where 0 <= i < n) Contains a String usernames[i].

[Sample Case 0 — Sample Input For Custom Testing]         
8
Aba
Cat
Boby
Buba
Bapg
Sungi
Lapg
Acba
       
Sample Output (Each Should Be on a Separate Line) 
NO YES NO YES YES YES YES YES
   

 
[Explanation of the Solution]
This is again a Good Question from HackerRank to Test Your Logic / Problem Solving Abilities. The Core Point to Handle is that, For Each Combination of 2 Alphabets in the Username String, We Need to Check if the Latter Occurring Character is Less than the Former Occurring Character (Alphabetically). For Example, in the String "Bapg" — For a Selection of "Ba" from "Bapg" — We have "a" Occurring Before "B" in the English Alphabet. We can Use Two Loops (One Nested) to Decide for Each Combination of Two Alphabets. The Time Complexity of this Solution is O(n^2).
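A sketch of that nested-loop check, comparing characters case-insensitively as in the "Bapg" example (an assumption; the original statement may impose rules not captured here, and the class name PossibleChangesSketch is illustrative):

```java
import java.util.*;

public class PossibleChangesSketch {

    // For each username: "YES" if some later character is alphabetically
    // smaller than an earlier one (so swapping the pair yields a string
    // that comes earlier in the dictionary), otherwise "NO".
    static List<String> possibleChanges(List<String> usernames) {
        List<String> result = new ArrayList<>();
        for (String u : usernames) {
            String s = u.toLowerCase();
            boolean canMoveAhead = false;
            // Two loops (one nested) over every pair of positions.
            for (int i = 0; i < s.length() && !canMoveAhead; i++) {
                for (int j = i + 1; j < s.length(); j++) {
                    if (s.charAt(j) < s.charAt(i)) {
                        canMoveAhead = true;
                        break;
                    }
                }
            }
            result.add(canMoveAhead ? "YES" : "NO");
        }
        return result;
    }

    public static void main(String[] args) {
        // "Cat" -> "Act" exists, so YES; "abc" is already minimal, so NO.
        System.out.println(possibleChanges(Arrays.asList("Cat", "abc")));
    }
}
```

The inner scan can stop at the first out-of-order pair, since one such pair is enough to answer "YES" for that username.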
 


[Source Code, Sumith Puri (c) 2021 — Free to Use and Distribute]

MS, Leave My Settings Alone

Even with limited formal education and an IQ score only moderately above average, I have no trouble discerning the manipulative tactics of Microsoft. Their kindergarten psychology, used to keep you attached to, and dependent upon, their proprietary products, is less than amusing. This tit-for-tat, childish approach of MS at keeping you 100% 'in the fold' is becoming quite tiresome, to say the least.

My issue today is file type association. As far back as XP (and possibly before) file type associations were set in the Control Panel. Once set with your preferences, they had a tendency to stay at those desired settings.

Windows 10 uses Settings instead of Control Panel as a place to make changes to file type associations. While I see no benefit in this change, I have no problems with it. I don't care where I designate my preferences as long as I'm able to make them and keep them.

In the good old days, when I set my file type associations, they stayed the way I set them. Update after update, they stayed the way I set them.

Now, with Windows 10, things are different. Every time MS has the occasion to tinker with my machine, updates and such, I find I need to reset associations all over again.

MS, do know this: I chose VLC and MPC-HC as my media players for most formats simply because WMP would not play them, or at least not without third-party codecs.

I chose (free) VLC and (free) MPC-HC to play my media files because they do so without any hassle. Apparently, I made a choice that doesn't sit well with MS.

But please, can we be adults about this matter? When updates are installed, I soon find I am being asked what app I want to play mp3s, flacs, wavs, mp4s, jpgs, pngs, and the list goes on. I am offered a list of apps to choose from, with Windows apps at the top of the list. From there, the list includes apps that won't even play the format in question. This is childish.

MS makes me hunt for the particular app I want as it is not on the presented list. The very same app I've set as my choice many times over. Why is it so difficult to leave my settings alone? Not everyone is interested in being a Windows insider.

Upon notification of the last two updates, I was informed they contained some very good things. I was presented with a link to see just what these good things were. The link provided no such thing. It did offer another link to where I would have to get an app to be able to see those 'good things'. By now, I've lost all interest. If I have to acquire a special app just to get MS to tell me what's in the latest updates... to hell with it. This is carrying proprietorship a bit too far.

As for my app settings, I know what is best for me. I will continue to change them back to my choices as often as they are switched from my choices. Childish tit for tat.

JVM Advent Calendar: How to Ally as Java Devs

Technology reflects the people who make it. Today, women make up 52 percent of the world population; however, most technology is designed by men. Women hold only approximately twenty percent of technology jobs and earn on average 28 percent less than what men make. Yet, it is proven that teams with both men and women are more profitable, smarter, faster, and more innovative — their collective IQ rises. As technology becomes more pervasive and we move to a more digital society, the impact grows beyond just those in the technology world.

In my role, as Director and Chairperson of the Java Community Process (JCP) Program, I also serve as an international speaker engaging the 12 million+ Java developer community worldwide — and in this space, as in the stats above, it is also primarily men. I love my job, and I love working with all kinds of people, but increasingly, community members ask me what we can do to change the ratio.

Designing Emotional Interfaces Of The Future


Gleb Kuznetsov

Emotions play a vital role in our decision-making process. One second of emotion can change the whole reality for people engaging with a product.

Humans are an emotionally driven species; we choose certain products not because of what makes sense, but because of how we think they will make us feel. The interfaces of the future will use the concept of emotions within the foundation of product design. The experiences that people use will be based both on intellectual quotient (IQ) and emotional quotient (EQ).

This article is my attempt to look into the future and see what interfaces we will design in the next ten years. We’ll be taking a closer look at the three mediums for interaction:

  1. Voice
  2. Augmented Reality (AR)
  3. Virtual Reality (VR)


Practical Examples Of Future Emotional Interfaces

What will interfaces look like in the future? Even though we do not have an answer to this question just yet, we can discuss what characteristics interfaces might have. I'm sure that we will eventually move away from interfaces full of menus, panels, and buttons, and move towards more 'natural interfaces', i.e. interfaces that extend our bodies. The interfaces of the future will not be locked in a physical screen; instead, they will use the power of all five senses. Because of that, they will require a shallower learning curve — ideally, no learning curve at all.

The Importance Of EQ (Emotional Intelligence) In Business

Apart from making the experience more natural and reducing the learning curve, designing for emotion has another benefit for product creators: it improves user adoption of the product. It’s possible to use humans’ ability to act on emotions to create better user engagement.

Voice Interfaces That Feel Real

Products that use voice as the primary interface are becoming more and more popular. Many of us use Amazon Echo and Apple Siri for daily routine activities such as setting an alarm clock or making an appointment. But a majority of voice interaction systems available on the market today still have a natural limitation: they do not take user emotions into account. As a result, when users interact with products like Google Now, they have a strong sense of communicating with a machine — not a real human being. The system responds predictably, and their responses are scripted. It’s impossible to have a meaningful dialogue with such a system.

But there are some completely different systems available on the market today. One of them is Xiaoice, a social chatbot application. This app has an emotional computing framework at its core; the app is built on the idea that it’s essential to establish an emotional connection with the user first. Xiaoice can dynamically recognize emotion and engage the user throughout long conversations with relevant responses. As a result, when users interact with Xiaoice they feel like they’re having a conversation with a real human being.

The limitation of Xiaoice is that it’s a text-based chat app. It’s evident that you can achieve a more powerful effect with voice-based interactions (the human voice has characteristics, such as tone, that can convey a powerful spectrum of emotions).

Many of us have seen the power of voice-based interactions in the film “Her” (2013). Theodore (the main character, played by Joaquin Phoenix) fell in love with Samantha (a sophisticated OS). This also makes us believe that one of the primary purposes of voice-based systems of the future will be to serve as a virtual companion to users. The most interesting thing about this film is that Theodore did not have a visual image of Samantha — he only had her voice. To build that kind of intimacy, it’s essential to generate responses that reflect a consistent personality. This will make the system both predictable and trustworthy.

Technology is still a long way from a system like Samantha, but I believe that voice-first multimodal interfaces will be the next chapter in the evolution of voice-enabled interfaces. Such interfaces will use voice as a primary way of interaction and provide additional information in a context that creates and builds a sense of connection.

An example of a voice interface designed for Brain.ai (Image credit: Gleb Kuznetsov)

The Evolution Of AR Experience

Augmented Reality (AR) is a digital overlay on top of the real world that transforms the objects around us into interactive digital experiences. Our environment becomes more ‘intelligent’ and users have an illusion of ‘tangible’ objects at the tips of their fingers, which establishes a deeper connection between a user and a product (or content).

Reimagine Existing Concepts Using AR

The unique aspect of AR is that it gives us an extraordinary ability to physically interact with digital content. It allows us to see things that we could not see before, and this helps us learn more about the environment around us. This property of AR helps designers create a new level of experience using familiar concepts.

For example, by using mobile AR, it’s possible to create a new level of in-flight experience that allows a passenger to see detailed information about her class or current flight progress:

AR in flight experience for Airbus A380. (Image credit: Gleb Kuznetsov)

AR helps us find our way through spaces and get the required information at a glance. For example, AR can be used to create rich contextual hints for your current location. The technology known as SLAM (Simultaneous Localization And Mapping) is perfect for this. SLAM allows real-time mapping of an environment and also makes it possible to place multimedia content into the environment.

There are massive opportunities for providing value to users. For example, users can point their devices at a building and learn more about it right there on their screens. It significantly reduces effort and creates an emotional experience of ease by simplifying navigation and access.

Providing additional information in context (Image credit: Gleb Kuznetsov)

The environment around us (such as walls or floors) can become a scene for interactivity in ways that used to be limited to our smartphones and computers.

The concept that you see below does just that; it uses a physical object (white wall) as a canvas for the content usually delivered using a digital device:

The concept of interactive walls — a digital overlay on top of the real world. (Image credit: Gleb Kuznetsov)

Avoiding Information Overload

Many of us saw the video called “HYPER-REALITY”. In this video, the physical and digital worlds have merged, and the user is overwhelmed with a vast amount of information.

Technology allows us to display several different objects at the same time. When it’s misused, it can easily cause overload.

Information overload is a serious issue that has a negative impact on user experience and avoiding it will be one of the goals of designing for AR. Well-designed apps will filter out elements that are irrelevant to users using the power of AI.

Advanced Personalization

Personalization in digital experience happens when the system curates the content or functionality to users’ needs and expectations in real time. Many modern mobile apps and websites use the concept of personalization to provide relevant content. For example, when you visit Netflix, the list of movies you see is personalized based on your interests.

AR glasses allow creating a new level of personalization, i.e. an ‘advanced’ level of personalization. Since the system ‘sees’ what the user sees, it’s possible to utilize this information to make a relevant recommendation or provide additional information in context. Just imagine that you’ll soon be wearing AR glasses, and the information that is transferred to your retina will be tailored to your needs.

Here’s a foretaste of what’s in store for us:

Moving From Augmented Reality Towards Virtual Reality To Create An Immersive Experience

AR experience has a natural limitation. As users, we have a clear line between us and the content; this line separates one world (AR) from the other (the real world). This line creates a sense that the AR world is not quite real.

You probably know how to solve this limitation, i.e. with virtual reality (VR), of course. VR is not exactly a new medium, but it’s only been in the last few years that technology has reached a point where it allowed designers to create immersive experiences.

Immersive VR experiences remove the barrier between the real world and the digital one. When you put on a VR headset, it’s difficult for your brain to process whether the information that you are receiving is real. The idea of how VR experiences can look in the near future is well explained in the movie “Ready Player One”:

Here is what designers need to remember when creating immersive virtual environments:

  1. Write a story
    Meaningful VR has a strong story at its core. That’s why before you even start designing for a VR environment, you need to write a narrative for the user journey. A powerful tool known as a ‘storyboard’ can help you with that. Using a storyboard, it’s possible to create a story and examine all the possible outcomes. When you examine your story, you will see when and how to use both visual and audio cues to create an immersive experience.
  2. Create a deeper connection with a character
    In order to make users believe that all things around them in VR are real, we need to create a connection with the characters played by the users. One of the most obvious solutions is to include a representation of users’ hands in the virtual scene. This representation should be of actual hands — not just a rigged replica. It’s vital to consider different factors (such as gender or skin color) because it’ll make interactions more realistic.
    A user can look at his or her hands and see themselves appear as a character. (Source: leapmotion)

    It’s also possible to bring some objects from real life to a VR environment in order to create this connection. For instance, a mirror. When the user looks at a mirror and sees their character in the reflection, it enables more realistic interactions between the user and virtual characters.
    A VR user looks into a virtual mirror and sees himself as a character in a VR environment. (Image credit: businesswire)
  3. Use gestures instead of menus
    When designing immersive VR experiences, we can’t rely on traditional menus and buttons. Why? Because it is relatively easy to break a sense of immersion by showing a menu. Users will know that everything around them is not real. Instead of using traditional menus, designers need to rely on gestures. The design community is still in the process of defining a universal language for using gestures, and taking part in this activity is a fun and exciting exercise. The tricky part is to make gestures familiar and predictable for users.
    Hovercast VR menu is an attempt to reuse existing concepts of interaction for VR experience. Unfortunately, this concept can break the sense of immersion. A new medium requires a new model of interaction.
  4. Interact with elements in the VR environment
    To create an environment that feels real, we need to give the user the ability to interact with objects in that reality. Ideally, all objects in the environment can be designed in a way that allows users to touch and inspect them. Such objects will act as stimuli and will help you create a more immersive experience. Touch is extremely important for exploring the environment; the most important information that babies get in the first days is received through touch.
  5. Share emotion in VR
    VR has a real opportunity to become a new level of social experience. But to make it happen, we need to solve one significant problem, i.e. bring the non-verbal cues into the interaction.

    When we interact with other people, a significant part of the information that we get comes from body language. Surprise, disgust, anger — all these emotions are in our facial expressions, and during face-to-face interactions, we infer information from the eye region. It’s important to provide this information when people interact in a VR environment to create more realistic interactions.

    The good news is that head-mounted devices (HMDs) will soon support emotion recognition. Almost any area of human-to-human interaction will benefit from facial expressions in VR.
    Sharing emotions in VR space. Credits: Rachel Metz of MITReview
  6. Design sound and music suitable for a VR environment
    Audio is a huge component of the immersive experience. It’s impossible to create a genuinely immersive experience without designing sound for the environment. The sound can both be used as a background element (i.e., ambient sound of wind) or directional. In the latter case, the sound can be used as a cue — by playing with directionality (where the sound comes from) and distance (it’s possible to focus user attention on particular elements).

    When it comes to designing audio for VR, it’s essential to make the sound 3D. 2D sound doesn’t work for VR very well because it makes everything too flat. 3D sound is sound that you can hear in every direction around you — in front, behind, above and below — all over the place. You don’t need specialized headphones to experience 3D sound; it’s possible to create it using the standard stereo speakers of an HMD.

    Head tracking is another critical aspect of a good sound design. It’s vital to make sounds behave in a realistic manner. That’s why when a user moves his head, the sound should change according to the head movement.
  7. Prevent motion sickness
    Motion sickness is one of the primary pain-points in VR. It’s a condition in which a disagreement exists between visually perceived movement and the vestibular system’s sense of movement. It’s vital to keep users comfortable while they experience VR.

    There are two popular theories about what causes motion sickness in VR:
    • ‘Sensory Conflict’ Theory
      According to this theory, motion sickness occurs as a result of a sensory disagreement between expected motion and motion that is actually experienced.
    • ‘Eye Movement’ Theory
      In the book “The VR Book: Human-Centered Design For Virtual Reality”, Jason Jerald mentions that motion sickness occurs because of the unnatural eye motion that is required to keep the scene’s image stable on the retina.
    Here are a few tips that will help you prevent users from reaching for the sickbag:
    • Physical body movement should match with visual movement. Sometimes even a small visual jitter can have an enormously negative impact on the experience.
    • Let users rest between moving scenes (this is especially important when the VR experience is really dynamic).
    • Reduce virtual rotations.

Conclusion

When we think about the modern state of product design, it becomes evident that we are only at the tip of the iceberg because we’re pretty limited to flat screens.

We’re witnessing a fundamental shift in Human-Computer Interaction (HCI) — a rethinking of the whole concept of digital experience. In the next decade, designers will break the glass (the era of mobile devices as we know them today) and move to the interfaces of the future — sophisticated voice interfaces, advanced AR, and truly immersive VR. And when it comes to creating a new experience, it’s essential to understand that the only boundary we have is our brains telling us it’s got to be the way it’s always been.
