Internationalization and Localization: The Challenges

In a globalized world, software companies serve customers who do business in multiple geographical regions. Hence we developers need to ensure that the software we build is usable in different languages and cultural contexts. In other words, our software must be designed with internationalization (i18n) in mind. This includes using Unicode character encodings, flexible layouts that accommodate bi-directional text, and externalizing strings that vary across languages. Localization (l10n) is the process of building on i18n to adapt the software to the locale, language, and cultural requirements of a given market. Very good definitions of internationalization and localization can be found in this W3C article.

Moreover, UX, Engineering, Product Management, and other relevant stakeholders must ensure that the level and depth of i18n and l10n are consistent across the product suite. For example, it is very user-unfriendly if one part of the UI is in Spanish while another is in English. It can even be misleading if one product uses the comma as a thousands separator while another uses it as a decimal separator, not to mention the American date format mm/dd/yyyy vs. the European dd/mm/yyyy. This is especially problematic in embedded use cases, where the user has no way of telling where one UI ends and the other begins.
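The separator and date-format pitfalls above are exactly what locale-aware formatters solve. A minimal sketch in Java (class and method names are illustrative, not from any specific product):

```java
import java.text.NumberFormat;
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.Locale;

// Illustrative sketch: the same number and date rendered per locale.
// Locale-aware formatters remove the need to hard-code separators or
// date patterns in the application itself.
public class LocaleFormatting {

    // Format a number using the conventions of the given locale.
    static String formatNumber(double value, Locale locale) {
        return NumberFormat.getNumberInstance(locale).format(value);
    }

    // Format a date with an explicit pattern (shown here only to contrast
    // the American and European orderings; in real code a locale-aware
    // FormatStyle is preferable to a hard-coded pattern).
    static String formatDate(LocalDate date, String pattern) {
        return DateTimeFormatter.ofPattern(pattern).format(date);
    }

    public static void main(String[] args) {
        double amount = 1234567.89;
        LocalDate date = LocalDate.of(2024, 3, 14);

        System.out.println(formatNumber(amount, Locale.US));      // 1,234,567.89
        System.out.println(formatNumber(amount, Locale.GERMANY)); // 1.234.567,89
        System.out.println(formatDate(date, "MM/dd/yyyy"));       // 03/14/2024
        System.out.println(formatDate(date, "dd/MM/yyyy"));       // 14/03/2024
    }
}
```

Note how the comma and the period swap roles between the US and German renderings of the very same value, which is the source of the ambiguity described above.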

Kafka: The Basics

Data synchronization is one of the most important aspects of any product. Apache Kafka is one of the most popular choices when designing a system that expects near-real-time propagation of large volumes of data. Even though Kafka has simple yet powerful semantics, working with it requires insight into its architecture. This article summarizes the most important design aspects of Kafka as a broker and applications that act as data producers or consumers.

About Kafka

Apache Kafka originated at LinkedIn, where it was developed as a highly scalable system for distributing telemetry and usage data. Over time, Kafka evolved into a general-purpose streaming data backbone that combines high throughput with low data delivery latencies. Internally, Kafka is a distributed log. A (commit) log is an append-only data structure: producers append records to its end, and subscribers read the log from the beginning to replay the records. The same data structure is used, for example, in database write-ahead logs. A distributed log means that the data structure is not hosted on a single node but is spread across many nodes to achieve both high availability and high performance.
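The commit-log idea described above can be sketched in a few lines of Java. This is an in-memory toy for illustration only, not Kafka's actual implementation, which is partitioned, replicated, and persisted to disk:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal in-memory sketch of a commit log: producers append records to
// the end; each consumer keeps its own offset and replays the log from
// the beginning independently. Reads never mutate the log.
public class CommitLog {
    private final List<String> records = new ArrayList<>();

    // Append a record and return the offset it was stored at.
    public synchronized long append(String record) {
        records.add(record);
        return records.size() - 1;
    }

    // Read the record at a given offset.
    public synchronized String read(long offset) {
        return records.get((int) offset);
    }

    // Offset one past the last record, i.e. where the next append lands.
    public synchronized long endOffset() {
        return records.size();
    }

    public static void main(String[] args) {
        CommitLog log = new CommitLog();
        log.append("user-signed-up");
        log.append("user-clicked");

        // A consumer replays the whole log from offset 0.
        for (long offset = 0; offset < log.endOffset(); offset++) {
            System.out.println(offset + ": " + log.read(offset));
        }
    }
}
```

Because consumers track their own offsets, any number of them can read the same log at different positions without coordinating with the producers, which is the property that makes the log a good fit for data distribution.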

Choosing the Right IAM Solution

Identity and Access Management (IAM) is one of the critical components of any commercial software. As the name suggests, IAM solutions cover the identity of the users, their roles, privileges, authentication, and authorization. Long story short, IAM is based on proven industry-standard protocols and is the backbone of software security.

Because access management is a complex topic, engineering teams usually start with a simplistic local authentication model. In such a case, the application itself manages users and their passwords, and serves a login screen whenever a user needs to authenticate. This is an easy solution that, unfortunately, reaches its limits once the software matures and users expect integration with various single sign-on providers, self-service registration, password reset, and integrations with other products. Moreover, security engineers on the customer's side require adherence to certain policies: multi-factor authentication, password complexity, and reset policies. The complexity of the once simple local IAM increases exponentially, and it makes little sense to try to implement all these features yourself using only basic libraries. The IAM landscape is vast, the protocols are complex, and it is all too easy to code something just imprecise enough to open the door for an attacker.

From Zero to Kubernetes: The Fast Track

Historically, enterprise Java development was known (and feared) for its steep learning curve. Deploying even a simple application or configuring an application server took dozens of lines of XML. With the rise of DevOps, this hassle became merely the beginning of a long and painful process: the developer (or DevOps engineer, if you wish) is responsible not only for configuration but also for running the application. This historically required either curator scripts that ensure the application is running (and restart it when it is not) or manual intervention in case of application failure.

Modern Trends to The Rescue

Fortunately for us, the times of laborious configuration are over. It is now possible to use tools that automate most tasks that were previously time-intensive or required active monitoring. In this tutorial, we demonstrate how to create a simple Java (Spring Boot) application, containerize it, and deploy it to a Kubernetes cluster. Our application, though simple in functionality, will have resiliency built in and will recover automatically in the event of failure. Following this example, the whole application can be created and deployed into a testing cluster in under 30 minutes, without much configuration.
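The automatic recovery mentioned above is typically expressed as a Kubernetes Deployment with a liveness probe. The following is a minimal sketch; the application name, image reference, and port are placeholders, and the health path assumes Spring Boot Actuator is on the classpath:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app                 # placeholder name
spec:
  replicas: 2                    # Kubernetes keeps two pods running at all times
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: registry.example.com/demo-app:1.0   # placeholder image
          ports:
            - containerPort: 8080
          livenessProbe:               # restart the container if it stops responding
            httpGet:
              path: /actuator/health   # Spring Boot Actuator health endpoint
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 10
```

With this manifest, a crashed or unresponsive pod is restarted by the kubelet and the Deployment controller keeps the desired replica count, replacing the curator scripts and manual intervention described earlier.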