Analytics Web Socket

Web analytics is the continuous measurement and collection of web usage activity, which requires ongoing communication between client and server. With a REST API, every exchange carries the overhead of establishing a connection and performing a TLS handshake. On top of that, each user event requires its own request/response round trip to the backend just to report it.

For this kind of scenario, WebSockets come to the rescue. A WebSocket is a full-duplex communication channel: the connection is made once, and data is sent and received over that persistent connection for as long as it lives. For more on WebSockets, check out this article.
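That one-time connection is established with a single HTTP Upgrade handshake. Per RFC 6455, the server proves it understood the request by hashing the client's `Sec-WebSocket-Key` together with a fixed GUID. A minimal sketch of that server-side computation, using only the standard library:

```python
import base64
import hashlib

# Fixed GUID defined by RFC 6455 for the WebSocket opening handshake.
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def websocket_accept(client_key: str) -> str:
    """Compute the Sec-WebSocket-Accept value the server must return."""
    digest = hashlib.sha1((client_key + WS_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")

# Example key taken from RFC 6455, section 1.3.
print(websocket_accept("dGhlIHNhbXBsZSBub25jZQ=="))
# → s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

After this single handshake, both sides keep the TCP (and TLS) session open and exchange framed messages with no further handshakes, which is exactly what makes WebSockets attractive for high-frequency analytics events.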

Configuring SSL/TLS Connection Made Easy

Setting up encryption for your application, how hard can it be? I thought it would be easy, as all communication with modern web applications should be encrypted, right? Well, my expectations were wrong... While setting it up, I encountered a couple of hidden difficulties: the configuration is vague, verbose, not straightforward to set up, hard to debug, and not unit-test friendly.

For this article, I'll assume you already have a basic understanding of certificates, keystores, encryption protocols, and the SSL handshake. If not, I recommend going through this article: How to Easily Set Up Mutual TLS.
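The article itself works in Java, but the "secure by default" starting point it argues for is easy to illustrate. As a sketch (using Python's stdlib `ssl` module for brevity, not the article's own code), a well-configured client context enables certificate verification and hostname checking from the start:

```python
import ssl

# create_default_context() gives sane, secure defaults:
# certificate verification and hostname checking are on.
ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)

# Refuse anything older than TLS 1.2.
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True
```

Starting from a secure default and loosening only what you must is far easier to debug (and to unit-test) than assembling a context option by option.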

Java: It’s Time to Move Your Application to Java 11

In the past few years, several exciting things have happened in the Java world, and certainly among them is the new release cadence. Previously, it could take two or three years for a new version to be released. Now, a new version of the JVM ships every six months, in addition to the LTS releases. Work is already underway on Java 14; however, as the latest research reveals, use of Java 8 remains very strong in the market. This post aims to encourage an upgrade to Java 11, the current Java LTS, and to discuss the reasons for doing so.

Several Java surveys show that Java 8 is still the most popular version among developers: Snyk's survey puts Java 8 at 64% versus 25% for Java 11, and the Eclipse Foundation's numbers are even more lopsided, with 80% on Java 8 against 20% on Java 11.

What Is SSL Offloading and How it Works

SSL (Secure Sockets Layer) certificates are issued to a website to ensure that it is secure and won't fall prey to malicious hackers. Because encrypting and decrypting traffic places a heavy load on the web server, SSL offloading moves that SSL processing off the web server, relieving it of the work of decrypting incoming traffic.

When this process is designed specifically to speed up SSL processing, it is known as SSL acceleration. The SSL offloading device handles both encryption and decryption, the two operations that would otherwise slow the web server down.

OPEN SSL – The Hero Nobody Talks About

When we see HTTPS and HTTP connections, most of us can't differentiate between them. We ask ourselves: what difference can a single "S" make? Little do we know that the letter "S" is all that matters.

The difference between an HTTP and an HTTPS connection is not just a letter, but a secure and protected connection ensured by a valid SSL certificate.

State of API Security

The current age is the age of science and technology, and with the advent of modern technology, the problems associated with it have grown to a great extent as well.

Application Programming Interfaces (APIs) have become all the rage nowadays, with enterprise developers now relying heavily on them to support the delivery of new products and services. With the number of APIs increasing, the amount of data that is passed on networks is also increasing.

Streaming ETL With Apache Flink – Part 1

Flink: as fast as squirrels

Introduction

After working on multiple projects involving batch ETL through polling data sources, I started working on streaming ETL. Streaming computation is necessary for use cases where real-time or near real-time analysis is required. For example, in IT Operations Analytics, it is paramount that Ops get critical alert information in real time, or within acceptable latency (near real time), to help them mitigate downtime or errors caused by misconfiguration.

While there are many introductory articles on Flink (my personal favorites are the blogs from Ivan Mushketyk), not many go into the details of streaming ETL and the advanced aspects of the Flink framework that are useful in a production environment.
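The core idea Flink generalizes, grouping an unbounded stream of events into finite windows and aggregating per key, can be sketched without Flink at all. The following is plain Python (an illustration of tumbling event-time windows, not Flink code; the event shape and window size are made up for the example):

```python
from collections import defaultdict

def tumbling_window_counts(events, window_size):
    """Group (timestamp, key) events into fixed-size event-time windows
    and count occurrences per key -- conceptually what a Flink
    keyBy(...).window(...).sum(...) pipeline does continuously."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        # Each event falls into exactly one non-overlapping window.
        window_start = ts - (ts % window_size)
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in windows.items()}

events = [(1, "ERROR"), (3, "WARN"), (7, "ERROR"), (12, "ERROR")]
print(tumbling_window_counts(events, 10))
# → {0: {'ERROR': 2, 'WARN': 1}, 10: {'ERROR': 1}}
```

The hard parts Flink adds on top of this sketch, and what the production-focused parts of this series cover, are out-of-order events, watermarks, state management, and fault tolerance.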

Configuring TLS and Resolving Errors

Today, we are going to discuss how to configure TLS and how to resolve the errors related to it. There are different versions of TLS.
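Those version choices show up directly in client configuration. As a sketch (using Python's stdlib `ssl` module for illustration), you can restrict the range of versions a connection may negotiate, which is a common first step when debugging handshake errors:

```python
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)

# Restrict the negotiable range; TLS 1.0 and 1.1 are deprecated.
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.maximum_version = ssl.TLSVersion.TLSv1_3

print(ctx.minimum_version.name)  # TLSv1_2
print(ctx.maximum_version.name)  # TLSv1_3
```

Many "handshake failure" errors come down to a client and server having no version (or cipher suite) in common, so pinning and logging these bounds narrows the search quickly.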

Protocol

What Is Server Name Indication and How Does it Work?


SNI, or Server Name Indication, is an extension to the TLS protocol, which in turn stands for Transport Layer Security. Basically, Server Name Indication allows the client to indicate the host where it wants to terminate the encrypted session.

It allows a server to present multiple certificates on the same IP address and TCP port number, and hence allows multiple secure (HTTPS) websites to be served from the same IP address without requiring all those sites to use the same certificate.
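On the client side, SNI is typically just one parameter. A minimal sketch using Python's stdlib `ssl` module (the hostname argument here is whatever site you are connecting to; the function is illustrative and makes a real network connection, so it is defined but not called):

```python
import socket
import ssl

def fetch_cert_subject(host: str, port: int = 443):
    """Connect with SNI: server_hostname is sent in the ClientHello's
    SNI extension, letting the server pick the right certificate, and
    is also checked against the certificate the server presents."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()["subject"]

# Any modern Python/OpenSSL build supports SNI; worth checking first.
print(ssl.HAS_SNI)
```

Because the server sees the requested hostname before choosing a certificate, one IP address can serve many HTTPS sites, which is what makes large-scale virtual hosting and CDNs practical over TLS.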

Kubernetes and Microservices Security

To understand the current and future state of Kubernetes (K8s) in the enterprise, we gathered insights from IT executives at 22 companies. We asked, "How does K8s help you secure containers?" Here’s what we learned.

Microservices, Security, and Kubernetes (K8s)

RBAC

  • K8s helps with authorization and authentication via workload identity. Role-based access control (RBAC) is provided out of the box, and a service mesh secures communication between microservices. 
  • K8s solves more problems than it creates. RBAC enforces relationships between resources, like pod security policies that control the level of access pods have to each other and how many resources pods can access. It can be complicated, but K8s provides the tools to build a more stable infrastructure.
  • RBAC mechanisms. Good security profile and posture. K8s provides access and mechanisms to use other things to secure containers.
  • K8s provides a pluggable infrastructure so you can customize it to the security demands of your organization. The API was purpose-built for extensibility to ensure, for example, that you can scan workloads before they go into production clusters. You can apply RBAC rules for who can access your environments, and you can use webhooks for all kinds of sophisticated desired-state security and compliance policies that mitigate operational, security, and compliance risk.
  • The advantage of K8s is in its open-source ecosystem, which offers several security tools like CIS Benchmarks, Network Policy, Istio, Grafeas, Clair, etc. that can help you enforce security. K8s also has comprehensive support for RBAC on System and Custom resources. Container firewalls help enforce network security to K8s. Due to the increased autonomy of microservices deployed as pods in K8s, having a thorough vulnerability assessment on each service, change control enforcement on the security architecture, and strict security enforcement is critical to fending off security threats. Things like automated monitoring-auditing-alerting, OS hardening, and continuous system patching must be done. 
  • The great part about the industry adopting K8s as the de facto standard for orchestration means that many talented people have collaboratively focused on building secure best practices into the core system, such as RBAC, namespaces, and admission controllers. We take the security of our own architecture and that of our customers very seriously, and the project and other open-source supporting projects releasing patches and new versions quickly in the event of common vulnerabilities and exposures (CVE) makes it possible for us to always ensure we are keeping systems fully up to date. We have built-in support for automated K8s upgrades. We are always rolling out new versions and patches and providing our users with the notifications and tooling to keep their systems up to date and secure.

Reduced Exposure

  • You have services deployed individually in separated containers. Even if someone gets into a container, they’re only getting into one microservice, rather than being able to get into an entire server infrastructure. Integration with Istio provides virtualization and security on the access side.
  • Since the beginning, containers have been perceived as a potential security threat because you are running these entities on the same machine with low isolation, so there is a perceived risk of data leakage from one container to another. I see the security benefits of containers and K8s outweighing the risks, because containers tend to be much smaller than VMs: a VM running NGINX runs a full OS with many processes and servers, while containers have far less exposure and a smaller attack surface. You can keep containers clean and small for a minimal attack surface. K8s has a lot of security functionality embedded in the platform, and once you turn it on, there are a lot of security features at your disposal. The microservices and container model is based on immutable infrastructure: you offer limited access to the container running the application, so you're able to lock down containers in a better way and stay in charge of what your application does. We are now using ML to understand what a service or set of containers is doing so we can lock the service down. 
  • You need to look at security differently with containers to be more secure. Containers allow you to reduce the attack surface with fewer privileges. Limit access to the production environment. K8s has a number of security mechanisms built-in like authentication and authorization to control access to resources. You need to learn to configure. Be careful about setting the right privileges.

K8s Security Policies

  • My team helps with full and automatic application discovery spread across multiple data centers and cloud, and creating clean infrastructure policies and runtime discovery. Dynamic policies help lock down containers and apps built on top of containers.
  • You’re only as secure as you design yourself to be in the first place. By automating concepts and constructs of where things go using rules and stabilizing the environment, it eliminates a lot of human errors that occur in a manual configuration process. K8s standardizes the way we deploy containers. 
  • Namespaces, pod security policies, and network-layer firewalling, where we just define a policy using Calico, plus kernel-level security that's easy for us since we're running on top of K8s. Kaniko, a tool that came out of Google, runs Docker image builds inside a container. 
  • 1) K8s helps us with features like namespaces to segregate networks, from a team perspective. 2) Second is network policies: this pod cannot talk to that database, which helps prevent misuse of software. 3) There are benefits from K8s protecting individual containers; this mitigates the problem of escaping from containers and helps you stay isolated.

Other

  • It’s not built-in per se; that’s why a number of security tools are coming up, for things like scanning Docker images, as K8s does not scan them itself. A number of security companies are coming out with continuous scanning of Docker images before they are deployed, looking for security vulnerabilities during the SDLC. DevSecOps moves security checking and scanning earlier in the development process, and tools are popping up to do that.
  • If you enable the security capabilities provided, it’s an advantage. There are capabilities in K8s that control whether you have the ability to pull up a container. It has to be set up correctly. You need to learn to use the capabilities. You need to think about the security of every container.
  • Security is a very important topic. One area is open source and the level of involvement and the number of developers involved can help drive greater security in the environment. Cloud-native security with the code and the cluster. For customers to leverage K8s in the cloud it changes the investment you have to make because you are inheriting the security capabilities of the cloud provider and dramatically lowering costs. K8s has API-level automation built-in.
  • Our container images are created using the Linux package security update mechanism to ensure the images include the latest security patches. Further, our container image is published to the Red Hat Container Catalog which requires these security measures to be applied as part of the publishing process. In addition, domain and database administrative commands are authenticated using TLS secure certificate authentication and LDAP, as well, domain meta-data, application SQL commands, and user data communications are all protected using the AES-256-CTR encryption cipher.
  • K8s provides only minimal structures for security, and it is largely the responsibility of implementers to provide security. You can build quite a lot on top of the Secrets API, such as implementing TLS in your applications or using it to store password objects.
  • K8s-orchestrated containerized environments and microservices present a large attack surface. The highly-dynamic container-to-container communications internal to these environments offer an opportune space for attacks to grow and escalate if they aren’t detected and thwarted. At the same time, K8s itself is drawing attackers’ attention: just last year a critical vulnerability exposing the K8s API server presented the first major known weakness in K8s security, but certainly not the last. To secure K8s environments, enterprises must introduce effective container network security and host security, which must include the visibility to closely monitor container traffic. Enterprise environments must be protected along each of the many vectors through which attackers may attempt entry. To do so, developers should implement security strategies featuring layer-7 inspection (capable of identifying any possible application layer issues). At the same time, the rise of production container environments that handle personally identifiable information (PII) has made data loss prevention a key security concern. This is especially true considering industry and governmental regulatory compliance requirements dictating how sensitive data must be handled and protected.

Here’s who shared their insights:

Code Signing Credentials Are Machine Identities and Need to Be Protected

The world is experiencing a digital transformation that is eclipsing all previous technological advancements. As more IT workloads move to the cloud, and as more IT services are containerized, they all need to be authenticated using cryptographic keys and digital certificates, or machine identities. Given the pace and scale of this new world of machines, protecting those machine identities is becoming increasingly critical to security. Although these changes affect every business, many organizations use outdated methods to protect the exponentially rising number of machine identities they now require. Those approaches simply can’t keep up.

How does this impact the security of code? There are many types of machine identities — TLS, SSH, mobile and more — that are used on many types of machines. When you look at it in this light, code is the ultimate "machine" that requires an authorized identity so that we can trust it. That is precisely why machine identities are so critical to the code signing process.

Why Manual Management of SSL\TLS Certs Destroys Security

For something so important, if you look around, you will find that the means and methods of managing SSL and TLS certificates are stuck somewhere in the past. Despite being the backbone of cybersecurity, IT folks often recount an alarming dependence on ad-hoc, manual, or semi-automated approaches to addressing this problem.

The range of tasks involved in certificate management is greater than one might assume at first glance. For example, someone has to purchase certificates and renew them when they expire, a time-consuming activity in and of itself. Then, there’s the actual deployment of certificates, etc.
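The renewal-tracking part, at least, is easy to script instead of managing by hand. A minimal sketch using Python's stdlib `ssl` helper for parsing certificate timestamps (the date strings here are made-up examples in the format certificates actually use):

```python
import ssl
import time

def days_until_expiry(not_after, now=None):
    """Parse a certificate's notAfter field (the format returned by
    ssl.SSLSocket.getpeercert()) and return the days remaining."""
    expires = ssl.cert_time_to_seconds(not_after)
    if now is None:
        now = time.time()
    return (expires - now) / 86400

# notAfter strings look like "Jun  1 12:00:00 2030 GMT".
remaining = days_until_expiry("Jun  1 12:00:00 2030 GMT")
print(round(remaining))
```

Running a check like this across an inventory of endpoints, and alerting well before the remaining days hit zero, is the kind of basic automation that manual, ad-hoc certificate management tends to lack.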

What Are the Stages of the Certificate Lifecycle?

Digital certificates are electronic credentials that are used to certify the identities of individuals, computers, and other entities on a network. Because they act as machine identities, digital certificates function similarly to identification documents such as passports and drivers’ licenses: just as passports and drivers’ licenses are issued by recognized government authorities, digital certificates are issued by recognized certification authorities (CAs).

Private and public networks are being used with increasing frequency to communicate sensitive data and complete critical transactions. This has created a need for greater confidence in the identity of the person, computer, or service on the other end of the communication. In addition, these valuable communications must be protected while they are on the network. Although accounts and strong passwords provide a certain level of assurance in the identity of the entity on the other end of the network, they offer little or no protection while data is in transit. In comparison, digital certificates and public key encryption identify machines and provide an enhanced level of authentication and privacy to digital communications.

Accelerate DevOps By Offering a Certificate Service for CI/CD Pipelines

Application development teams need to move fast. Yet they often have to reinvent the wheel when it comes to machine identities such as SSL/TLS certificates. They frequently create their own security infrastructure using a combination of OpenSSL, secrets management tools, DevOps platforms, and scripts. Then, as environments and tools change, apps are migrated, and regulatory frameworks evolve, those same developers need to spend time re-coding applications, updating scripts, or learning new certificate authority APIs.

Why Do Developers Reinvent the Wheel?

Developers prefer to stay within their existing toolchain and often view information security as a barrier rather than an enabler. Often, security processes for SSL/TLS certificates are antiquated and require manual steps, such as submitting a ticket, which are incompatible with dynamic, ephemeral DevOps environments. As a result, developers take on the burden of creating their own security infrastructure, even though they are not PKI experts. This diverts resources away from their core responsibilities, ultimately slowing them down.

When Machine Identities Go Bad

Managing machine identities, such as SSL/TLS certificates, is boring, right? It’s not inspiring work, and it’s easily overlooked or forgotten in the day-to-day onslaught of changes and incidents in a typical enterprise technology department. And they seem like such little things… but when certificates go bad, well, life can turn pretty dark. Here are some real-life nightmares that happened as the result of mismanaged machine identities.

1. Expired Certificates Delayed Breach Detection

The notorious breach at Equifax — talk about reputational damage, right? Nearly 150 million customer records were stolen, including dates of birth and Social Security numbers. That’s a lot of people having sleepless nights about ID fraud, thanks to an error somewhere in Equifax’s approach to cybersecurity. While the initial attack came through a Struts vulnerability (a common one I still frequently see during application scanning), detecting the breach took 76 days. The reason it took 76 days: misconfiguration of the device inspecting encrypted traffic on the network. The reason for the misconfiguration: a digital certificate that had expired ten months previously.

Still Using SHA-1 for Internal Certificates? It’s Almost Too Late to Update

How many organizations may have overlooked or delayed the migration of SHA-1 certificates in internal environments? They are hard to find, hard to track, and harder to monitor, and they may not have expiration dates that would force a migration.

Everyone who didn’t feel they had to worry too much about replacing those hard-to-find internal SHA-1 certificates will now have to start worrying. Microsoft is in the process of phasing out the use of the Secure Hash Algorithm 1 (SHA-1) for code-signing Windows OS updates. On February 15th, 2018, Microsoft announced that customers running legacy OS versions will be required to have SHA-2 code-signing support installed on their devices by July 2019.
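For context, "SHA-2" refers to the family that includes SHA-256, SHA-384, and SHA-512. SHA-1 produces a 160-bit digest and has publicly demonstrated collision attacks, which is why it is no longer acceptable for signatures. A quick sketch of the difference (the input bytes are an arbitrary example):

```python
import hashlib

data = b"example-update-package"

sha1 = hashlib.sha1(data).hexdigest()
sha256 = hashlib.sha256(data).hexdigest()

# SHA-1: 160-bit digest (40 hex chars); collisions have been demonstrated.
# SHA-256: 256-bit digest (64 hex chars); the SHA-2 member most commonly
# used for code-signing signatures today.
print(len(sha1), len(sha256))  # 40 64
```

The digest length itself is not the whole story, as the structural weaknesses found in SHA-1 matter more than its size, but a collision-prone hash inside a signature means two different binaries could carry the same "valid" signature, which is fatal for code signing.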

How to Take the Burden of Machine Identity Management Off the Backs of DevOps

When I moved into an apartment, I didn’t build scaffolding around the building to support a rope and pulley system to lift boxes of my furniture and belongings to the 19th floor. My stuff was put into an elevator with a dedicated shaft, supported by specifically designed mechanical infrastructure and a simple computer system. The latter way is much safer, more effective, and automated.

In my last post, I wrote about how many DevOps practitioners are still manually generating and managing their machine identities, especially TLS certificates. Think about all of the load balancers, servers, containers, virtual machines, and other network entities that are constantly launched and killed within a DevOps environment. They all need machine identities, yet some of those entities have lifespans of only a few hours.