Secure Coding Best Practices

Every day, security researchers and analysts uncover a wide range of new software vulnerabilities. Many of these vulnerabilities emerge because secure coding practices were not followed. Exploiting them can have severe consequences: attackers can damage a business's financial or physical assets, erode trust, or disrupt critical services.

For organizations that rely on software to run their operations, it is imperative that their developers embrace secure coding practices. Secure coding is a collection of practices that software developers adopt to fortify their code against cyberattacks and vulnerabilities. By adhering to coding standards that embody these best practices, developers can incorporate safeguards that minimize the risks posed by vulnerabilities in their code.
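To make this concrete, here is a minimal sketch of one such practice: using parameterized database queries to prevent SQL injection. The `users` table and its columns are hypothetical, and the snippet uses Python's built-in sqlite3 module purely for illustration.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable: user input is concatenated directly into the SQL string,
    # so an attacker can inject arbitrary SQL (e.g. "' OR '1'='1").
    query = "SELECT id, email FROM users WHERE username = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Safer: a parameterized query treats the input as data, not executable SQL,
    # and lets the database driver handle escaping.
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()
```

The same principle, never building executable statements from raw user input, applies to shell commands, LDAP filters, and templates as well.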

Taking AI/ML Ideas to Production

The integration of AI and ML into products has become a major trend in recent years. Companies are incorporating these technologies into their products to improve efficiency and performance, and this year, particularly with the boom of ChatGPT, almost every company is trying to introduce a feature in this domain. One of the main benefits of AI and ML is their ability to learn and adapt: they can analyze data and use it to improve their performance over time, which means products that incorporate these technologies can become smarter and more efficient as they mature.

Let's now look at how companies take these ideas to production. Usually, they start by hiring a few data scientists who figure out which models to build to solve the problem, fine-tune them, and hand them over to MLOps or DevOps engineers to deploy. Your DevOps engineers may or may not know how to take these models to production efficiently. That's where you need specialized roles, such as machine learning engineers and MLOps engineers, who understand how to manage the whole CI/CD/CT pipeline efficiently.
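As a rough illustration of the "CT" (continuous training) part of that pipeline, the sketch below shows the kind of automation an ML engineer might add: retrain and promote a model only when it stops meeting an accuracy bar on fresh data. It assumes scikit-learn and joblib are installed, and the data, threshold, and artifact path are made-up placeholders rather than anyone's production setup.

```python
# Hypothetical continuous-training (CT) check using scikit-learn and synthetic data.
import joblib
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

ACCURACY_THRESHOLD = 0.90  # assumed acceptance bar before a model is (re)deployed

def load_recent_data(seed: int = 0):
    # Stand-in for pulling freshly labeled production data.
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(500, 4))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    return X, y

def retrain_if_needed(model, X, y):
    # Retrain only when the current model falls below the threshold,
    # and keep the candidate only if it actually beats the current score.
    current_score = accuracy_score(y, model.predict(X))
    if current_score >= ACCURACY_THRESHOLD:
        return model
    candidate = LogisticRegression().fit(X, y)
    if accuracy_score(y, candidate.predict(X)) > current_score:
        joblib.dump(candidate, "model.joblib")  # artifact handed off to serving
        return candidate
    return model

if __name__ == "__main__":
    X, y = load_recent_data()
    model = LogisticRegression().fit(X[:50], y[:50])  # pretend this is the old model
    model = retrain_if_needed(model, X, y)
```

In a real pipeline, this check would be triggered on a schedule or by a data-drift signal, with the retrained artifact flowing through the same CI/CD gates as application code.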

DevOps Roadmap for 2022

In the last few weeks, I met some folks in my mentoring sessions, either new to DevOps or in the middle of their careers, who were interested in knowing what to learn in 2022. DevOps skills are in high demand, and constant learning is required to keep yourself in sync with the market.

This post shares notes that can help you, along with some guidance based on my experience and understanding.

Machine Learning Orchestration on Kubernetes Using Kubeflow

MLOps: From Proof Of Concepts to Industrialization

In recent years, AI and machine learning have seen tremendous growth across industries in a variety of innovative use cases, and they rank among the most important strategic trends for business leaders. When we adopt a new technology, the first step is usually small-scale experimentation on basic use cases; the next step is to scale up operations. Sophisticated ML models help companies efficiently discover patterns, uncover anomalies, make predictions and decisions, and generate insights, and they are increasingly becoming a key differentiator in the marketplace. Companies recognise the need to move from proof of concepts to engineered solutions, and to move ML models from development to production. However, there is often a lack of consistency in tools, and the development and deployment process is inefficient. As these technologies mature, we need operational discipline and sophisticated workflows to take advantage of them and operate at scale. This discipline is popularly known as MLOps, ML CI/CD, or ML DevOps. In this article, we explore how this can be achieved with the Kubeflow project, which makes deploying machine learning workflows on Kubernetes simple, portable, and scalable.
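As a small taste of what that looks like, here is a minimal, hypothetical pipeline definition using the Kubeflow Pipelines (kfp) v2 Python SDK. The component bodies are placeholders; the point is that steps are declared as typed Python functions, wired together, and compiled into a YAML spec that Kubeflow can run on Kubernetes.

```python
# Minimal Kubeflow Pipelines sketch, assuming the kfp v2 SDK is installed (pip install kfp).
from kfp import compiler, dsl

@dsl.component
def preprocess(rows: int) -> int:
    # Placeholder preprocessing step; pretend it cleans `rows` records.
    return rows

@dsl.component
def train(rows: int) -> str:
    # Placeholder training step; returns a fake model identifier.
    return f"model-trained-on-{rows}-rows"

@dsl.pipeline(name="demo-training-pipeline")
def training_pipeline(rows: int = 1000):
    # Wire the preprocessing output into the training step.
    prep = preprocess(rows=rows)
    train(rows=prep.output)

if __name__ == "__main__":
    # Compile to a YAML spec that can be uploaded to a Kubeflow Pipelines cluster.
    compiler.Compiler().compile(training_pipeline, "training_pipeline.yaml")
```

Each component runs as its own container on Kubernetes, which is what makes the workflow portable and scalable across environments.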

MLOps in Cloud Native World

Public cloud environments offer enterprise ML platforms such as Amazon SageMaker, Azure ML, Google Cloud AI, and IBM Watson Studio. For on-premises and hybrid environments, the most notable open-source platform is Kubeflow.

Service Mesh Comparison: Istio vs Linkerd

The latest CNCF annual survey makes it pretty clear that many people are highly interested in using a service mesh in their projects, and many are already using one in production. Nearly 69% are evaluating Istio and 64% are looking at Linkerd. Linkerd was the first service mesh in the market, but Istio made service meshes more popular. Both projects are cutting edge and very competitive, which makes selecting one a tough choice. In this blog post, we will learn more about the Istio and Linkerd architectures and their moving parts, and compare their offerings to help you make an informed decision.

Introduction to Service Mesh

Over the past few years, microservices architecture has become a popular style of designing software applications. In this architecture, we break down the application into independently deployable services. The services are usually lightweight, polyglot in nature, and often managed by different functional teams. This architectural style works well up to a certain point; once the number of services grows large, they become difficult to manage. Suddenly, they are not simple anymore. This leads to challenges in managing aspects like security, network traffic control, and observability. A service mesh helps address these challenges.