How Containers Improve the Management of Embedded Linux Distros for IoT

The embedded Linux appliance industry is shifting from building innovative apps for low-cost, low-spec devices to one where powerful hardware runs more complex applications. While resource-intensive devices will become the norm, low-end devices still deliver the volume and form the backbone of the consumer side of today’s embedded Linux Internet of Things (IoT) ecosystem. With the explosion of connected IoT on the intelligent edge, it’s more important than ever to keep devices up to date and secure. We discuss the challenges embedded engineers face in managing firmware and applications on low-spec embedded devices. Finally, we’ll describe how containers and other cloud-native technologies can help automate IoT Linux distros and make them secure and portable.

Top Three Challenges in Managing Low-Spec Embedded Devices

#1. Keeping Embedded Systems Lean Across Diverse Hardware

Most embedded IoT devices are single-purpose, fitted with the minimal hardware needed to support their intended function. This diverse hardware often ships with as little as 32 MB of NAND, NOR, or eMMC flash storage and 64 MB of RAM. These constraints, along with the diversity of the hardware itself, can limit the devices' processing and networking capabilities.
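To make these budgets concrete, here is a minimal sketch (assuming a Linux target with `/proc/meminfo` available; the thresholds and mount point are illustrative, not mandated by any platform) that checks whether a device meets the low-spec floor described above:

```python
import os

MIN_RAM_KB = 64 * 1024      # 64 MB RAM floor (illustrative)
MIN_FLASH_KB = 32 * 1024    # 32 MB flash floor (illustrative)

def total_ram_kb():
    # /proc/meminfo reports MemTotal in kB on Linux
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal:"):
                return int(line.split()[1])
    raise RuntimeError("MemTotal not found in /proc/meminfo")

def total_storage_kb(mount_point="/"):
    # statvfs gives the fragment size and block count of the filesystem
    st = os.statvfs(mount_point)
    return st.f_frsize * st.f_blocks // 1024

if __name__ == "__main__":
    ram, flash = total_ram_kb(), total_storage_kb()
    print(f"RAM: {ram} kB, storage: {flash} kB")
    if ram < MIN_RAM_KB or flash < MIN_FLASH_KB:
        print("Below the low-spec floor discussed above")
```

A check like this might run at provisioning time; on the tightest devices, the same idea would more likely live in a shell script baked into the image.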

Toward a Universal Embedded Linux System

At a recent Linaro Connect event this past fall, Alexander Sack (@asacasa), CTO of Pantacor, delivered a talk on the Linux distro and its relevance in today's embedded Internet of Things (IoT) world. He gave us insightful context on the birth of Linux and the embedded world, and where both are going today, drawing parallels between the history of the Linux distro and the way the embedded development ecosystem is changing. Much like Linux in its early days, the embedded Linux world needs to embrace automation and take advantage of containerization in order to make infrastructure frictionless and invisible.

Alexander started us off with an overview of how Linux began and how it progressed from a hobbyist and tinkerer platform to the reliable, secure OS that today essentially runs the cloud. From the early aughts onward, distributions like Red Hat, Debian, SUSE, and others set out to make Linux reliable, easy to use, and secure. These distributions were created by large, vibrant communities of developers who donated their free time to open source Linux projects. Even though Linux gained a lot of traction in those early days, it still took considerable effort and technical ability to integrate a distribution before you could deploy it on a server and run your applications.

Bridging the Cloud and Embedded Developer Worlds

Embedded developers haven't always followed the same path as traditional software developers. However, the introduction of cloud and cloud-native technologies like containerization is bringing these two groups together. Embedded developers seek the benefits of Linux and containers, and the proliferation of IoT devices means we need to expand talent in both directions. 

In a recent interview with Mitch Ashley (@techstrongGroup) of TechStrong TV, Ricardo Mendoza (@ricmm), CEO of Pantacor, discussed his vision of bridging the embedded and cloud developer worlds through an open-source platform with containers and DevOps for IoT developers.

Why You Should Care About Service Meshes

Many developers wonder why they should care about service meshes. It's a question I'm asked often in my presentations at developer meetups, conferences, and hands-on workshops about microservices development with cloud-native architecture. My answer is always the same: "As long as you want to simplify your microservices architecture, it should be running on Kubernetes."

Concerning simplification, you may also wonder why distributed microservices must be designed with such complexity to run on Kubernetes clusters. As this article explains, many developers tame that complexity with a service mesh, and gain additional benefits by adopting one in production.

Automating the Operation of Stateful Apps in Kubernetes With GitOps

Making stateful apps more manageable.

Scale and velocity are the main drivers behind Kubernetes adoption. Kubernetes today allows companies to run thousands of cloud-native applications, including stateful applications like databases. But that also means managing complex workloads within large cloud-native systems can be a daunting task, especially when it comes to rolling updates or migrations. How can software teams manage stateful applications and their many operational tasks in an efficient, predictable, and reliable manner?


A few months ago, Ryan Wallner (Technical Advocate, Portworx) and Sebastian Bernheim (Customer Success, Weaveworks) spoke on how GitOps can help automate the operation of stateful applications in Kubernetes.

Moving Towards a Standard Operating Model for Kubernetes

In an episode of “Let’s Talk” hosted by Swapnil Bhartiya, Weaveworks COO Steve George discusses what Weaveworks solves for Kubernetes users.

“We’ve spoken to a lot of quite advanced Kubernetes users who have got Kubernetes up and running. They are operating in production, but their biggest complaint is that it takes too much time and effort. Most users just want to get on with deploying their applications into Kubernetes and operating them easily,” said Steve George of Weaveworks.

Everyone understands how to deploy, monitor, manage, and look after Linux distributions. But in the Kubernetes world, nothing is standardized. People do things with custom-built tools. Everyone's building their own house in their own way. “What Weaveworks is doing is providing standardized workflows for how to deploy, configure, monitor, update and look after Kubernetes,” he said.

Introduction to Kubernetes Security

Kubernetes is fundamentally a complex system with many potential attack vectors: data theft, currency mining, and other threats. Brice Fernandes started us off with a discussion of how to secure your operations on Kubernetes using GitOps best practices. Liz Rice then followed up on the current state of Kubernetes security-related features, as well as best practices and other tips for securing your cluster.

GitOps Is an Operations Model for Kubernetes

According to Brice, Kubernetes clusters have traditionally been accessed by developers directly, using the command-line tool `kubectl`. There are, of course, many issues with having your development team access the cluster directly in this way. The biggest is that it becomes really hard to audit and track who did what, and when.
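By contrast, when every change flows through a Git repository, the audit trail falls out of version control for free. The sketch below (assuming a local checkout of a hypothetical config repository and a standard `git` binary) reconstructs who changed which manifests, and when:

```python
import subprocess

def audit_trail(repo_path, path="manifests/"):
    """Print who changed the cluster config, when, and in which commit."""
    # Format each commit as hash|author|ISO date|subject for easy parsing
    out = subprocess.run(
        ["git", "-C", repo_path, "log", "--format=%h|%an|%aI|%s", "--", path],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.strip().splitlines():
        commit, author, date, subject = line.split("|", 3)
        print(f"{date}  {author:<20}  {commit}  {subject}")

# Example (hypothetical repo path):
# audit_trail("/srv/cluster-config")
```

There is no equivalent record to query when changes are applied ad hoc with `kubectl`.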

A Production-Ready Checklist for Kubernetes

How do you know when you're ready to run your Kubernetes cluster in production? In this blog series, we're going to look at what's typically included in a Production Readiness checklist for your cluster and your app.

These checklists were put together by Brice Fernandes (@fractallambda), a Weaveworks customer success engineer. If you're lucky enough to attend an upcoming hands-on workshop led by Brice, production readiness will be a topic he'll dive deep into.

Securing Developer Workflows

A few weeks ago, Weaveworks and Snyk delivered a webinar entitled "Secure GitOps pipelines for Kubernetes." The theme of the webinar was how to improve the security of your development workflows, from Git to production.

Brice Fernandes, Customer Success Engineer at Weaveworks, kicked off the talks with an in-depth look at what GitOps is and how it improves the overall security of your CI/CD pipelines.

Continuous Security for GitOps

Earlier this month, Weaveworks hosted a webinar on securing your GitOps pipelines. Speakers included Andrew Martin (@sublimino) of ControlPlane as well as Weaveworks’ customer success engineer, Brice Fernandes (@fractallambda).

Brice gave us an overview of what GitOps is, and why it is a logical and more secure way for large development teams to update applications in Kubernetes.

Feedback and Control: An Essential GitOps Component

An important and often overlooked component of GitOps is the feedback and control loop. In this post, we’re going to take a look at what this means and why it is essential to GitOps. If you don’t have a way of monitoring, observing, and alerting on divergences from your ‘source of truth,’ you may not be doing GitOps at all.
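As a toy illustration of such a loop (not the implementation of any particular GitOps tool; the deployment name, namespace, and manifest path are invented), the sketch below compares the replica count declared in a Git checkout with the live state reported by `kubectl` and flags any divergence:

```python
import json
import subprocess

def live_replicas(deployment, namespace="default"):
    # Ask the cluster for the actual state of the deployment
    out = subprocess.run(
        ["kubectl", "get", "deployment", deployment,
         "-n", namespace, "-o", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)["spec"]["replicas"]

def desired_replicas(manifest_path):
    # The source of truth: the manifest as committed to Git. Stored as
    # JSON here to keep the sketch dependency-free; real tools read YAML.
    with open(manifest_path) as f:
        return json.load(f)["spec"]["replicas"]

def check_drift(deployment, manifest_path):
    want = desired_replicas(manifest_path)
    have = live_replicas(deployment)
    if want != have:
        # A real system would page someone or trigger reconciliation
        print(f"DRIFT: {deployment} declares {want} replicas, cluster has {have}")
    else:
        print(f"OK: {deployment} matches Git ({want} replicas)")

# Example (hypothetical paths):
# check_drift("webapp", "repo/manifests/webapp.json")
```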

GitOps and Continuous Delivery

Briefly, GitOps is a way to do continuous delivery. It is an operations model for managing your deployments to Kubernetes. It works by keeping all of your declarations in Git, alongside your code. Making a change to your cluster or to your code requires an approved pull request. And, as we’ll expand on in this post, GitOps done right means that your system can monitor and alert you whenever there is a difference between the ‘source of truth’ kept in Git and your running cluster.
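To make the pull-request gate concrete, here is a hypothetical CI check (the file layout and required fields are illustrative assumptions) that validates declarations before a change can merge, so only well-formed state ever reaches the cluster:

```python
import json
import sys

REQUIRED = ["apiVersion", "kind", "metadata", "spec"]

def validate(manifest_path):
    """Fail the PR if a declaration is missing required top-level fields."""
    with open(manifest_path) as f:
        doc = json.load(f)
    missing = [k for k in REQUIRED if k not in doc]
    if missing:
        print(f"{manifest_path}: missing {', '.join(missing)}")
        return False
    print(f"{manifest_path}: ok")
    return True

if __name__ == "__main__":
    # Run by CI against the manifests touched in the pull request,
    # e.g. python validate.py repo/manifests/*.json
    ok = all(validate(p) for p in sys.argv[1:])
    sys.exit(0 if ok else 1)
```

A real pipeline would typically layer schema validation and policy checks on top of a structural check like this.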

Going Cloud-Native: 6 Essential Things You Need to Know

Are you just starting on your digital transformation journey and wondering what cloud-native is and why you need it? We just added a new page to our Kubernetes library, called "Going Cloud-Native: 6 Essential Things You Need to Know." This article covers the key things to know about the term "cloud-native." It also describes how you can take advantage of cloud-native capabilities to boost your development team’s productivity and increase your company’s innovation output.

A Brief History of Cloud-Native

Depending on who you ask, cloud-native can mean a lot of different things. The term was coined roughly ten years ago by companies like Netflix, which leveraged cloud technologies to go from a mail-order company to one of the world’s largest consumer on-demand content delivery networks. Netflix pioneered what we’ve come to call cloud-native, reinventing, transforming, and scaling how we all want to be doing software development.

Kubernetes — The Fast and Furious?

Alexis Richardson recently spoke at the two-day Container Stack event in Zurich, describing the role of the CNCF and what it means to be cloud-native. After a brief introduction to the CNCF and its role, Alexis explained the types of projects the foundation hosts and how they get accepted, before diving into how enterprises can benefit from adopting cloud-native technology.

CNCF — A Brief History

The Cloud Native Computing Foundation (CNCF) was created three years ago as the home of Kubernetes. Kubernetes builds upon 15 years of experience running production workloads at Google, combined with best-of-breed ideas and practices from the community.

Delivering Quality at Speed With GitOps

At the inaugural online summit "Cloud Native Live" hosted by our friends at Twistlock, Weaveworks Customer Success Engineer, Brice Fernandes (@fractallambda) presented "Delivering Quality at Speed with GitOps".

Brice discussed how, by introducing and implementing GitOps best practices in your Kubernetes deployment pipelines, DevOps teams can gain velocity without sacrificing quality.