Running and Debugging a Python Barcode App in a Docker Container

When creating Python barcode extensions using CPython and the Dynamsoft Barcode Reader, one of the major concerns you have to face is testing your code's compatibility with different versions of Python.

To test whether your Python barcode extension works on Windows, Linux, or macOS, you would need each version of Python installed on your computer, which is time-consuming.
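This is where Docker helps: each official Python image on Docker Hub gives you an isolated interpreter, so you can exercise the same extension against several Python versions without installing any of them locally. A minimal sketch, assuming your test script is named test.py and lives in the current directory:

# Run the test suite against several Python versions using official images.
# test.py and the mounted path are placeholders for your own project.
for VERSION in 3.7 3.8 3.9 3.10
do
  docker run --rm -v "$PWD":/app -w /app python:$VERSION python test.py
done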

How To Run a Docker Container on the Cloud: Top 5 CaaS Solutions

In the past few years, a growing number of organizations and developers have joined the Docker journey. Containerization simplifies the software development process because it eliminates dealing with dependencies and working with specific hardware. The biggest advantage of using containers, though, comes down to the portability they offer. But it can be quite confusing to work out how to run a container on the cloud. You could certainly deploy these containers to servers at your cloud provider using Infrastructure as a Service (IaaS). However, this approach only takes you back to the issue we mentioned previously: you'd have to maintain those servers yourself, when there's a better way to do it.

Table of Contents 

  • How To Run a Docker Container on the Cloud
    • Using a Container Registry
    • Using Container-as-a-Service
  • Why Should I Use CaaS?
  • What Are the Best CaaS Solutions?
    • AWS ECS
    • AWS Lambda
    • AWS App Runner
    • Azure Container Instances
    • Google Cloud Run
  • Conclusion

Toward a Universal Embedded Linux System

At a recent Linaro Connect event that took place this past fall, Alexander Sack (@asacasa), CTO of Pantacor, delivered a talk on the Linux distro and its relevance in today's embedded world of the Internet of Things (IoT). Alexander gave us insightful context on the birth of Linux and the embedded world, and where it is going today. He spoke on the history of the Linux distro and drew parallels with how the embedded development ecosystem is changing. Much like the early days of Linux, the embedded Linux world also needs to embrace automation and take advantage of containerization in order to make infrastructure frictionless and invisible.

Alexander started us off with an overview of how Linux began and how it has progressed from a hobbyist/tinkerer platform to a reliable and secure OS that today basically runs the cloud. From the early aughts (the '00s) onward, there were many different distributions, like Red Hat, Debian, SUSE, and others, whose goal was to make Linux reliable, easy to use, and secure. These distributions were created by large, vibrant communities of developers who donated their free time to contribute to open source Linux projects. Even though Linux gained a lot of traction in those early days, it still took quite a bit of effort and technical ability to integrate a distribution before you could deploy it and use it on a server to run your applications.

How to Move to Containers Running IBM App Connect Enterprise

Many enterprises have IBM Integration Bus environments running hundreds of integration flows in production. You have likely read about the benefits of moving to containers, and perhaps more generally of agile integration, and you'd like to explore that. You'd also like to move to a more recent version of the product (now named IBM App Connect Enterprise). However, it is likely you have little or no background in container technology. How do you take the first steps to explore these new platforms and product versions?

In this series, we are going to describe how you move to containers running IBM App Connect Enterprise. We’ll build up to more complex examples, but for this first one we’ll take the simplest possible flow, and we’ll use a Docker container environment that can easily be run on a laptop.
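To give a taste of that first step, here is a sketch of pulling and starting an ACE server container. The image name, ports, and environment variable follow IBM's published examples at the time of writing, but verify them against the current documentation:

# Pull and run an App Connect Enterprise server container.
# Port 7600 serves the admin UI and 7800 the deployed HTTP flows.
docker pull ibmcom/ace-server
docker run --name aceserver -d -p 7600:7600 -p 7800:7800 --env LICENSE=accept ibmcom/ace-server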

Implementing Non-Trivial Containerized Systems (Part 2): Containerizing With Docker

This is the second part of a multi-part series on designing and building non-trivial containerized solutions. We're making a radio station using off-the-shelf components and some home-spun software, all on top of Docker, Docker Compose, and eventually, Kubernetes.

In this part, we're going to take the architecture we cooked up in the last part, and see if we can't get it running in containers, using nothing but Docker itself.

Today, we're going to build a radio station. We drew up the architecture in our last post.

Roll up your sleeves, we're gonna start building some containers.

Interaction With Autonomous Database via Docker Container

In this article, I will show you how to access the Autonomous Database service, one of the database services offered on Oracle Cloud Infrastructure, through a Docker image. I hope it will be a useful article in terms of awareness.

As we have all seen, container technologies are one of the indispensable components of the application development world. They have long been the main trigger of transformation in application development thanks to the opportunities and advantages they offer. For this reason, software developers continue to build their solutions on containers.
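As a preview of where this is heading, one common pattern is to run Oracle's Instant Client image and connect with sqlplus, mounting the wallet downloaded from the Autonomous Database console. This is only a sketch; the image path, user, and service name below are assumptions to adapt to your own tenancy:

# Run sqlplus from Oracle's Instant Client container image, pointing
# TNS_ADMIN at the mounted Autonomous Database wallet.
# The image path, user, and service name are placeholders.
docker run -it --rm \
  -v "$PWD/wallet":/wallet \
  -e TNS_ADMIN=/wallet \
  container-registry.oracle.com/database/instantclient:latest \
  sqlplus admin@myadb_high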

Implementing Non-Trivial Containerized Systems, Part 1: Picking Components

So, you want to start a radio station, eh?

This is the first part of a multi-part series on designing and building non-trivial containerized solutions. We're making a radio station using off-the-shelf components and some home-spun software, all on top of Docker, Docker Compose, and eventually, Kubernetes.

In this part, we're going to explore how the different parts of the system interface with one another, to set the stage for our next post, where we Dockerize everything!

I first met Icecast (https://icecast.org/) when I worked at a web-hosting startup around the turn of the millennium. One night, one of my co-workers and I had the crazy idea to load a bunch of audio files on the networked file server and stream them to our workstations. We could listen to music while we worked 90+ hours a week. Strange times. After realizing it wasn't as simple as exporting .ogg files over HTTP, we found Icecast (and its pal, Ices2) and built a rudimentary, local-network broadcast radio station.

How To Build Docker Images for Windows Desktop Applications

Introduction

It used to be that people first downloaded their software onto a physical computer and then ran it. Now, with cloud computing, you no longer need to worry about awkward downloads. Instead, you can use all the same services online from anywhere and see updates in real time.

Why Businesses Migrate Their Legacy Applications To the Cloud

  • Probably the first main reason for moving to the cloud is access to virtually unlimited computing resources. Cloud elasticity and scalability are essential elements of cloud computing. 
    • Cloud elasticity is the ability of a system to dynamically manage available resources based on current workload requirements.
    • Cloud scalability is the ability of a system's infrastructure to meet growing workload demands while maintaining consistent performance.
  • Moving from the legacy Windows app to cloud computing lets you work anytime and anywhere so long as you have an internet connection. A cloud-based web service is accessible from any device.
  • In the current pandemic situation, team members are forced to work from their home offices. Using the cloud, your teammates can open, edit, and share documents anytime and from anywhere; they can do more together and do it better. Before the advent of the cloud-based workflow, employees had to send files back and forth as email attachments, with only a single user able to work on a file at a time.
  • A public cloud provider owns the hardware infrastructure and is responsible for managing and maintaining it, so you don’t have to worry about maintenance. With a public cloud, you only need to focus directly on meeting your business goals.
  • Cloud computing reduces high hardware costs. You pay only for the actual consumption of resources.

Virtual Machines vs. Containers

Containers and virtual machines (VMs) are the two main approaches to deploying multiple isolated services in the cloud. So how are they different?

Running SonarQube Inside a Docker Container

Prerequisites

To follow this article, you will need Docker installed on your machine. I assume you are familiar with using Docker containers. I will be analyzing a Maven project, so please make sure you have a Maven project with some code that you can analyze.

Introduction

SonarQube is a popular tool for detecting bugs, vulnerabilities, security hotspots, and other violations of programming standards. You can use it to analyze your project's source code and keep your code in line with those standards. Using SonarQube is very easy: you can download the Sonar server from the official site, but in this article we will run SonarQube inside a Docker container to analyze our source code.
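If you want to follow along, the server starts with a single docker run, and the Maven Sonar plugin can then point at it. A sketch, where the authentication token is a placeholder you generate in the SonarQube UI:

# Start a SonarQube server on http://localhost:9000 (default login: admin/admin).
docker run -d --name sonarqube -p 9000:9000 sonarqube:lts

# From the Maven project, run an analysis against that server.
# The token below is a placeholder generated from your SonarQube account.
mvn sonar:sonar -Dsonar.host.url=http://localhost:9000 -Dsonar.login=<your-token>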

A Gentle Introduction to Using a Docker Container as a Dev Environment

Sarcasm disclaimer: This article is mostly sarcasm. I do not think that I actually speak for Dylan Thomas and I would never encourage you to foist a light theme on people who don’t want it. No matter how wrong they may be.

When Dylan Thomas penned the words, “Do not go gentle into that good night,” he was talking about death. But if he were alive today, he might be talking about Linux containers. There is no way to know for sure because he passed away in 1953, but this is the internet, so I feel extremely confident speaking authoritatively on his behalf.

My confidence comes from a complete overestimation of my skills and intelligence coupled with the fact that I recently tried to configure a Docker container as my development environment. And I found myself raging against the dying of the light as Docker rejected every single attempt I made like I was me and it was King James screaming, “NOT IN MY HOUSE!”

Pain is an excellent teacher. And because I care about you and have no other ulterior motives, I want to use that experience to give you a “gentle” introduction to using a Docker container as a development environment. But first, let’s talk about whyyyyyyyyyyy you would ever want to do that.

kbutwhytho?

Close your eyes and picture this: a grown man dressed up like a fox.

Wait. No. Wrong scenario.

Instead, picture a project that contains not just your source code, but your entire development environment and all the dependencies and runtimes your app needs. You could then give that project to anyone anywhere (like the fox guy) and they could run your project without having to make a lick of configuration changes to their own environment.

This is exactly what Docker containers do. A Dockerfile defines an entire runtime environment with a single file. All you would need is a way to develop inside of that container.
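To make that concrete, here is a minimal sketch of what such a Dockerfile might look like for a Node.js app; the base image, port, and commands are generic assumptions, not the file VS Code will generate later:

# A complete runtime environment, defined in one file.
FROM node:14
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]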

Wait for it…

VS Code and Remote – Containers

VS Code has an extension called Remote – Containers that lets you load a project inside a Docker container and connect to it with VS Code. That’s some Inception-level stuff right there. (Did he make it out?! THE TALISMAN NEVER ACTUALLY STOPS SPINNING.) It’s easier to understand if we (and by “we” I mean you) look at it in action.

Adding a container to a project

Let’s say for a moment that you are on a high-end gaming PC that you built for your kids and then decided to keep it for yourself. I mean, why exactly do they deserve a new computer again? Oh, that’s right. They don’t. They can’t even take out the trash on Sundays even though you TELL THEM EVERY WEEK.

This is a fresh Windows machine with WSL2 and Docker installed, but that’s all. Were you to try and run a Node.js project on this machine, PowerShell would tell you that it has absolutely no idea what you are referring to and maybe you misspelled something. Which, in all fairness, you do suck at spelling. Remember that time in 4ᵗʰ grade when you got knocked out of the first round of the spelling bee because you couldn’t spell “fried.” FRYED? There’s no “Y” in there!

Now this is not a huge problem — you could always skip off and install Node.js. But let’s say for a second that you can’t be bothered to do that and you’re pretty sure that skipping is not something adults do.

Instead, we can configure this project to run in a container that already has Node.js installed. Now, as I’ve already discussed, I have no idea how to use Docker. I can barely use the microwave. Fortunately, VS Code will configure your project for you — to an extent.

From the Command Palette, there is an “Add Development Container Configuration Files…” command. This command looks at your project and tries to add the proper container definition.

In this case, VS Code knows I’ve got a Node project here, so I’ll just pick Node.js 14. Yes, I am aware that 12 is LTS right now, but it’s gonna be 14 in [checks watch] one month and I’m an early adopter, as is evidenced by my interest in container technology just now in 2020.

This will add a .devcontainer folder with some assets inside. One is a Dockerfile that contains the Node.js image that we’re going to use, and the other is a devcontainer.json that has some project level configuration going on.
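For orientation, here is a stripped-down sketch of what that generated devcontainer.json looks like; the exact contents vary with the template version, so treat this as illustrative:

{
  "name": "Node.js 14",
  "dockerFile": "Dockerfile",
  "settings": {
    "terminal.integrated.shell.linux": "/bin/bash"
  }
}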

Now, before we touch anything and break it all (we’ll get to that, trust me), we can select “Rebuild and Reopen in Container” from the Command Palette. This will restart VS Code and set about building the container. Once it completes (which can take a while the first time if you’re not on a high-end gaming PC that your kids will never know the joys of), the project will open inside of the container. VS Code is connected to the container, and you know that because it says so in the lower left-hand corner.

Now if we open the terminal in VS Code, PowerShell is conspicuously absent because we are not on Windows anymore, Dorothy. We are now in a Linux container. And we can both npm install and npm start in this magical land.

This is an Express App, so it should be running on port 3000. But if you try and visit that port, it won’t load. This is because we need to map a port in the container to 3000 on our localhost. As one does.

Fortunately, there is a UI for this.

The Remote Containers extension puts a “Remote Explorer” icon in the Action Bar. Which is on the left-hand side for you, but the right-hand side for me. Because I moved it and you should too.

There are three sections here, but look at the bottom one, which says “Port Forwarding.” I’m not the sandwich with the most lettuce, but I’m pretty sure that’s what we want here. You can click on “Forward a Port” and type “3000.” Now if we try and hit the app from the browser…

Mostly, things “just worked.” But the configuration is also quite simple. Let’s look at how we can start to customize this setup by automating some aspects of the project itself. Project-specific configuration is done in the devcontainer.json file.

Automating project configuration

First off, we can automate the port forwarding by adding a forwardPorts variable and specifying 3000 as the value. We can also automate the npm install command by specifying the postCreateCommand property. And let’s face it, we could all stand to run AT LEAST one less npm install.

{
  // ...
  // Use 'forwardPorts' to make a list of ports inside the container available locally.
  "forwardPorts": [3000],
  // Use 'postCreateCommand' to run commands after the container is created.
  "postCreateCommand": "npm install",
  // ...
}

Additionally, we can include VS Code extensions. The VS Code that runs in the Docker container does not automatically get every extension you have installed. You have to install them in the container, or just include them like we’re doing here.

Extensions like Prettier and ESLint are perfect for this kind of scenario. We can also take this opportunity to foist a light theme on everyone because it turns out that dark themes are worse for reading and comprehension. I feel like a prophet.

// For format details, see https://aka.ms/vscode-remote/devcontainer.json or this file's README at:
// https://github.com/microsoft/vscode-dev-containers/tree/v0.128.0/containers/javascript-node-14
{
  // ...
  // Add the IDs of extensions you want installed when the container is created.
  "extensions": [
    "dbaeumer.vscode-eslint",
    "esbenp.prettier-vscode",
    "GitHub.github-vscode-theme"
  ]
  // ...
}

If you’re wondering where to find those extension IDs, they come up in IntelliSense (Ctrl/Cmd + Space) if you have them installed. If not, search the extension marketplace, right-click the extension, and select “Copy Extension ID.” Or even better, just select “Add to devcontainer.json.”

By default, the Node.js container that VS Code gives you has things like git and cURL already installed. What it doesn’t have is cowsay, and we can’t have a Linux environment without cowsay. That’s in the Linux bylaws (it’s not). I don’t make the rules. We need to customize this container to add that.

Automating environment configuration

This is where things went off the rails for me. In order to add software to a development container, you have to edit the Dockerfile. And Linux has no tolerance for your shenanigans or mistakes.

The base Docker container that you get with the container configurations in VS Code is Debian Linux. Debian Linux uses the apt-get package manager.

apt-get install cowsay

We can add this to the end of the Dockerfile. Whenever you install something from apt-get, run an apt-get update first. This command updates the list of packages and package repos so that you have the most current list cached. If you don’t do this, the container build will fail and tell you that it can’t find “cowsay.”

# To fully customize the contents of this image, use the following Dockerfile instead:
# https://github.com/microsoft/vscode-dev-containers/tree/v0.128.0/containers/javascript-node-14/.devcontainer/Dockerfile
FROM mcr.microsoft.com/vscode/devcontainers/javascript-node:0-14
# ** Install additional packages **
RUN apt-get update \
  && apt-get -y install cowsay

A few things to note here…

  1. That RUN command is a Docker thing and it creates a new “layer.” Layers are how the container knows what has changed and what in the container needs to be updated when you rebuild it. They’re kind of like cake layers except that you don’t want a lot of them because enormous cakes are awesome. Enormous containers are not. You should try and keep related logic together in the same RUN command so that you don’t create unnecessary layers.
  2. That \ denotes a line break at the end of a line. You need it for multi-line commands. Leave it off and you will know the pain of many failed Docker builds.
  3. The && is how you add an additional command to the RUN line. For the love of god, don’t forget that \ on the previous line.
  4. The -y flag is important because by default, apt-get is going to prompt you to ensure you really want to install what you just tried to install. This will cause the container build to fail because there is nobody there to say Y or N. The -y flag is shorthand for “don’t bother me with your silly confirmation prompts”. Apparently everyone is supposed to know this already. I didn’t know it until about four hours ago. 

Use the Command Palette to select “Rebuild Container”…

And, just like that…

It doesn’t work.

This is the first lesson in what I like to call “Linux Vertigo.” There are so many distributions of Linux, and they don’t all handle things the same way. It can be difficult to figure out why things work in one place (Mac, WSL2) and don’t work in others. The reason why “cowsay” isn’t available is that Debian puts “cowsay” in /usr/games, which is not included in the PATH environment variable.

One solution would be to add it to the PATH in the Dockerfile. Like this…

FROM mcr.microsoft.com/vscode/devcontainers/javascript-node:0-14
RUN apt-get update \
  && apt-get -y install cowsay
ENV PATH="/usr/games:${PATH}"

EXCELLENT. We’re solving real problems here, folks. People like cow one-liners. I bullieve I herd that somewhere.

To summarize, project configuration (forwarding ports, installing project dependencies, etc.) is done in the “devcontainer.json” and environment configuration (installing software) is done in the “Dockerfile.” Now let’s get brave and try something a little more edgy.

Advanced configuration

Let’s say for a moment that you have a gorgeous, glammed out terminal setup that you really want to put in the container as well. I mean, just because you are developing in a container doesn’t mean that your terminal has to be boring. But you also wouldn’t want to reconfigure your pretentious zsh setup for every project that you open. Can we automate that too? Let’s find out.

Fortunately, zsh is already installed in the image that you get. The only trouble is that it’s not the default shell when the container opens. There are a lot of ways that you can make zsh the default shell in a normal Docker scenario, but none of them will work here. This is because you have no control over the way the container is built.

Instead, look again to the trusty devcontainer.json file. In it, there is a "settings" block. In fact, there is a line already there showing you that the default terminal is set to "/bin/bash". Change that to "/bin/zsh".

// Set *default* container specific settings.json values on container create.
"settings": {
  "terminal.integrated.shell.linux": "/bin/zsh"
}

By the way, you can set ANY VS Code setting there. Like, you know, moving the sidebar to the right-hand side. There – I fixed it for you.

// Set default container specific settings.json values on container create.
"settings": {
  "terminal.integrated.shell.linux": "/bin/zsh",
  "workbench.sideBar.location": "right"
},

And how about those pretentious plugins that make you better than everyone else? For those you are going to need your .zshrc file. The container already has oh-my-zsh in it, and it’s in the “root” folder. You just need to make sure you set the path to ZSH at the top of the .zshrc so that it points to root. Like this…

# Path to your oh-my-zsh installation.
export ZSH="/root/.oh-my-zsh"


# Set name of the theme to load --- if set to "random", it will
# load a random theme each time oh-my-zsh is loaded, in which case,
# to know which specific one was loaded, run: echo $RANDOM_THEME
# See https://github.com/ohmyzsh/ohmyzsh/wiki/Themes
ZSH_THEME="cloud"


# Which plugins would you like to load?
plugins=(zsh-autosuggestions nvm git)


source $ZSH/oh-my-zsh.sh

Then you can copy in that sexy .zshrc file to the root folder in the Dockerfile. I put that .zshrc file in the .devcontainer folder in my project.

COPY .zshrc /root/.zshrc

And if you need to download a plugin before you install it, do that in the Dockerfile with a RUN command. Just remember to group all of these into one command since each RUN is a new layer. You are nearly a container expert now. Next step is to write a blog post about it and instruct people on the ways of Docker like you invented the thing.

RUN git clone https://github.com/zsh-users/zsh-autosuggestions ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-autosuggestions
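And if you later collect more plugins, they can share one RUN. A sketch with a second, purely illustrative plugin added:

# Group related downloads into a single RUN so they share one layer.
RUN git clone https://github.com/zsh-users/zsh-autosuggestions ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-autosuggestions \
  && git clone https://github.com/zsh-users/zsh-syntax-highlighting ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-syntax-highlighting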

Look at the beautiful terminal! Behold the colors! The git plugin which tells you the branch and adds a lightning emoji! Nothing says, “I know what I’m doing” like a customized terminal. I like to take mine to Starbucks and just let people see it in action and wonder if I’m a celebrity.

Go gently

Hopefully you made it to this point and thought, “Geez, this guy is seriously overreacting. This is not that hard.” If so, I have successfully saved you. You are welcome. No need to thank me. Yes, I do have an Amazon wish list.

For more information on Remote Containers, including how to do things like add a database or use Docker Compose, check out the official Remote Container docs, which provide much more clarity with 100% less neurotic commentary.



Understanding Docker Concepts

Hey guys,

As a fresher, I faced a lot of challenges understanding Docker concepts, like how Docker containers work and what they actually are. But as I grew and practiced Docker and its concepts, I came to understand what they actually mean, how they are formed, and how they are managed. In the video above, I have tried my level best to help you understand what a Docker container actually means, along with some basic Docker commands to run. Please excuse the audio quality, as I made the entire video with minimal equipment.
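For reference, these are the kind of basic commands such an introduction typically covers; all of them are standard Docker CLI:

docker pull nginx                 # download an image from Docker Hub
docker run -d -p 8080:80 nginx    # create and start a container from it
docker ps                         # list running containers
docker stop <container-id>        # stop a running container
docker rm <container-id>          # remove a stopped container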

Docker Commands to Containerize an Application

Docker is an exciting technology for developers who are designing their applications in a cloud-native way. One of the key characteristics of a cloud-native application is that it is containerized. Designing applications this way will save you from hearing some familiar complaints from other developers during development:

  • "The application is not working in my local machine!"
  • "I'm facing version conflicts."
  • "Libraries are missing."

Every time we onboard new developers to our team, we have to fix several issues to get the applications building and running on a new machine, which lengthens the onboarding period and pulls another expert away from their own deliverables.
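The core workflow the title promises is short. A sketch, assuming a Dockerfile already exists in the project root; the image name and port are placeholders:

# Build an image from the Dockerfile in the current directory.
docker build -t myapp:1.0 .

# Run it, mapping container port 8080 to the host.
docker run -d --name myapp -p 8080:8080 myapp:1.0

# Tail the application logs to confirm it started.
docker logs -f myapp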

Docker Images and Containers

When we work with Docker, it is important to have basic knowledge of Docker images and containers. Docker containers are created from Docker images. We are going to look at basic commands for creating Docker containers from images.

Docker Image

An image is a read-only template with instructions for creating a Docker container. It is a combination of a file system and parameters. Often, an image is based on another image, with some additional customization. We can use existing images or create our own.
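For example, the following standard commands pull an existing image and create containers from it:

docker pull ubuntu:20.04          # fetch a read-only image from Docker Hub
docker run -it ubuntu:20.04 bash  # create a container from the image and open a shell
docker images                     # list the images available locally
docker ps -a                      # list containers, both running and stopped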

Running Linux SQL Server as a Container

Phil Factor starts a series of articles that demonstrate the use of temporary SQL Server instances running in Linux containers, into which we can deploy the latest database build, stocked with data, for development and testing work. This initial article shows how to set up a SQL Server instance inside a Linux Docker container, create some sample databases, and persist data locally.

Running Linux SQL Server as a container in a Windows virtual machine is valuable for development work. It saves time in setting up development and test environments, and it provides a standard environment for running databases. There are limitations, however. Currently, Windows authentication isn't supported, and running containerized database applications in production isn't generally recommended due to the increasing isolation from the filesystem, which can affect caching and the server's response to filesystem failures.
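The basic setup the series builds on looks like the sketch below; the password is a placeholder, and the named volume is what persists the databases locally across container restarts:

# Run SQL Server 2019 for Linux, persisting data in a local named volume.
# Replace SA_PASSWORD with a strong password of your own.
docker run -d --name sqlserver \
  -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=YourStrong!Passw0rd" \
  -p 1433:1433 \
  -v sqlvolume:/var/opt/mssql \
  mcr.microsoft.com/mssql/server:2019-latest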

Docker How-to: Custom Authentication to A Private Docker Registry With NGINX, Lua, and AWS ECR

Background

Our engineering team developed a Docker container for our application, Kloudless Enterprise, to simplify cluster deployment using industry-standard tools like Docker Swarm or Kubernetes.

However, our customers found downloading the container from our web portal was somewhat inconvenient. Users previously had to download the archived image and manually load it into their Docker daemon to use it. There also wasn’t a way to check which images were available without visiting the portal through a browser. To improve the experience, we decided to provide a private Docker Registry that would allow our users to not only pull images, but also query tags and take advantage of other useful features that the Docker Registry provides.
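From a user's point of view, the payoff is the standard registry workflow in place of manual image loading. A sketch with an illustrative registry host and repository name:

# Authenticate against the private registry, then pull the image directly.
# The registry host and repository path are placeholders.
docker login registry.example.com
docker pull registry.example.com/kloudless/enterprise:latest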