6 Ways Software Engineers Can Learn at Work

We know from multiple studies (e.g., 1, 2, 3) that learning at work improves both job satisfaction and organizational commitment. Both of these contribute to higher productivity and lower turnover.

Many companies use personal learning budgets and some type of "5/10/20 percent time" policy to encourage learning. These are useful but highly dependent on personal motivation and learning habits. It’s often hard for software engineers to prioritize learning over their busy day-to-day work. For example, only around 10 percent of Googlers actually use the company’s famous "20 percent time" policy.

3 Reasons You Should Talk About Release Schedules More Often

Release schedules drive many of the processes for IT teams. The problem is, business teams don’t like release schedules. Maybe it’s because they don’t understand the need for a formal process, or maybe they feel release cycles slow down the delivery of new features and fixes.

Whatever the reason, if you work as a developer or in DevOps, talking about release schedules with your business stakeholders is important.

3 Ways to Find Work-Life Balance in Your New Normal

You want to find the right work-life balance — but balance too often implies separating work and life into equal halves, which is nearly impossible to do in an age where they so easily bleed into one another. Who hasn’t booked a vacation from the office, or conversely, responded to a work email on vacation?

The recent shift to remote work has made it even more difficult to keep your personal and professional selves separate. You can no longer rely on a commute or the confines of office walls to divide your lives into parts. If this is really the new normal, your wellness relies on throwing away the notion of life and work as separate halves.

The Kano Model: Developing for Value and Delight

Even though we have been developing software on the mainframe platform for decades, we still have room to learn and improve. We continually face the problem of meeting user needs with the resources we have on hand. This forces us to be careful about what we choose to do and to ask whether we are focusing on the right things. While there are basic expectations that must be met, are we also providing things that excite and delight, the things that make users feel connected and generate passion? Users who feel a connection with your applications can get greater value from them.

How to provide things that excite while still delivering on basic needs is difficult to manage, but there are tools to help. One is the Kano model, created by Dr. Noriaki Kano, a Japanese educator, lecturer, writer, and consultant in the field of quality management, who sought to resolve these issues with a prioritization framework. The framework compares three patterns of customer expectations against the investments organizations make to delight their customers and positively impact customer satisfaction.
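
To make the three response patterns concrete, here is a minimal, illustrative Python sketch. The curve shapes and the 0-to-1 "functionality" scale are my own simplifying assumptions, not Kano’s original formulation; they only caricature how basic needs, performance needs, and delighters map delivery onto satisfaction.

```python
import math

# Illustrative only: "functionality" is how completely a feature is delivered
# (0.0 to 1.0) and "satisfaction" runs from -1 (angry) to +1 (delighted).
# The specific formulas are assumptions chosen to mimic the familiar Kano curves.

def basic_need(functionality: float) -> float:
    """Must-be features: absence infuriates, full delivery only avoids anger."""
    return -math.exp(-4 * functionality)

def performance_need(functionality: float) -> float:
    """One-dimensional features: satisfaction grows roughly in proportion."""
    return 2 * functionality - 1

def delighter(functionality: float) -> float:
    """Attractive features: absence is neutral, presence excites."""
    return math.expm1(4 * functionality) / math.expm1(4)

for f in (0.0, 0.5, 1.0):
    print(f"functionality={f:.1f} "
          f"basic={basic_need(f):+.2f} "
          f"performance={performance_need(f):+.2f} "
          f"delighter={delighter(f):+.2f}")
```

Running it shows the asymmetry the model is built on: fully delivering a basic need only brings users back to roughly neutral, while even partial delivery of a delighter adds satisfaction on top.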

Business optimisation architecture – Common architectural elements

In my previous article in this series, I introduced a use case around business optimisation for retail stores. It laid out how I’ve approached the use case and how portfolio solutions form the basis for researching a generic architectural blueprint. The only thing left to cover was the order in which you’ll be led through the blueprint details.

This article starts the real journey at the very top, with a generic architecture from which we'll discuss the common architectural elements one by one.

Blueprints review

As mentioned before, the architectural details covered here are based on real solutions using open source technologies. The example scenario presented here is a generic, common blueprint uncovered while researching those solutions. My intent is to provide a blueprint that offers guidance rather than deep technical details.

How to Implement Kubernetes

To understand the current and future state of Kubernetes (K8s) in the enterprise, we gathered insights from IT executives at 22 companies. We asked, "What are the most important elements of implementing K8s for orchestrating containers?" Here’s what we learned:

Security

  • Four things: 1) security; 2) you don’t have to go “all-in” on K8s (don’t use it for databases); 3) CPU capacity planning (see the sketch after this list); 4) your K8s structure will mimic your team structure.
  • Networking, storage, security and monitoring, and management capabilities are all essential elements for implementing Kubernetes container orchestration. Businesses stand to realize tremendous benefits due to the fast pace at which both Kubernetes and container ecosystems are currently advancing. However, this pace also increases the challenge of keeping up with new advancements and functionality that are critical to success – especially in the area of security.
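
As a concrete starting point for the CPU capacity planning mentioned above, the sketch below sums CPU requests per namespace using the official Kubernetes Python client. It assumes the `kubernetes` package and a working kubeconfig, and the millicore parsing is deliberately simplified; treat it as an illustration, not a production audit tool.

```python
from kubernetes import client, config

def cpu_millicores(quantity: str) -> int:
    """Convert a CPU quantity such as '250m' or '2' to millicores (simplified)."""
    return int(quantity[:-1]) if quantity.endswith("m") else int(float(quantity) * 1000)

def requested_cpu_by_namespace():
    """Sum the CPU *requests* of every container, grouped by namespace."""
    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    core = client.CoreV1Api()
    totals = {}
    for pod in core.list_pod_for_all_namespaces().items:
        for container in pod.spec.containers:
            res = container.resources
            requests = res.requests if res and res.requests else {}
            if "cpu" in requests:
                ns = pod.metadata.namespace
                totals[ns] = totals.get(ns, 0) + cpu_millicores(requests["cpu"])
    return totals

if __name__ == "__main__":
    for namespace, millicores in sorted(requested_cpu_by_namespace().items()):
        print(f"{namespace}: {millicores}m requested")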

Planning

  • A lot of it comes down to planning. K8s brings a lot of power and complexity, so plan ahead to prevent surprises. Learn how to set up the right environments.
  • Future-proof your architecture and strategy for multi-cloud, platform-agnostic deployments, and whatever else comes your way. It’s easier to build decoupled, distributed applications with K8s that can run anywhere. Adopting different programming languages and frameworks also becomes easier, which lets you use the best tool for the job when building your applications. There are new challenges as you enable more and more API communication over the network across your applications: critical performance and security issues increase as you accelerate the flow of information across your services, as does the need to observe traffic and collect metrics at a much bigger scale in order to debug errors down the road (see the metrics sketch below). While Kubernetes lets you deploy workloads across multiple clouds and platforms, it’s vital to adopt platform-agnostic technology that can be deployed within your Kubernetes cluster instead of relying too much on cloud vendor solutions to secure and monitor traffic. Increasing team productivity and business value is very dependent on speed. Consolidating how you manage and operate services across every environment and every cloud makes it faster both to create new services and to consume those services from your client applications.
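
On the point above about collecting metrics with platform-agnostic tooling, here is a minimal sketch that reads a point-in-time CPU/memory snapshot for every pod through the metrics.k8s.io aggregated API using the official Python client. It assumes metrics-server (or another metrics.k8s.io provider) is installed in the cluster; it illustrates the idea rather than a full observability stack.

```python
from kubernetes import client, config

def pod_usage_snapshot():
    """Print current CPU/memory usage for every pod.

    Assumes the cluster runs metrics-server (or another metrics.k8s.io
    provider), which is what serves these aggregated metrics.
    """
    config.load_kube_config()
    custom = client.CustomObjectsApi()
    metrics = custom.list_cluster_custom_object(
        group="metrics.k8s.io", version="v1beta1", plural="pods"
    )
    for item in metrics["items"]:
        name = f'{item["metadata"]["namespace"]}/{item["metadata"]["name"]}'
        for container in item["containers"]:
            usage = container["usage"]
            print(f'{name} [{container["name"]}] cpu={usage["cpu"]} memory={usage["memory"]}')

if __name__ == "__main__":
    pod_usage_snapshot()
```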

Experience

  • K8s is still new even though it’s been around for five years. The lack of expertise and talent is the number one challenge. What you want from an enterprise standpoint is a standardized, shared platform to use on multiple clouds and on-prem. Containers are portable, and K8s has a standard open-source API, so you can build a shared platform that can run anywhere. The challenge is having the right people with the skills, and then day-two operations. Once in production, you have to deal with day-two operations like upgrades (a new version comes out every three months), keeping patched, backup, disaster recovery, and scale. K8s abstracts the infrastructure from the developers and takes a declarative approach: you just declare the end state you want, and K8s makes it happen (a minimal declarative sketch follows after this list). If something fails, it will automatically recreate it. The downside is that if something goes wrong with the system, you now have to search through multiple levels of abstraction to figure out where the problem is. To have a successful implementation, you need a team that knows what’s happening across the landscape. If it fails, you need to know the nitty-gritty details of all of the services that are running. Troubleshooting, debugging, upgrading the cluster, SLA management, and day-two operations are challenging today.
  • There’s a learning curve in learning the technology. Most application developers probably know it by now, but on the data engineering side it’s quite new. The first step is making it easy for developers to understand what the pieces are and how they should be used. Then come important aspects of data locality within the K8s cluster and making workloads stateful or stateless when needed; these are important concepts to explain to end users, along with how they fit with K8s.
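
To illustrate the declarative approach described above, here is a minimal sketch using the official Kubernetes Python client: you declare the end state (three replicas of a web pod) and the control plane converges on it. The names, labels, image, and namespace are illustrative assumptions.

```python
from kubernetes import client, config

def apply_web_deployment():
    """Declare the desired state (3 replicas of an nginx pod) and let K8s converge.

    Minimal sketch: the names, labels, image, and namespace are illustrative only.
    """
    config.load_kube_config()
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="web", labels={"app": "web"}),
        spec=client.V1DeploymentSpec(
            replicas=3,
            selector=client.V1LabelSelector(match_labels={"app": "web"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "web"}),
                spec=client.V1PodSpec(
                    containers=[client.V1Container(name="web", image="nginx:1.25")]
                ),
            ),
        ),
    )
    # We only state the end state; the controllers create, replace, or restart
    # pods as needed to keep three replicas running.
    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)

if __name__ == "__main__":
    apply_web_deployment()
```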

Data Locality

  • It depends on what they are trying to achieve with containers. A lot of customers want portability between on-prem and public cloud, or to deploy a scalable container platform. One of the key aspects is the differentiation between stateless and stateful applications. Think about how to claim and reclaim storage resources and how to deal with security, performance, reliability, and availability: all of the traditional data center operations topics. Containers support that through persistent volume claims and persistent volume storage. There is a shift in how developers need to think in order to take advantage of persistent storage as they move from stateless to stateful.
  • How you divide your application into smaller services is a critical decision to get right. But for K8s specifically, it’s really important to think about how you are going to handle state: whether it’s using StatefulSets, leveraging your provider’s block storage devices, or moving to a completely managed storage solution, implementing stateful services correctly the first time around is going to save you huge headaches (see the sketch below). The other important question is what your application looked like before you began implementing K8s. Did you already have hundreds of services? Then your biggest concern should be how to migrate those services in an incremental but seamless way. Just breaking the first few services off of your monolith? Making sure you have the infrastructure to support lots of services is going to be critical to a successful implementation.
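
As a small illustration of the stateful side discussed above, the sketch below creates a PersistentVolumeClaim that a pod or StatefulSet could mount, using the official Python client. The claim name, size, and reliance on a default StorageClass are assumptions for the example.

```python
from kubernetes import client, config

def request_volume(namespace: str = "default"):
    """Create a PersistentVolumeClaim a stateful pod (or StatefulSet) can mount.

    Minimal sketch: the claim name, size, and use of the cluster's default
    StorageClass are illustrative assumptions.
    """
    config.load_kube_config()
    claim = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="orders-data"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
        ),
    )
    client.CoreV1Api().create_namespaced_persistent_volume_claim(
        namespace=namespace, body=claim
    )

if __name__ == "__main__":
    request_volume()
```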

Other

  • 1) Labels are your friends: label everything. They are the roadmap for figuring out where things are going (a label-selector sketch follows after this list). 2) Keep in mind you don’t know where anything is. Build your environment to be specific to a purpose, not to a location. In K8s it’s not as small as possible, it’s as small as necessary. Don’t over-engineer your environment into a thousand tiny little things – deliver the information needed from each component.
  • We are seeing more people adopt K8s — different types of deployments, different flavors, different approaches to use. Some customers use a “build your own” approach. We are seeing people using on-prem vendors offering pre-packaged K8s distributions (e.g., Mesosphere, Docker, VMware). A lot is available on public cloud vendors. We see people adopting a consulting-based approach. The exact mix of what you pick depends on what kind of apps you are running on K8s and what kind of users you are servicing, and how advanced you are with your K8s deployments. We see a lot of reliance on cloud and on-prem (Red Hat and IBM are the most prominent). We recommend making sure you understand where you are in your journey, and who your users are to figure out the right mix. Make sure when deploying these technologies you start with people. People need to work well together when services are split between teams in terms of technology, culture, and people in engineering and ops.
  • Declarative APIs. The customer says, “Here’s what I want,” and knows it will happen. Applications will be better if they are stateless, able to get their state from somewhere else, like a database. Observability is a huge issue across a broad number of microservices.
  • The overall strategy of automating testing is critical. We see clients trying to find the right way to test, and there is a huge variety of techniques and approaches. What needs to be tested, how are you set up, what is your maturity, and what is the right level of automation? Test the right things in the right way: which tests can be run in parallel, how to deal with data management, how to leverage orchestration capability, and which devices you want to include. It depends on the maturity of the team and the software. Consider integrations: what else is your testing touching on, and what are the dependencies? Problems arise when environments cannot take the scale and you fall short of your expectations of what’s possible.
  • K8s alone won’t solve your problem. It’s not an enterprise-grade orchestration stack. You should have the same concerns for K8s as when you put software into production – security, monitoring and debugging. There are 500+ open source products for cloud-native networking. It's impossible to keep up with and maintain. K8s comes out with new releases all of the time. 
  • Think about configuration management, how to use managed services, resource management, credential management, and how to apply AI to K8s infrastructure.
  • We have a consulting package where we do a lot of training around developing and managing K8s clusters. Look for micro-improvements across the massive ecosystem of 500 different open source tools; each is a new area of discovery for people getting into cloud-native computing. We help customers consume open source, along with security updates, with little to no friction.
  • The most important element of implementing K8s to orchestrate containers is its ability to declaratively define application policies that are enforced at runtime to maintain the desired state (e.g., the number of application pods, their types, and their attributes) so that critical applications always remain available. Most recently, auto-scaling pods have also become a very important element to ensure predefined application SLAs are always met. The ease of deploying containers is an important element as well: companies require the ability to develop, test, and deploy container-based applications quickly and seamlessly using their CI/CD pipelines.
  • I think the main thing to keep in mind is how important core infrastructure is to being successful with Kubernetes. What I mean by this is that you need to have your ducks in a row with storage and networking especially.
  • 1) Have a plan first, driven by your goals for moving to K8s. Moving from monolithic apps to microservices running on Kubernetes has many benefits, but trying to solve every problem at the same time is a recipe for delayed migration and frustration. Know what you’re trying to achieve (or better yet, the sequence of goals you’re trying to achieve) and design a plan to accomplish those. The roadmap is key. Think about how you stage the adoption of K8s and the migration from monolith to microservice and how that will get rolled out across the organization. There’s a tremendous amount of new technology in the cloud-native ecosystem; fold that technology into the roadmap, too. Realize that the roadmap can and will change as you gain experience with each piece of that new technology stack. 2) Don’t forget that a new implementation doesn’t eliminate the need to address all the old requirements around Operations, Security, and Compliance. Factors to consider: What kind of app are you creating? Internal, or external? Will it have customer data? How often will it be updated? Questions to answer: Who has access, and how will you enforce that access? Kubernetes to the rescue: Kubernetes provides a revolutionary way of implementing custom guardrails so that you can prevent problems before they happen. Kubernetes lets you inject custom rules and regulations right into the API server (via Admission Control) that enforce an unprecedented level of control. And because Kubernetes provides a uniform way of representing resources that used to be contained in silos (e.g., compute, storage, network), you can impose cross-silo controls. 3) Take your policy out of PDFs and put it into the code. When your infrastructure is code, and your apps are code, so too should your policy be code. The business needs developers to push code rapidly — to improve the business’s software faster, ideally, than competitors — but the business also needs that software to follow the same age-old operations, security, and compliance rules and regulations. The only way to succeed at both is to automate the enforcement of those rules and regulations by pulling them out of PDFs and wikis and moving them into the software. That’s what policy-as-code is all about (a toy admission-webhook sketch follows after this list).
  • Ensure the application is built as a set of independent microservices that are loosely coupled to serve the business. This helps get the most out of Kubernetes. Ensure microservices have built-in resilience (to handle failures), observability (to monitor application), and administrative features (to allow for elastic scaling, data backup, access control, and security, etc.). Essentially, having the application architected the right way is critical to reaping the benefits of Kubernetes.
  • One of the most important elements is ensuring K8s remains simple enough for developers to use. Developers are growing more committed to Kubernetes: in 2016, just under half said they were committed to the technology, but by 2017, 77 percent said the same. Despite Kubernetes’ growing popularity, it is still often challenging for developers to manage manually. Our approach focuses on ensuring that clusters are configured for high availability, stability, and best practices. Kubernetes has many knobs that can be turned to limit resources, segregate components, and configure the way the system performs. It can be challenging to do this on your own, so we have worked hard to provide users with a platform that has best practices baked in from the start.
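
On the “label everything” advice above, here is a minimal sketch of why it pays off: once workloads carry consistent labels, you can query them by what they are rather than where they run. The label keys and values are illustrative assumptions.

```python
from kubernetes import client, config

def pods_for_service(label_selector: str = "app=web,tier=frontend"):
    """List pods by what they *are*, not where they run.

    Minimal sketch: the label keys and values are illustrative assumptions;
    the point is that consistent labels make every later query possible.
    """
    config.load_kube_config()
    core = client.CoreV1Api()
    pods = core.list_pod_for_all_namespaces(label_selector=label_selector)
    for pod in pods.items:
        print(f"{pod.metadata.namespace}/{pod.metadata.name} on node {pod.spec.node_name}")

if __name__ == "__main__":
    pods_for_service()
```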
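
And on the Admission Control and policy-as-code points above, here is a toy validating admission webhook that rejects any pod missing an "owner" label. It is only a sketch: the rule, the Flask server, and the label name are assumptions; a real deployment also needs TLS, a Service, and a ValidatingWebhookConfiguration; and many teams reach for OPA/Gatekeeper or Kyverno rather than hand-rolled webhooks.

```python
# A toy validating admission webhook: reject any pod that is missing an
# "owner" label. The required label, the Flask server, and the rule itself
# are illustrative assumptions; a real deployment also needs TLS, a Service,
# and a ValidatingWebhookConfiguration.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/validate", methods=["POST"])
def validate():
    review = request.get_json()          # AdmissionReview sent by the API server
    pod = review["request"]["object"]
    labels = pod.get("metadata", {}).get("labels") or {}

    response = {"uid": review["request"]["uid"], "allowed": "owner" in labels}
    if not response["allowed"]:
        response["status"] = {"message": "every pod must carry an 'owner' label"}

    return jsonify({
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": response,
    })

if __name__ == "__main__":
    # In a real cluster the API server must reach this endpoint over HTTPS.
    app.run(host="0.0.0.0", port=8443)
```

The point is simply that the rule lives in code and is enforced at the API server, not in a PDF or a wiki.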

Here’s who shared their insights:

6 Critical Steps to Prepare for Advanced Automation Design Rollouts

Line operators and other plant staff are the front line for your automation system. Preparing them for any changes to advanced automation design or rollouts is critical to project success. Like that time your computer upgraded Windows without warning, sudden changes to operating systems, HMIs, and controls are frustrating to plant personnel and bound to cause issues.

Set your next automation rollout up for success by thoroughly preparing your key personnel in the following critical ways.

When Meetings Attack! How To Reclaim Time for Deep Work

One of the best things I ever did for my productivity was to re-think the way I manage my daily schedule, especially when it comes to meetings. Prior to that, I never had any meaningful time for deep work that required intense thought.

Now, a few years on, my colleagues suspect I figured out a magic formula that adds extra hours to the day. In reality, it's more alchemy than sorcery, like turning lead (a calendar overrun with meetings) into gold (a schedule neatly organized into blocks).

Tips for Developing Successful IoT Applications

When it comes to application development and technology deployment, IoT projects stall almost 60 percent of the time, according to a recent Cisco survey. Since the potential of IoT technology is high, these projects can be a huge success if they are planned and deployed well.

Though IoT application development is growing at a rapid pace, designing and deploying IoT strategies is still more challenging than executing other software initiatives because it requires different business and operational units to work in harmony rather than handing overall control to IT. As a result, even the most successful organizations can find it overwhelming to create and execute a successful IoT strategy.