“Move Fast, With Safety”: CloudBees Connect 2022

Where is software delivery headed in 2022 and beyond, and what role will continuous delivery play? How have organizations like yours successfully transitioned to continuous delivery, and how can you take advantage of what they've already learned? Can feature flags help pave the way, and if so, how can you avoid the most common pitfalls as you scale?

Join us on Wednesday, February 9, 2022, for a half-day event to gain insight into the future of software development and delivery, both broadly and at your own organization. You'll have an opportunity to attend sessions and workshops to help you accelerate your DevOps maturity. You'll also be able to network with like-minded peers: those in similar positions, and those who have already walked your path.

How to Build the Process and Culture Behind Using Feature Flags at Scale

Feature flags are a great way to release features quickly with very low risk — they allow software teams to make changes without re-deploying code. They have the power to make an organization’s DevOps practices more efficient, enabling testing in production. They can help developers, operations, QA, product, customer success, sales, and marketing teams deliver higher-quality features, faster.
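To make “testing in production” concrete, here is a minimal Python sketch of gating an unreleased feature so that only internal testers see it while everyone else keeps the current behavior. The flag name, allow-list, and in-memory store are hypothetical stand-ins for whatever feature flag service a team actually uses.

```python
# Hypothetical sketch: gate an unreleased feature so it can be tested in
# production by internal users only. The flag name, allow-list, and in-memory
# store are stand-ins for a real feature flag service.

INTERNAL_TESTERS = {"dev@example.com", "qa@example.com"}

FLAGS = {
    "new-checkout-flow": {"enabled": True, "allowed_users": INTERNAL_TESTERS},
}

def is_enabled(flag_name: str, user_email: str) -> bool:
    """Return True if the flag is turned on for this user."""
    flag = FLAGS.get(flag_name)
    if not flag or not flag["enabled"]:
        return False
    # An empty allow-list means the flag is on for everyone.
    return not flag["allowed_users"] or user_email in flag["allowed_users"]

def checkout(user_email: str) -> str:
    if is_enabled("new-checkout-flow", user_email):
        return "new checkout flow"   # internal testers exercise this in production
    return "existing checkout flow"  # everyone else keeps the current behavior

print(checkout("dev@example.com"))    # new checkout flow
print(checkout("customer@shop.com"))  # existing checkout flow
```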

But like many powerful tools, feature flags need to be used with care. When an organization adopts feature flags, it needs to simultaneously adopt a set of best practices for using them effectively and safely. This article goes beyond a technical “how-to” guide for implementing feature flags, and into the realm of process and culture. Sure, you can start using them today as an individual or a small team, but to truly realize the benefits of feature flags, the entire organization needs to embrace them. Without the necessary process and cultural shifts, you can accumulate a large amount of technical debt very quickly. At best, you’ll end up with bloated code; at worst, that bloated code can lead to catastrophic events.

Is Progressive Delivery Just Continuous Delivery With Feature Flags?

If your organization has established an efficient CI/CD pipeline and you’ve made a successful transition to DevOps culture, you probably already understand the benefits of doing DevOps. Your teams share information and collaborate efficiently, and you’ve seen measurable increases in software delivery speed and quality. Aside from continuing to do what you’re doing, though, where do you go from here? How can your teams reach the next phase of DevOps maturity?

Once you’re comfortable with continuous integration and continuous delivery practices, the next step is to get really good at progressive delivery.

The First Stages of Feature Flag Adoption

For quite a while now, developers have used snippets of code to turn features on and off at runtime. These code snippets, more commonly known as feature flags, allow you to change features (or sub-sections) of your application without re-deploying. They help developers release code faster, with less risk.
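As a minimal sketch of that idea, assuming the flag values live in an external JSON file (in practice, a database or flag service): flipping a value in the file changes the application's behavior at runtime, with no re-deploy. The file name and flag name below are illustrative.

```python
import json
import os

# Minimal sketch of a runtime feature flag: the flag values live outside the
# application (a JSON file here; a database or flag service in practice), so
# changing a value changes behavior without re-deploying the code.
FLAG_FILE = os.environ.get("FLAG_FILE", "flags.json")  # hypothetical location

def flag_is_on(name: str, default: bool = False) -> bool:
    try:
        with open(FLAG_FILE) as f:
            return bool(json.load(f).get(name, default))
    except (OSError, ValueError):
        return default  # fail safe: fall back to the existing behavior

def render_homepage() -> str:
    if flag_is_on("dark-mode"):
        return "homepage (dark mode)"  # new feature, toggled on at runtime
    return "homepage (classic)"        # default when the flag is off or missing

print(render_homepage())
```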

Feature flags are more prevalent than you might believe. At most companies, though, adoption begins at the grassroots level. Developers—usually within a single team, and often just a few individuals—implement a handful of feature flags in an ad hoc way. In these early stages, feature flags aren’t usually being considered at a systemic level. They’re meant to solve specific pain points at specific points in time. There is no formal program in place, and there are no plans to manage flags across teams. 

Advanced Time Series

As we enter the era of workflow automation, machine learning, and artificial intelligence, collecting and monitoring time series data is becoming more and more essential. This Refcard reviews use cases and case studies across industries and walks you through the process of collecting time series metrics from a host.
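For a taste of what host-level collection looks like, here is a minimal, standard-library-only Python sketch that samples the 1-minute load average and appends timestamped values to a CSV; the metric choice, interval, and file name are illustrative, not drawn from the Refcard itself.

```python
import csv
import os
import time

# Minimal sketch of collecting a time series metric from a host: sample the
# 1-minute load average (Unix-only) every few seconds and append timestamped
# values to a CSV file. Interval, sample count, and file name are illustrative.
OUTPUT = "host_metrics.csv"
INTERVAL_SECONDS = 5
SAMPLES = 12  # roughly one minute of data at a 5-second interval

with open(OUTPUT, "a", newline="") as f:
    writer = csv.writer(f)
    for _ in range(SAMPLES):
        load_1m, _, _ = os.getloadavg()          # 1-, 5-, 15-minute load averages
        writer.writerow([time.time(), load_1m])  # one (timestamp, value) point
        f.flush()
        time.sleep(INTERVAL_SECONDS)
```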

Understanding Apache Spark Failures and Bottlenecks

When everything goes according to plan, it's easy to write and understand applications in Apache Spark. However, a well-tuned application might fail due to a data change or a data layout change, or an application that had been running well might start behaving badly due to resource starvation. It's important to understand underlying runtime components like disk usage, network usage, and contention, so that we can make an informed decision when things go wrong.
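As a loose illustration of that kind of diagnosis, the PySpark sketch below (paths and settings are made up) checks how many partitions a dataset has and how evenly rows are spread across them, which is where data layout changes and skew often show up first.

```python
from pyspark.sql import SparkSession

# Hypothetical diagnostic sketch: when a previously healthy Spark job slows down
# or fails, partitioning is a common place to look. Paths and settings are
# illustrative, not recommendations.
spark = (SparkSession.builder
         .appName("partition-diagnostics")
         .config("spark.sql.shuffle.partitions", "200")  # tune for your data volume
         .getOrCreate())

df = spark.read.parquet("/data/events")  # hypothetical dataset

# A data layout change (e.g., many tiny files) shows up as a surprising partition
# count; skew shows up as a few partitions holding most of the rows.
print("partitions:", df.rdd.getNumPartitions())

# glom() materializes each partition as a list, so use it on modestly sized data.
rows_per_partition = df.rdd.glom().map(len).collect()
print("min/max rows per partition:",
      min(rows_per_partition), max(rows_per_partition))

spark.stop()
```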

Edge Computing

Edge computing aims to solve some of the challenges of cloud computing, especially in situations where latency and bandwidth issues would otherwise put operations at risk. This Refcard provides an overview of the concept of edge computing, explores several use cases, and details how to develop an organization-wide strategy for adoption.

Getting Started With Istio

Learn the basics of Istio and explore the concept of a service mesh. This Refcard outlines how to install Istio, how to use intelligent routing, how to enable service-to-service security and access control, and much more.

Databases: Evolving Solutions and Toolsets

Software applications rely on databases to deliver data from an ever-increasing array of sources — securely, at scale, and in real time. From graph databases and specialized time-series databases to ensuring high performance and deploying across platforms, DZone's 2019 Guide to Databases dives into the technologies and best practices that help developers extract near-instantaneous insights from complex data.

Getting Started With Feature Flags

As a core component of continuous delivery, feature flagging empowers developers to release software faster, more reliably, and with more control. This Refcard provides an overview of the concept, ways to get started with feature flags, and how to manage features at scale.

API Integration Patterns

Whether you're working with on-premise, cloud, and/or third-party integrations, the questions remain the same: What is the client or user experience you need to offer? And how do you align your integration strategy with it? This Refcard explores fundamental patterns for authentication, polling, querying, and more, helping you assess your integration needs and approach the design, build, and maintenance of your API integrations in the most effective ways for your business case.
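As one example of the polling pattern mentioned above, here is a standard-library Python sketch that submits nothing fancier than repeated status checks with exponential backoff; the base URL, token, endpoint path, and response fields are hypothetical, and a real API may offer webhooks instead of polling.

```python
import json
import time
import urllib.request

# Hypothetical polling sketch: poll a status endpoint with exponential backoff
# until the job finishes. The base URL, token, endpoint paths, and response
# fields are made up and will differ for any real API.
API_BASE = "https://api.example.com"
TOKEN = "replace-with-a-real-token"

def get_json(path: str) -> dict:
    req = urllib.request.Request(
        API_BASE + path,
        headers={"Authorization": f"Bearer {TOKEN}"},  # bearer-token auth pattern
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

def poll_job(job_id: str, max_attempts: int = 10) -> dict:
    delay = 1.0
    for _ in range(max_attempts):
        status = get_json(f"/jobs/{job_id}")
        if status.get("state") in ("succeeded", "failed"):
            return status
        time.sleep(delay)
        delay = min(delay * 2, 30)  # back off, but never wait more than 30 seconds
    raise TimeoutError(f"job {job_id} did not finish after {max_attempts} polls")
```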

Kubernetes Monitoring Essentials

A centralized framework for monitoring your Kubernetes ecosystem offers valuable insights into how containerized workloads are running and can help you optimize them for better performance. However, as with any distributed system, monitoring Kubernetes is a complex undertaking. This Refcard first presents the primary benefits and challenges, then walks through the fundamentals of building a Kubernetes monitoring framework: how to capture monitoring data, how to leverage core Kubernetes components for monitoring, which key metrics to track, and which critical Kubernetes components and services you should be monitoring.
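As a rough sketch of capturing monitoring data programmatically, the Python snippet below uses the official Kubernetes client to read node CPU and memory usage via the metrics.k8s.io API; it assumes metrics-server is installed in the cluster and that you have kubeconfig access.

```python
from kubernetes import client, config

# Rough sketch of capturing basic usage metrics with the official Python client
# (pip install kubernetes). Assumes metrics-server is running in the cluster,
# since it serves the metrics.k8s.io API queried below.
config.load_kube_config()  # use config.load_incluster_config() inside a pod

metrics_api = client.CustomObjectsApi()
node_metrics = metrics_api.list_cluster_custom_object(
    group="metrics.k8s.io", version="v1beta1", plural="nodes"
)

for item in node_metrics.get("items", []):
    name = item["metadata"]["name"]
    usage = item["usage"]  # e.g. {"cpu": "250m", "memory": "1024Mi"}
    print(f"{name}: cpu={usage['cpu']} memory={usage['memory']}")
```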

Wolfram Engine Is Now Free for Developers

Wolfram Research announced today that it will make the Wolfram Engine available for free to anyone working on software development projects. You can download it for use in non-production environments here.

The Free Wolfram Engine for Developers will allow developers to implement the Wolfram Language in any standard software engineering stack. (The Wolfram Language, available in a sandbox here, is the multi-paradigm computational language behind Wolfram's best-known products, Mathematica and Wolfram Alpha.) The free engine also has full access to the Wolfram Knowledgebase and its curated, pre-trained neural networks, although you'll need to sign up for a free subscription to the Wolfram Cloud.
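For example, a local Python integration might look like the sketch below, which uses the wolframclient package to evaluate Wolfram Language expressions against a locally installed engine; treat the exact setup as an assumption and consult Wolfram's documentation for your stack.

```python
from wolframclient.evaluation import WolframLanguageSession
from wolframclient.language import wl, wlexpr

# Illustrative sketch of calling the Wolfram Engine from Python via the
# wolframclient package (pip install wolframclient). Assumes the free engine is
# installed locally and has been activated with a developer license.
with WolframLanguageSession() as session:
    # Evaluate a Wolfram Language expression written as a string...
    print(session.evaluate(wlexpr("Prime[1000]")))  # 7919, the 1000th prime
    # ...or build the expression programmatically.
    print(session.evaluate(wl.StringReverse("developer")))  # "repoleved"
```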

Visa’s New Developer Platform ‘Visa Next’ Offers Open APIs

On Monday, Visa launched a new platform for developers in the payments industry. Called Visa Next, it includes a set of open APIs in beta, along with a suite of tools and documentation. It's designed to allow third-party applications to create their own digital Visa cards and to build and manage new services for Visa cards.

A press release provided a list of actions that Visa Next APIs can perform. These actions range from making new digital card accounts on demand and tokenizing accounts for ecommerce and mobile wallets, to configuring rules around digital card use and card sharing.

Google Cloud Run: Serverless, Meet Containers


At Google Cloud Next in San Francisco today, Google announced the beta version of Cloud Run, a new product designed to blend serverless with containerized application development. Built on Knative, Google's open source Kubernetes-based serverless platform, Cloud Run can be used to fully manage containers, or they can be turned over to an existing Google Kubernetes Engine cluster via Cloud Run on GKE, also introduced today.

Google Ends AI Ethics Board

Google announced yesterday that its week-old AI ethics advisory board is no more. The board almost immediately attracted criticism, much of it stemming from Google employees, regarding one of its chosen members.

"It’s become clear that in the current environment, [the AI ethics board] can’t function as we wanted. So we’re ending the council and going back to the drawing board," SVP of global affairs Kent Walker wrote yesterday in an update to Google's original blog post about the board. "We’ll continue to be responsible in our work on the important issues that AI raises, and will find different ways of getting outside opinions on these topics."

HPE to Teach Girl Scouts About Cybersecurity

Hewlett Packard Enterprise announced yesterday that it will partner with a Girl Scouts organization to teach girls how to protect themselves against phishing, cyberbullying, and other dangers online. 

HPE worked with Girl Scouts Nation's Capital, an organization serving the greater Washington, D.C. area, to create a game and curriculum to teach girls cybersecurity literacy. Part of the goal is to encourage girls to cultivate an interest in cybersecurity and IT and eventually pursue careers in those fields.