Some Thoughts on Toggles

Kent Beck was asking on Twitter recently about “Feature Flags” and their life cycle. Former colleague Pete Hodgson linked back to an article he’d written for Martin Fowler’s site a couple of years ago, and added context.

I don’t think this article fits in that Tweet series, so it’s a standalone blog entry — much of which I’ve shared before.

OpenTracing in NodeJS, Go, Python: What, Why, How?

In previous blogs, we described how to optimize the deployment of applications and how to apply guardrails to them. These guardrails covered:

One additional guardrail in managing your application is to properly implement "Observability". As it turns out, observability is more important than ever because of the shift to microservices architectures and the increased deployment pace (hourly/weekly vs. quarterly/yearly). Services are dynamically updated and are usually containerized. Hence, the traditional approach of adding "monitoring" after an app is deployed cannot scale.
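As a rough idea of what instrumenting a service looks like, here is a minimal Python sketch using the OpenTracing API; the function and tag names are invented for illustration, and a real setup would register a concrete tracer (Jaeger, for example) instead of the default no-op tracer:

import opentracing

def fetch_user(user_id):
    # start_active_span makes this span the active one for the current context;
    # with the default no-op tracer the calls are harmless, while a real tracer
    # records timing, tags, and logs for each request.
    with opentracing.global_tracer().start_active_span("fetch_user") as scope:
        scope.span.set_tag("user.id", user_id)
        # ... call the database or a downstream service here ...
        scope.span.log_kv({"event": "user_fetched"})
        return {"id": user_id}

if __name__ == "__main__":
    fetch_user(42)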

Learn What Schematics Are and How To Use Them With Your React Apps

Developers love to create streamlined processes and programs that help them work more efficiently. I've done a lot of live demos, and, over the years, I’ve noticed that my demos involve a lot of instructions. Instead of writing everything down in scripts, I’ve started using automation tools powered by Schematics.

Schematics is a project that was released by the Angular team. In short, it provides an API that allows you to manipulate files and add new dependencies to any project that has a package.json file. It can also work in non-Angular projects.

The Benefits of Software Composition Analysis

Software composition analysis (SCA) allows organizations to identify the third-party and open-source components that have been integrated into their applications. For each of these components, it identifies:

  • Open security CVEs (if any)
  • Licenses
  • Out-of-date library versions and age

SCA easily answers the question, "Are any of my organization’s applications relying on a vulnerable library?" With a centralized application security platform and insightful executive-level dashboards that provide a holistic view of an organization’s application security posture, SCA also makes it possible to track remediation trends and improve your remediation rate and time-to-fix.
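Conceptually, the component-to-CVE lookup at the heart of SCA can be approximated in a few lines. The sketch below is only an illustration, not any particular SCA product; it asks the public OSV vulnerability database whether a specific PyPI package version has known advisories:

import json
import urllib.request

def known_vulnerabilities(name, version):
    # Query the public OSV database for advisories that affect this exact
    # package version (the PyPI ecosystem is assumed for the example).
    payload = json.dumps({
        "package": {"name": name, "ecosystem": "PyPI"},
        "version": version,
    }).encode()
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    return [vuln["id"] for vuln in result.get("vulns", [])]

print(known_vulnerabilities("urllib3", "1.24.1"))  # prints advisory ids, if any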

Package Signing in PIP

A few days ago, I made this DEV.to post about how Python's PIP lacks GPG package signing. Well, it turns out that I'm wrong! It does have a package signing process after all. Except it's one of the most manual, archaic, and cumbersome security practices I've seen to date.

I discovered this method when I landed on this blog post by a core Python developer yesterday. To test package signing in the way described, I created a test package called siterank, a small script that fetches the Alexa rankings of given websites.
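The workflow described there essentially amounts to detached GPG signatures: the author signs the distribution and publishes a .asc file next to it, and a cautious user downloads both and verifies them by hand. A minimal verification sketch, assuming the files are already downloaded (the file names are placeholders):

import subprocess

# Hypothetical file names; in practice you would first download the sdist
# and the matching .asc detached signature from PyPI.
sdist = "siterank-0.1.tar.gz"
signature = sdist + ".asc"

# gpg --verify exits non-zero when the detached signature does not match,
# so check=True turns a bad or missing signature into an exception.
subprocess.run(["gpg", "--verify", signature, sdist], check=True)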

Defending Against TB-level Traffic Attacks With Advanced Anti-DDoS Systems

A Brief History of DDoS Prevention

A Distributed Denial of Service (DDoS) attack uses a large number of valid requests to consume network resources and make services unresponsive and unavailable to legitimate users. DDoS attacks are currently among the most powerful cyber-attacks and the most difficult to defend against.

DDoS is an old attack method that has been around the cybersecurity world for a long time, and DDoS prevention has likewise gone through several distinct stages.

The Future of Security Part One

To understand the current and future state of the cybersecurity landscape, we spoke to, and received written responses from, 50 security professionals. We asked them, "What’s the future of cybersecurity from your perspective?" The most frequent responses focused on artificial intelligence, machine learning, and automation. You can read more about the future of security in Part Two.

AI, ML, and Automation

  • Some of the largest investments and resources for Enterprise security exist in the network and infrastructure. With the growth of public cloud, SaaS, and mobile, the shift in security will go toward identity, data, and applications. Looking further out into the future, security vendors have not yet reaped the benefits of machine learning and AI like other industries have. It will eventually happen in security but not in the next few years. 
  • We are going to see more agent-based security embedded within our workload and implemented with the applications in microservices. There will be a distributed security position. Automation will help to handle changes. It will be absolutely critical to have higher intelligence to infer what challenges will come in future environments. We will need to deploy these in a very specific manner to get helpful insights.
  • AI technologies hold a lot of promise for the future of cybersecurity in helping organizations become truly proactive in addressing advanced threats. While it’s nowhere near capable of replacing cybersecurity expertise at the moment, we’re making progress in terms of harnessing analytics driven by AI to process larger varieties, volumes, and velocities of data more efficiently in order to produce better insights for human operators.
  • Get smarter with information. AI will be used across the board. We’re early in the game. AI will provide better contextual analysis to make better-informed decisions. A more focused approach to make security more effective.
  • Increasing automation and AI is paramount for the future. The only way to combat highly automated cyber threats is to respond with intelligent software-based solutions; humans are too limited to deal with the complexities surrounding threat-detection.
  • If you look at the growth of the internet from the 1970s and overlay the growth of cyber-attacks (essentially since 2010), it’s a scary graph. The attacks are increasing in frequency, scale, and effectiveness, with successes going beyond data breaches into debilitating ransomware.

    Globally, we are increasing our reliance on the Internet of Things (IoT); nearly 26,000,000,000 devices will be connected to the internet by 2020. The cybersecurity industry is going to need to leverage AI, machine learning, and deep learning more than ever in order to automate and augment the cyber workforce. Growing skills gaps and limited talent pools (an estimated 3,500,000 unfilled positions by 2021) are stretching current cyber teams beyond their limits, leaving company frontlines more vulnerable to threats.

    We will see the industry looking to AI/ML/DL to alleviate these challenges, but we must remember that new tools alone will not strengthen a company’s cybersecurity posture. We need to place an equal focus on upskilling the individuals and teams operating these new technologies in order to effectively use them to our greatest benefit.

    The industry is already making strides in leveraging AI in cybersecurity products, many of which analyze user behavior and detect network anomalies. In the future, new products will leverage machine learning for log aggregation and enrichment, while the full scope of AI will provide intelligent advisors, feedback, and an AI adversary to practice against.  
  • Cybersecurity in the next few years will be both exciting and challenging at the same time, stemming from a few different areas:

    1. AI: The proliferation of technologies, such as Artificial Intelligence, will drive some of that. We expect challenges in 2019 to come in the form of bots implementing supervised learning techniques to better mimic human behavior in attacks, such as credential stuffing. Hackers aren’t the only group that will cause companies AI headaches; security vendors will increasingly be part of the problem. I predict there will be more false claims by security providers that their product uses AI, forcing organizations to be diligent in the procurement process to separate fact from fiction.

    2. IoT: The threat attack surface will continue to expand as the portals used to configure and control the plethora of connected devices are exposed. Hackers will increasingly be less interested in the device itself and more in what can be obtained and/or accomplished by infiltrating the control portal. One industry that showcases this vulnerability is the automotive sector: as more cities allow self-driving cars, I predict there will be a major accident as a result of a hacker taking over a vehicle’s controls.

    3. Cloud: As more companies adopt cloud-based apps, security approaches will need to evolve to keep pace, as companies can no longer rely on solutions built into the cloud environment. Flexibility is essential in this landscape, as many legacy solutions can’t provide visibility into hybrid environments. In addition to this need for adaptability, I predict the threat landscape will continue to struggle with DDoS attacks, which are expected to increase in both size and scope. That said, I do still believe passwords will remain the dominant threat vector in 2019, although by 2025 I anticipate that passwords will be rendered obsolete and replaced by a new security standard.

Artificial Intelligence to Transform Healthcare as We Know It

The 21st century has a new dictum: everything that can be automated will be automated. Artificial intelligence’s (AI) unstoppable power is reverberating across all industries. However, in healthcare, it can be truly life-changing. Technological experts promised that AI and machine learning would transform the healthcare industry with novel applications that could streamline workflows, reduce human error, improve drug discovery, and find new, effective drugs.

The concept of AI has been floating around since 1956, but it has made significant progress in the last decade. From drug development to clinical research and insurance, AI applications are disrupting the way the health sector works, improving patient outcomes and reducing patients’ bills. The total public and private investment in AI in the healthcare industry is absolutely stunning. According to Allied Market Research, the artificial intelligence in healthcare market is projected to garner $22.79 billion by 2023, growing at a CAGR of 48.7% from 2017 to 2023.

API Concerns

To gather insights on the current and future state of API management, we asked IT professionals from 18 companies to share their thoughts. We asked them, "Do you have any concerns regarding the current state of API management?" Here's what they told us:

Design-First

  • A lot of great tools have become available for building and deploying APIs. However, design-first has not become as widely adopted as one might hope. Many times, developers are asked to provide an API, and they do just that. The person making the request for an API may have a very narrow need, and the developer can easily throw something together to satisfy that need, but then later additional users access the API and it comes up short in some way. 
  • One issue that might arise is the slowdown of applications. In some cases, like creating an account, it's not unusual to have to call multiple APIs to notify them of the action. For instance, sending a request to the analytics API, another to the payment processor to add the new user, one to add this new user to the chat & support service, and finally, one to send the welcome email.

    If these requests are made synchronously while creating the account, the initial request can quickly take a few seconds to complete and might cause issues down the road (What if the first two requests succeed, but not the last one? How can we retry them? Which one should be prioritized?). That is why making calls to external APIs asynchronous whenever possible, with retry mechanisms and an alerting system in place, is a great way to mitigate these issues; a minimal sketch follows this list.
  • The primary concern is that the market is ripe for consolidation. Solutions that are only 5-10 years old already have a “legacy” architecture that can’t keep pace with new architectural patterns like microservices. Customers need to be careful they don’t build their API capabilities around a solution that will need to be ripped out and replaced in just a few years.
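To make the retry idea from the account-creation example concrete, here is a minimal Python sketch; the function names are placeholders, and in a real system these calls would typically run in a background worker (Celery, RQ, and the like) with failures feeding the alerting system:

import time

def call_with_retry(task, attempts=3, base_delay=1.0):
    # Retry a flaky external call with exponential backoff between attempts.
    for attempt in range(1, attempts + 1):
        try:
            return task()
        except Exception:
            if attempt == attempts:
                raise  # give up so the alerting system can notice the failure
            time.sleep(base_delay * 2 ** (attempt - 1))

def notify_analytics():
    # Placeholder for the real HTTP call to the analytics API.
    print("analytics notified")

call_with_retry(notify_analytics)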

Version Management

  • Dealing with API contracts and the OpenAPI specification that tells you what an API can do. Version management, or handling version changes to APIs, is still a challenge. The ways of communicating those changes grow exponentially, and there is no great way to handle them. Teams move so fast with features that consumers have to support multiple versions at once.

Lack of Best Practices

  • A lot of progress has been made in the last few years. The big cloud providers are doing more. There's still a big learning curve for new developers. There are a lot of APIs out there that have not used best practices. For the development of new systems, the integration of APIs is still a big concern. 
  • So much of API management is still externally focused when there’s tremendous value in internal use cases. Focus on building a program internally that focuses on your internal developers. Give internal developers the information they need to take advantage of the services and get out of the way. Help your developers get their job done. If you create that portal for your internal audience where everything is well-documented and you’re guiding them down the path to get stuff done, that has a long-term benefit for internal teams and creates an innovation strategy with less fear of failure.
  • Too much customer work is still required to put a good API product together, and vendors’ layouts can be complicated and unintuitive. Many terms and product categories are fairly new and non-standard. As publishers shift their focus from the need to publish to the developer experience of the consumer, things will improve.
  • The lack of well-documented standards, especially when it comes to API versioning.

Other

  • How do you strike the balance between flexibility and optimization? The public cloud is a good place to get started with small amounts of data, but with large amounts of data, it starts to get complicated. We’re working to solve the throughput model of current APIs, and work is still needed on that front. With a microservices architecture, it gets more complicated because one service depends on many others. Tools like Istio and Linkerd become more important for circuit breaking and timeouts; otherwise, it just becomes a complete mess.
  • API management is a somewhat mature area. The key functionality blocks are pretty clear (gateway, portal, analytics, monetization capability); the key value today is how effectively these pieces integrate so you can deploy and get value from the API infrastructure.
  • The domain is thriving: complementary tools, techniques, practices, and actors appear regularly, while existing ones are still being improved. The days when it was unclear whether selling an API could be a good idea are long gone.
  • I think that there is often too much of a focus on quickly "pushing" APIs out and getting them online, without a deliberate effort to create a solid API-first culture. Runtime considerations such as security are certainly important, but everything that occurs before an API is deployed and running in production is equally important and often overlooked. 
  • Current APIM tools are heavily focused on the provider, and the consumer is an afterthought. We want to empower API developers to know how many requests have been made by consumers in a given timeframe. We also want consumers to be able to handle situations such as an outage, API changes or retirement, etc.

    There usually is not a clear method for developers to communicate this information to API consumers in a timely manner, and the latter are oftentimes left in the dark. Exposing this information is becoming more vital in today’s API economy. In addition, the lack of testing of an API (contract, automated, etc.) is something that will also need to be addressed. Implementing proper testing reduces the likelihood that changes to the API will break integrations, and improves reliability and the ability to test at scale.
  • API management is not something new. The old-style gateway is still out there, and it’s hard to manage. The trend you see in the market is toward next-generation API gateways that are much lighter, run as a proxy at the endpoint, are open source so you can see what’s going on, and fit the microservices era, with Kubernetes as the default backing platform. People are misrepresenting what their platform will do, suggesting they can facilitate microservices while they’re still running legacy, monolithic applications.
  • We’re about a decade into the microservices era, but there’s still a paucity of tooling and practices to support companies who are delivering microservice APIs directly to customers.  At a standards level, the only option the OAuth2 spec provides is Scopes, and these are woefully inadequate for microservices APIs.  In anything beyond a trivial deployment, you need to reason about the resources a caller can access.

    The lack of standards around this makes authorization and access control a problem without a common solution, so everyone is left to roll their own or code to a vendor’s design. Given the asymmetry of changing API interfaces (and your access model is an absolutely critical part of your interface), we’ll be stuck with these proprietary implementations for years to come. 
  • Many API cost estimators are available, and few people understand the long-term costs they are imposing on themselves and their customers through their APIs. A conservative estimate is $25K to build an API, and $15K for your customer to integrate to it. Plus 50% annually to maintain it. This means an app with 100 APIs being consumed by 100 of your consumers is generating about $15,000,000 of cost mostly to your customers.

Here's who shared their insights:

How-To: The PGExercises PostgreSQL Tutorial Running on a Distributed SQL Database

PGExercises is a sample dataset used to power the PostgreSQL Exercises website. The site comprises over 80 exercises designed to be used as a companion to the official PostgreSQL documentation. The exercises on the PGExercises site range from simple SELECT statements and WHERE clauses, through JOINs and CASE statements, then on to aggregations, window functions, and recursive queries.

The dataset consists of three tables (members, bookings, and facilities) and the table relationships shown in the ER diagram below.
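As a quick illustration of the kind of exercise the site poses against these three tables, here is a sketch that runs one such JOIN from Python with psycopg2; the connection settings are placeholders, and the cd.* schema and column names are assumed from the standard PGExercises setup:

import psycopg2

# Placeholder connection settings; point them at whichever PostgreSQL-compatible
# cluster is hosting the exercises database.
conn = psycopg2.connect(host="localhost", port=5432, dbname="exercises", user="postgres")

with conn, conn.cursor() as cur:
    # A typical PGExercises-style question: which members have booked a tennis court?
    cur.execute("""
        SELECT DISTINCT m.firstname, m.surname
        FROM cd.members m
        JOIN cd.bookings b ON b.memid = m.memid
        JOIN cd.facilities f ON f.facid = b.facid
        WHERE f.name LIKE 'Tennis Court%'
        ORDER BY m.surname, m.firstname
    """)
    for firstname, surname in cur.fetchall():
        print(firstname, surname)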

Top 10 Python Libraries You Must Know in 2019

In this article, we will discuss some of the top libraries in Python that developers can use to parse, clean, and represent data and to implement machine learning in their existing applications.
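Whichever libraries make the list, a typical parse-and-clean step looks something like the following pandas sketch (pandas and the sample columns are assumed here purely for illustration):

import io
import pandas as pd

# Invented sample data standing in for a messy CSV export.
raw = io.StringIO("name,age,city\nAda,36,London\nGrace,,Arlington\nAda,36,London\n")

df = pd.read_csv(raw)                              # parse
df = df.drop_duplicates()                          # clean: drop the repeated row
df["age"] = df["age"].fillna(df["age"].median())   # clean: fill the missing age
print(df.groupby("city")["age"].mean())            # represent: a simple summary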

We will be considering the following 10 libraries:

Build Your Own Google Firebase + Heroku on Kubernetes

Remember Heroku? I bet you do! I wouldn’t be surprised if your entire app is being powered by Heroku right now!

Heroku took the world by storm with its Platform-as-a-Service. Though it was originally for Ruby, it has since expanded to cover most popular languages, and it has a ton of integrations and other managed service offerings.

AWS Resources That Should Be Backed Up

As many organizations have discovered first-hand, the consequences of data loss can be downright devastating, often resulting in prolonged downtime, significant damage to credibility, and major financial losses, both direct and indirect. While Amazon AWS has been heralded as a safer, more resilient alternative to on-premise computing, organizations must still think about how they can protect their AWS resources against loss by implementing a sound backup strategy.

Selecting AWS Resources for Backup

According to Amazon, AWS resources are all entities that an organization can work with, including EC2 instances, S3 buckets, and CloudFormation stacks. All AWS resources utilize a pay-as-you-go approach for pricing that’s similar to how utility companies charge for natural gas, water, and electricity.
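To make that concrete, a backup step for one common resource type, the EBS volumes behind an EC2 instance, can be scripted with boto3; the instance ID below is a placeholder, and real backup strategies usually rely on scheduled, policy-driven tooling rather than one-off scripts:

import boto3

ec2 = boto3.client("ec2")
INSTANCE_ID = "i-0123456789abcdef0"  # placeholder instance id

# Find the EBS volumes attached to the instance and snapshot each one.
volumes = ec2.describe_volumes(
    Filters=[{"Name": "attachment.instance-id", "Values": [INSTANCE_ID]}]
)["Volumes"]

for volume in volumes:
    snapshot = ec2.create_snapshot(
        VolumeId=volume["VolumeId"],
        Description="backup of " + INSTANCE_ID,
    )
    print("created snapshot", snapshot["SnapshotId"])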

Serverless Multi-Tier Architecture on AWS

Multi-Tier Architecture

Multi-tier architecture is also known as n-tier architecture. In such an architecture, an application is developed and distributed across more than one layer. The number of layers depends on business requirements, but a three-tier architecture is the preferred and most commonly used choice.

This three-tier architecture includes the Presentation tier, the Logic tier, and the Data tier.
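In a serverless take on that split, the presentation tier is typically a static front end served through API Gateway, the logic tier is a Lambda function, and the data tier is a managed store such as DynamoDB. Here is a minimal Python sketch of the logic tier; the table name and key are invented for illustration:

import json
import boto3

# Data tier: a DynamoDB table (the name and key schema here are invented).
table = boto3.resource("dynamodb").Table("orders")

def handler(event, context):
    # Logic tier: Lambda receives the API Gateway request, reads from the
    # data tier, and returns JSON for the presentation tier to render.
    order_id = event["pathParameters"]["id"]
    item = table.get_item(Key={"order_id": order_id}).get("Item")
    if item is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
    return {"statusCode": 200, "body": json.dumps(item, default=str)}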

Why Cloud-Native Applications are the Future

Everyone is talking about applications being built in a cloud-native landscape these days. What exactly is cloud-native, and why is it so important?

Before we dig deeper, let’s look at an interesting statement. According to IDC, by 2022, 90% of all new apps will feature microservices architectures that improve the ability to design, debug, update, and leverage third-party code; 35% of all production apps will be cloud-native.

Interesting behavior using ‘__base__’ as a __slots__ string

class foo(object):
    __slots__ = ['__base__']

What's expected is for foo.__base__ to be a member_descriptor object
but instead what you get is:

>>> foo.__base__
<class 'object'>
>>> foo.__base__ is object
True

I guess something within the magic of class construction makes use of __base__ and ends up exposing it as the base class object.
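If the usual descriptor rules are the culprit, peeking at the class dict should show that the slot is still created and is merely shadowed by the __base__ descriptor that type itself defines (metaclass data descriptors win on class attribute lookup):

>>> foo.__dict__['__base__']
<member '__base__' of 'foo' objects>
>>> type.__dict__['__base__']
<attribute '__base__' of 'type' objects>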

I stumbled across this because my particular class (represented here by foo) uses __base__ to hold objects that it makes special instances of when called.

Just something interesting to note here, and possibly some useful knowledge, so others know to stay away from '__base__' as a class attribute name.