Testing Serverless Functions

Serverless computing, or functions-as-a-service, has attracted a lot of attention and momentum thanks to its cost-effective pay-as-you-go pricing, multi-language/runtime support, and easy learning curve, with no need to provision the infrastructure layer. All the major cloud providers now have a serverless computing offering as part of their services portfolio: Amazon Web Services has Lambda, Microsoft Azure has Azure Functions, and Google Cloud has Cloud Functions. Furthermore, there are on-premises/on-Kubernetes options for running serverless functions on OpenWhisk or OpenFaaS. For the sake of consistency, I will refer to all of these services as serverless functions throughout the rest of this post.

In a microservices (or even nanoservices, as serverless functions are sometimes known) architecture, there are inherently many components, modules, and services that make up an application or platform. This can make testing a chore, and it is sometimes a neglected part of the SDLC for these platforms. This article will explore some options and techniques for testing these types of platforms to make that aspect of your projects easier. Testing should always be a first-class citizen, regardless of the infrastructure. Irrespective of the language, framework, or tools we use, testing is vital to ensure both sustained development velocity and the quality of our deliveries to production.
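To make this concrete, here is a minimal sketch of unit-testing a serverless handler like any ordinary function, written in Go against the aws-lambda-go event types; the handler logic and names are illustrative assumptions rather than anything prescribed by the article, and in a real project the handler and the test would live in separate files (main.go and main_test.go).

```go
package main

import (
	"context"
	"testing"

	"github.com/aws/aws-lambda-go/events"
)

// handler is a hypothetical Lambda-style function: a pure input/output
// function, which is what makes it straightforward to test in isolation.
func handler(ctx context.Context, req events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
	name := req.QueryStringParameters["name"]
	if name == "" {
		name = "world"
	}
	return events.APIGatewayProxyResponse{StatusCode: 200, Body: "hello " + name}, nil
}

// TestHandler exercises the function locally, with no cloud resources involved.
func TestHandler(t *testing.T) {
	resp, err := handler(context.Background(), events.APIGatewayProxyRequest{
		QueryStringParameters: map[string]string{"name": "serverless"},
	})
	if err != nil {
		t.Fatalf("handler returned an error: %v", err)
	}
	if resp.StatusCode != 200 || resp.Body != "hello serverless" {
		t.Errorf("unexpected response: %+v", resp)
	}
}
```

Because the handler takes plain values in and returns plain values out, this level of testing needs nothing more than `go test`; integration with real cloud events can then be layered on top.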

Serverless Architecture with AWS Cloud Development Kit (CDK)

The IT world revolves around servers – we set them up, manage and scale them, communicate with them, deploy software onto them, and restrict access to them. It is difficult to imagine our lives without them. And yet, in this “serverfull” world, the idea of serverless architecture arose: a relatively new approach to building applications without direct access to the servers required to run them. Does this mean that servers are obsolete and that we should no longer use them? In this article, we will explore what it means to build a serverless application, how it compares to the well-known microservice design, what the pros and cons of this new approach are, and how to use the AWS Cloud Development Kit framework to achieve it.
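As a small preview of what building with the CDK looks like, here is a hedged sketch, assuming the AWS CDK v2 Go bindings (aws-cdk-go), that defines a single Lambda function; the stack name, asset path, and runtime choice are illustrative assumptions, not the article's exact code.

```go
package main

import (
	"github.com/aws/aws-cdk-go/awscdk/v2"
	"github.com/aws/aws-cdk-go/awscdk/v2/awslambda"
	"github.com/aws/jsii-runtime-go"
)

func main() {
	app := awscdk.NewApp(nil)
	stack := awscdk.NewStack(app, jsii.String("ServerlessDemoStack"), nil)

	// A single function deployed from a local directory containing a
	// compiled "bootstrap" binary (Lambda's custom-runtime convention).
	awslambda.NewFunction(stack, jsii.String("HelloFunction"), &awslambda.FunctionProps{
		Runtime: awslambda.Runtime_PROVIDED_AL2(),
		Handler: jsii.String("bootstrap"),
		Code:    awslambda.Code_FromAsset(jsii.String("./handler"), nil),
	})

	app.Synth(nil)
}
```

Running `cdk deploy` against an app like this provisions the function without any server for us to manage, which is exactly the trade-off the rest of this article examines.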

Background

There was a time when the world was inhabited by creatures known as “monolithic applications”. Those beings were enormous, tightly coupled, difficult to manage, and highly resource-consuming, which made the lives of tech people a nightmare.

Why a Serverless Data API Might be Your Next Database

App development stacks have been improving so rapidly and effectively that today there are a number of easy, straightforward paths to push code to production, on the cloud platform of your choice. But what use are applications without the data that users interact with? Persistent data is such an indispensable piece of the IT puzzle that it’s perhaps the reason the other pieces even exist. 

Enter cloud and internet-scale requirements, which essentially mandate that back-end services be independently scalable, modular subsystems in order to succeed. Traditionally, this requirement has been extremely difficult to meet for stateful systems. No doubt, database-as-a-service (DBaaS) has made provisioning, operations, and security easier. But as anyone who has tried to run databases on Kubernetes will tell you, auto-scaling databases, especially ones that are easy for developers to use, remain out of reach for mere mortals.

The State of Serverless Computing 2021

Serverless computing is redefining the way organizations develop, deploy, and integrate cloud-native applications. According to an industry report, the serverless computing market is expected to reach $7.72 billion by 2021. A new and compelling paradigm for the deployment of cloud applications, serverless computing is at the forefront of the enterprise shift toward containers and microservices.

In 2021, the serverless paradigm shift presents exciting opportunities to organizations: a simplified programming model for creating cloud applications that abstracts away most operational concerns. The major cloud vendors Microsoft, Google, and Amazon are already in the game with their respective offerings, and there is no reason you shouldn't board the train.

Build a Serverless App Using Go and Azure Functions

A webhook backend is a popular use case for FaaS (Functions-as-a-Service) platforms. Webhooks serve many purposes, from sending customer notifications to responding with funny GIFs! A serverless function is a convenient way to encapsulate the webhook functionality and expose it as an HTTP endpoint. In this tutorial, you will learn how to implement a Slack app as a serverless backend using Azure Functions and Go. You can extend the Slack platform and integrate services by implementing custom apps or workflows that have access to the full scope of the platform, allowing you to build powerful experiences in Slack.

This is a simpler version of the Giphy for Slack app. The original Giphy Slack app responds to a search request with multiple GIFs. For the sake of simplicity, the function app demonstrated in this post returns just a single (random) image for a search keyword, using the Giphy Random API. The post provides a step-by-step guide to deploying the application to Azure Functions and integrating it with your Slack workspace.
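To give a flavor of the approach before the full walkthrough, here is a condensed, hedged sketch of a Go custom handler for Azure Functions that answers a Slack slash command with a random GIF. The route, the Giphy response fields, and the GIPHY_API_KEY variable are assumptions for illustration; only FUNCTIONS_CUSTOMHANDLER_PORT is the standard custom-handler convention.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
	"os"
)

// giphyRandom mirrors just the piece of the Giphy Random API response we need
// (field names assumed from Giphy's public API documentation).
type giphyRandom struct {
	Data struct {
		URL string `json:"url"`
	} `json:"data"`
}

func slackHandler(w http.ResponseWriter, r *http.Request) {
	// Slack slash commands arrive as form-encoded POSTs; "text" carries the keyword.
	keyword := r.FormValue("text")

	apiURL := fmt.Sprintf("https://api.giphy.com/v1/gifs/random?api_key=%s&tag=%s",
		os.Getenv("GIPHY_API_KEY"), url.QueryEscape(keyword))

	resp, err := http.Get(apiURL)
	if err != nil {
		http.Error(w, "giphy call failed", http.StatusInternalServerError)
		return
	}
	defer resp.Body.Close()

	var g giphyRandom
	if err := json.NewDecoder(resp.Body).Decode(&g); err != nil {
		http.Error(w, "bad giphy response", http.StatusInternalServerError)
		return
	}

	// Respond in the JSON shape Slack expects for a slash-command reply.
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(map[string]string{
		"response_type": "in_channel",
		"text":          g.Data.URL,
	})
}

func main() {
	// Azure Functions custom handlers tell the worker which port to listen on.
	port := os.Getenv("FUNCTIONS_CUSTOMHANDLER_PORT")
	if port == "" {
		port = "8080" // sensible default for local runs
	}
	http.HandleFunc("/api/slack", slackHandler)
	http.ListenAndServe(":"+port, nil)
}
```

With HTTP request forwarding enabled in host.json, the Functions host hands the raw Slack request to this little web server, so the function reads like any ordinary Go HTTP handler.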

FaaS: Security Considerations to Know Before Going Serverless

Serverless architecture is becoming a compelling choice for developers and companies hosting their applications. It is easy to see why: it scales dynamically to meet load requirements and removes much of the complexity of deploying and maintaining applications, sometimes even eliminating the need for an Ops team. But what security considerations should we weigh before choosing to go serverless?

What is Serverless Architecture?

Serverless architecture (also known as serverless computing or function-as-a-service, FaaS) is a software architecture in which applications are hosted by a third-party service. This essentially means that your application is broken into individual services, which eliminates the need for developers to manage server software and hardware.

Automating IT Operations With Oracle Functions

Oracle Functions is a fully managed, multi-tenant, highly scalable, functions-as-a-service platform. It's built on enterprise-grade Oracle Cloud Infrastructure components and powered by the open source Fn Project serverless platform. Along with Oracle Events, Oracle Functions can deliver powerful capabilities for infrastructure and application automation. Together, they enable services to act automatically based on state changes in infrastructure resources, a common use case for enterprise IT environments.

This post walks through an example of a function that verifies whether a compute instance is tagged correctly when it's provisioned. If the instance isn't tagged properly, the function stops it. This practice is common in infrastructure automation; it allows resources to be audited for compliance with internal governance policies as they are created, rather than after the fact.
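As a rough illustration of the shape such a function takes (not the post's exact code), here is a hedged sketch using the Fn Project's Go FDK. The event payload fields, the required "cost-center" tag, and the stubbed-out stop call are assumptions; a real implementation would use the OCI Compute SDK to stop the instance.

```go
package main

import (
	"context"
	"encoding/json"
	"io"
	"log"

	fdk "github.com/fnproject/fdk-go"
)

// instanceEvent models only the pieces of the Oracle Events payload this
// sketch needs; real field names may differ in your event schema.
type instanceEvent struct {
	Data struct {
		ResourceID   string            `json:"resourceId"`
		FreeformTags map[string]string `json:"freeformTags"`
	} `json:"data"`
}

func handler(ctx context.Context, in io.Reader, out io.Writer) {
	var evt instanceEvent
	if err := json.NewDecoder(in).Decode(&evt); err != nil {
		log.Printf("could not decode event: %v", err)
		return
	}

	// Governance rule for this example: every instance must carry a
	// "cost-center" tag. Untagged instances get stopped.
	if _, ok := evt.Data.FreeformTags["cost-center"]; !ok {
		log.Printf("instance %s is missing the cost-center tag; stopping it", evt.Data.ResourceID)
		// stopInstance(ctx, evt.Data.ResourceID) // would call the OCI Compute API here
	}

	json.NewEncoder(out).Encode(map[string]string{"status": "checked"})
}

func main() {
	fdk.Handle(fdk.HandlerFunc(handler))
}
```

Wired to an Oracle Events rule that fires on instance creation, a function along these lines enforces the tagging policy at provisioning time rather than in a later audit sweep.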

Stateful Functions: An Open Source Framework for Lightweight, Stateful Applications at Scale

At Flink Forward Europe 2019, Stephan Ewen from Ververica announced the release of Stateful Functions, an open-source framework that reduces the complexity of building and orchestrating distributed stateful applications at scale. Stateful Functions brings together the benefits of stream processing with Apache Flink and Function-as-a-Service (FaaS) to provide a powerful abstraction for the next generation of event-driven architectures.

In this article, we will explain the motivation behind building Stateful Functions, and why we proposed the project to the Apache Flink community as an open-source contribution.

Serverless on GCP: A Comprehensive Guide

Like many other marketing buzzwords, the concept of "serverless" has taken on a life of its own, which can make it difficult to understand what serverless actually means. What it really means is that the cloud provider fully manages server infrastructure all the way up to the application layer. For example, Google Compute Engine (GCE) isn't serverless because, while Google manages the physical server infrastructure, we still have to deal with patching operating systems, managing load balancers, configuring firewall rules, and so on. Serverless means we worry only about our application code and business logic and nothing else. The concept extends beyond pure compute, too, to things like databases, message queues, stream processing, machine learning, and other types of systems.
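To make "application code and nothing else" concrete, this is roughly all the code an HTTP function on Cloud Functions needs; a minimal sketch in Go using the Functions Framework for Go, with an arbitrary function name.

```go
package hello

import (
	"fmt"
	"net/http"

	"github.com/GoogleCloudPlatform/functions-framework-go/functions"
)

func init() {
	// Register the handler; everything below the application layer
	// (servers, OS patching, scaling, load balancing) is the provider's problem.
	functions.HTTP("HelloServerless", helloServerless)
}

func helloServerless(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintln(w, "Hello from a serverless function on GCP!")
}
```

There is no server to configure or patch here: we deploy the function and the platform handles the rest.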

There are several benefits to the serverless model. First, it allows us to focus on building products, not managing infrastructure. These operations-related tasks, while important, generally do not differentiate a business; they are simply work that has to be done to support the rest of it. With the cloud, and serverless in particular, many of these tasks are becoming commoditized, freeing us up to focus on the things that matter to the business.

Demystifying Lambda in VPC and Its Confusing Error

Suddenly, my AWS Lambda function stopped working. Upon invocation, not a single line of code executed; all I got was this error from the console:

"Calling the invoke API action failed with this message: Lambda was not able to access EC2's API using the Lambda Execution Role to set up the Lambda function."

Deconstructing Serverless Computing Part 3: Ninety-Nine Platforms, But How Do You Choose One?

“All problems in computer science can be solved by another level of indirection.” — David Wheeler

The first parts of this series gave you a taste of what serverless computing is and the use cases it enables, along with its benefits and drawbacks. Armed with this knowledge, you might be planning to give serverless a try before attempting to migrate all your applications to this new architecture. As you start Googling around, you quickly realize there are tens of FaaS platforms, each promising unique capabilities. As confusion sets in and you grow tired of the boastful claims made by FaaS providers, one question makes its way to the forefront of your mind: which one should you choose?

In this post, we will navigate the tangled ecosystem of FaaS offerings, both managed and open-source, and determine their main capabilities and key differentiating features. The goal is to be able to make an informed choice depending on your requirements and use cases.

Comparing Serverless Architecture Providers: AWS, Azure, Google, IBM, and Other FaaS Vendors

According to the RightScale 2018 State of the Cloud report, serverless architecture adoption grew by 75 percent. If you are aware of what serverless means, you probably know that the market of serverless architecture providers is no longer limited to major offerings such as AWS Lambda or Azure Functions. Now we have a range of cloud providers to choose from. But why would anybody switch to serverless architecture? And what is the difference between all those providers and the services they offer?

Where Does Serverless Come From?

To answer that question, let’s roll back a bit. Fourteen years ago, cloud technologies began to be adopted in IT. The market had to change rapidly, as every year brought new approaches to app development. At first, businesses mostly used the IaaS (Infrastructure-as-a-Service) approach: renting servers and moving infrastructure to the cloud, while teams still had to handle server setup. Then came a gradual move away from manual server operation, and PaaS (Platform-as-a-Service) appeared. PaaS providers offered a more complete application stack, such as operating systems and databases that run in the cloud and are managed by the vendor. But that wasn’t enough.