Leveraging FastAPI for Building Secure and High-Performance Banking APIs

In today's fast-paced digital world, the banking industry relies heavily on robust and secure APIs to deliver seamless services to customers. FastAPI, a modern web framework for building APIs with Python, has gained significant popularity due to its exceptional performance, scalability, and ease of development. In this blog post, we will explore why FastAPI is well suited to building banking APIs, how it can empower financial institutions to deliver efficient and secure services to their customers, and how to implement automated test cases using a BDD framework.

  1. Unmatched Performance: FastAPI is built on top of Starlette, a high-performance asynchronous framework. It leverages Python's asynchronous capabilities to handle multiple requests concurrently, resulting in blazing-fast response times. For banking APIs that require quick response times, FastAPI ensures that transactions, queries, and account information can be retrieved swiftly, providing customers with an excellent user experience.
  2. Type Safety and Documentation: FastAPI's strong typing system, powered by Pydantic, allows developers to define clear data models and request/response schemas. This type safety ensures that the data passed to and from the API is accurate and consistent. Additionally, FastAPI automatically generates interactive API documentation from the defined models, making it easier for developers and other stakeholders to understand and consume the API.
  3. Security and Authentication: Banking APIs handle sensitive customer data, and security is of utmost importance. FastAPI provides built-in security features such as OAuth2 authentication, token validation, and request validation, enabling developers to implement robust security measures to protect customer information. Furthermore, FastAPI seamlessly integrates with other security frameworks and tools, allowing the implementation of various authentication and authorization mechanisms, including two-factor authentication and encryption, to meet the stringent security requirements of the banking industry.
  4. Scalability and Extensibility: FastAPI's asynchronous architecture enables horizontal scaling, allowing banking APIs to handle a large volume of concurrent requests. Financial institutions can easily scale their API infrastructure based on user demand without sacrificing performance. Additionally, FastAPI's modular design and compatibility with other Python libraries provide developers with the flexibility to extend functionality by integrating with existing banking systems, databases, or third-party services.
  5. Automated Testing and Debugging: FastAPI encourages and facilitates automated testing with tools like pytest and pytest-bdd. These testing frameworks enable developers to write comprehensive tests, ensuring the correctness and stability of the API. FastAPI's integration with the Swagger UI and ReDoc documentation tools further simplifies testing and debugging by providing an interactive interface to explore and validate API endpoints.

Here's an example of parameterized FastAPI code that creates a banking REST API, connects to a SQL Server database, extracts the account summary and user details, and returns a JSON response. The parameter values are passed in via a separate configuration file. Let's go step by step.

MVP Product Costing $100,000+ Without QA Testing. Is It Possible?

Recently, we released the beta version of our platform for crypto traders, which allows real-time analysis of the growth and decline charts of most cryptocurrencies as well as historical data. Currently, the product is undergoing closed testing with a group of investors. We are receiving feedback on the quality of the platform, fixing any bugs that arise, and preparing for the next stage of developing a full-fledged version of the product.

During the initial discussions with the client, we considered the prospect of launching the project within 3-3.5 months, given the scope of work, and assessed it as unlikely (a 50% probability of launching a "working" product) and high-risk (a 90% probability of failure). If you're interested, I can cover the risk-assessment method in future articles. Here's why we came to this conclusion:

Writing an Interpreter: Implementation

Part 1 can be found here.

Lexer

The lexer is the most basic element. Its primary job is to iterate through the characters of the source code. It may combine certain characters into a single token, then generate a token object with its associated type and add it to the resulting list.

Understanding Dependencies…Visually!

Show of hands: how many of us truly understand how our build automation tool builds its dependency tree? Now, lower your hand if you only understand it because you work on build automation tools. Thought so!

One frustrating responsibility of software engineers is understanding a project's dependencies: which transitive dependencies were brought in, and by whom; why v1.3.1 is used when v1.2.10 was declared; what changed when the transitive dependencies changed; and how multiple versions of the same artifact ended up in the build.

Networking and Community Building Opportunities at WordCamp Europe 2023

WordCamp Europe 2023 is so close, we can practically taste the moussaka! Yes, this year’s WCEU is in Athens, Greece on the 8–10th of June. The whole event is gearing up to be one of the best ever, with a high level of attendance post-COVID-19. As such, you’ll find a lot of ways to mingle, connect, and network with every other WordPress attendee.

Integrate Cucumber in Playwright With Java

Cucumber is a popular behavior-driven development (BDD) framework that allows you to write executable specifications in a natural language format. It promotes collaboration between stakeholders, developers, and testers by providing a common language that everyone can understand. Cucumber supports various programming languages, including Java, and provides a rich set of features for defining and executing test scenarios.

Playwright, on the other hand, is a powerful open-source automation framework for web browsers that supports multiple programming languages, including Java. It provides a high-level API for automating web browsers such as Chrome, Firefox, and Safari. Playwright offers robust browser automation capabilities, including interactions with web pages, taking screenshots, handling cookies, and much more.

Strategies for Reducing Total Cost of Ownership (TCO) For Integration Solutions


Integration solutions play a vital role in connecting systems, applications, and data across an organization. While implementing these solutions is essential, it's equally important to minimize the Total Cost of Ownership (TCO) associated with their development, operation, and maintenance. By adopting cost-effective strategies, organizations can optimize their investments and achieve greater value from their integration initiatives. In this article, we explore effective approaches to reduce TCO for integration solutions.

1. Define Clear Objectives and Requirements

Before embarking on an integration project, it is crucial to define clear objectives and requirements. This step ensures that the solution aligns with the strategic goals of the organization and helps avoid unnecessary development and maintenance costs caused by scope creep. By establishing well-defined objectives and requirements from the outset, you can maintain focus throughout the project, optimize resource allocation, and minimize the risk of unexpected expenses and delays.

User Safety and Privacy Protection in the Age of AI Chatbots in Healthcare

The use of AI chatbots in healthcare demands a comprehensive approach to several vital considerations. From training data to security measures and ethical practices, a wide range of precautions must be implemented. Human monitoring, user education, and mitigating the risks of anthropomorphism are crucial areas of focus.

Find out how continuous monitoring and feedback promote transparency, user safety, privacy protection, and the provision of reliable information.

AI and Cybersecurity: Protecting Against Emerging Threats

Threats are growing just as exponentially as the technology they target. Cybercrime is big business; hackers are breaking into systems and stealing data using ever-more-advanced methods. Artificial Intelligence may hold the answer to defeating these nefarious forces. By employing machine learning algorithms and predictive analytics, AI can help identify new threats as they emerge in real time and even foresee future attacks before they happen.

Cybersecurity should be a top priority for organizations to safeguard digital assets and consumer data. For security teams, AI can be a potent tool for network visibility, anomaly detection, and threat automation.

Cucumber Selenium Tutorial: A Comprehensive Guide With Examples and Best Practices

Cucumber is a well-known Behavior-Driven Development (BDD) framework that allows developers to implement end-to-end testing. Combining Selenium with Cucumber provides a powerful framework that lets you create functional tests with ease.

It allows you to express acceptance criteria in language that business people can read and understand, along with the steps to take to verify that they are met. The Cucumber tests are then run through a browser-like interface that allows you to see what's happening in your test at each step.

Primitive Objects In JavaScript: When To Use Them (Part 2)

Writing programs in JavaScript is approachable at the beginning. The language is forgiving, and you get accustomed to its affordances. With time and experience working on complex projects, you start to appreciate things like control and precision in the development flow.

Another thing you might start to appreciate is predictability, but that’s far less of a guarantee in JavaScript. While primitive values are predictable enough, objects aren’t. When you get an object as an input, you need to check for everything:

  • Is it an object?
  • Does it have that property you’re looking for?
  • When a property holds undefined, is that its value, or is the property itself missing?

It’s understandable if this level of uncertainty leaves you slightly paranoid, in the sense that you start to question all of your choices. As a result, your code becomes defensive. You think more about whether you’ve handled all the faulty cases (chances are you have not). And in the end, your program is mostly a collection of checks rather than something that brings real value to the project.
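To make that concrete, here is a small sketch (with hypothetical names) of the kind of defensive accessor that tends to accumulate when incoming objects carry no guarantees. It mirrors the three checks listed above:

function get_label(tab) {
    // Is it an object at all?
    if (typeof tab !== "object" || tab === null) {
        return "";
    }
    // Does it have the property we’re looking for?
    if (!("label" in tab)) {
        return "";
    }
    // The property exists, but undefined might still be the value it holds.
    if (tab.label === undefined) {
        return "";
    }
    return tab.label;
}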

By making objects primitive, many of the potential failure points are moved to a single place — the one where objects are initialized. If you can make sure that your objects are initialized with a certain set of properties and those properties hold certain values, you don’t have to check for things like the existence of properties anywhere else in your program. You could guarantee that undefined is a value if you need to.

Let’s look at one of the ways we can make primitive objects. It’s not the only way or even the most interesting one. Rather, its purpose is to demonstrate that working with read-only objects doesn’t have to be cumbersome or difficult.

Note: I also recommend you check out the first part of the series, where I covered some aspects of JavaScript that help bring objects closer to primitive values, which in turn allows us to benefit from common language features that aren’t usually associated with objects, like comparisons and arithmetic operators.

Making Primitive Objects In Bulk

The simplest, most primitive (pun intended) way to create a primitive object is the following:

const my_object = Object.freeze({});

This single line results in an object that can represent anything. For instance, you could implement a tabbed interface using an empty object for each tab.

import React, { useState } from "react";

const summary_tab = Object.freeze({});
const details_tab = Object.freeze({});

function TabbedContainer({ summary_children, details_children }) {
    const [ active, setActive ] = useState(summary_tab);

    return (
        <div className="tabbed-container">
            <div className="tabs">
                <label
                    className={active === summary_tab ? "active" : ""}
                    onClick={() => {
                        setActive(summary_tab);
                    }}
                >
                    Summary
                </label>
                <label
                    className={active === details_tab ? "active": ""}
                    onClick={() => {
                        setActive(details_tab);
                    }}
                >
                    Details
                </label>
            </div>
            <div className="tabbed-content">
                {active === summary_tab && summary_children}
                {active === details_tab && details_children}
            </div>
        </div>
    );
}

export default TabbedContainer;

If you’re like me, that tabs element just screams to be reworked. Looking closely, you’ll notice that the tab elements are similar and need just two things: an object reference and a label string. Let’s include the label property in the tab objects and move the objects themselves into an array. And since we’re not planning to change the tabs in any way, let’s also make that array read-only while we’re at it.

const tab_kinds = Object.freeze([
    Object.freeze({ label: "Summary" }),
    Object.freeze({ label: "Details" })
]);

That does what we need, but it is verbose. The approach we’ll look at now is often used to hide repetitive operations and reduce the code to just the data. That way, it is more apparent when the data is incorrect. We also want objects (including the array) to be frozen by default rather than that being something we have to remember to type out. Similarly, having to specify a property name every time leaves room for errors, like typos.

To easily and consistently initialize arrays of primitive objects, I use a populate function. I don’t actually have a single function that does the job. I usually create one every time based on what I need at the moment. In the particular case of this article, this is one of the simpler ones. Here’s how we’ll do it:

function populate(...names) {
    return function(...elements) {
        return Object.freeze(
            elements.map(function (values) {
                return Object.freeze(names.reduce(
                    function (result, name, index) {
                        result[name] = values[index];
                        return result;
                    },
                    Object.create(null)
                ));
            })
        );
    };
}

If that one feels dense, here’s one that’s more readable:

function populate(...names) {
    return function(...elements) {
        const objects = [];

        elements.forEach(function (values) {
            const object = Object.create(null);

            names.forEach(function (name, index) {
                object[name] = values[index];
            });

            objects.push(Object.freeze(object));
        });

        return Object.freeze(objects);
    };
}

With that kind of function at hand, we can create the same array of tab objects like so:

const tab_kinds = populate(
    "label"
)(
    [ "Summary" ],
    [ "Details" ]
);

Each array in the second call represents the values of one resulting object. Now let’s say we want to add more properties. We’d need to add a new name to the first call and a value to each array in the second call.

const tab_kinds = populate(
    "label",
    "color",
    "icon"
)(                                          
    [ "Summary", colors.midnight_pink, "💡" ],
    [ "Details", colors.navi_white, "🔬" ]
);

Given some whitespace, you could make it look like a table. That way, it’s much easier to spot an error in huge definitions.

You may have noticed that populate returns another function. There are a couple of reasons to keep it as two function calls. First, I like how two contiguous calls create an empty line that separates keys and values. Second, I like being able to create these sorts of generators for similar objects. For example, say we need to create those label objects for different components and want to store them in different arrays.
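As an aside, here is a minimal sketch of what such a generator module might look like. The file name and the "./populate" import path are assumptions for illustration; the module matches the populate_label import used in the next snippet.

// populate_label.js: a hypothetical module wrapping the populate helper above.
import populate from "./populate";

// Calling populate once fixes the property names; the returned function
// can be reused wherever label objects are needed.
const populate_label = populate("label");

export default populate_label;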

Let’s get back to the example and see what we gained with the populate function:

import React, { useState } from "react";
import populate_label from "./populate_label";

const tabs = populate_label(
    [ "Summary" ],
    [ "Details" ]
);

const [ summary_tab, details_tab ] = tabs;

function TabbedContainer({ summary_children, details_children }) {
    const [ active, setActive ] = useState(summary_tab);

    return (
        <div className="tabbed-container">
            <div className="tabs">
                {tabs.map((tab) => (
                    <label
                        key={tab.label}
                        className={tab === active ? "active" : ""}
                        onClick={() => {
                            setActive(tab);
                        }}
                    >
                        {tab.label}
                    </label>
                ))}
            </div>
            <div className="tabbed-content">
                {summary_tab === active && summary_children}
                {details_tab === active && details_children}
            </div>
        </div>
    );
}

export default TabbedContainer;

Using primitive objects makes writing UI logic straightforward.

Functions like populate make it less cumbersome to create these objects and make it easier to see what the data looks like.

Check That Radio

One of the alternatives to the approach above that I’ve encountered is to store the active state — whether the tab is selected or not — as a property of each tab object:

const tabs = [
    {
        label: "Summary",
        selected: true
    },
    {
        label: "Details",
        selected: false
    },
];

This way, we replace tab === active with tab.selected. That might seem like an improvement, but look at how we would have to change the selected tab:

function select_tab(tab, tabs) {
    tabs.forEach((tab) => tab.selected = false);
    tab.selected = true;
}

Because this is logic for a radio button, only a single element can be selected at a time. So, before setting an element to be selected, we first need to make sure that all the other elements are unselected. Yes, it’s silly to do it like that for an array with only two elements, but the real world is full of longer lists than this example.

With a primitive object, we need only a single variable that represents the selected state. I suggest setting that variable to one of the elements to make it the currently selected element, or to undefined if your implementation allows for no selection.
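A rough sketch of that idea, reusing the tab objects from the earlier snippets (the names are assumed from above):

// The entire selection state is one reference to a frozen tab object.
let active = summary_tab;

function select_tab(tab) {
    active = tab;                // no loop over the list, no flags to reset
}

function clear_selection() {
    active = undefined;          // only if "nothing selected" is allowed
}

// Checking is a plain identity comparison.
const is_selected = (tab) => tab === active;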

With multi-choice elements like checkboxes, the approach is almost the same. We replace the selection variable with an array. Each time an element is selected, we push it to that array, or in the case of Redux, we create a new array with that element present. To unselect it, we either splice it or filter out the element.

let selected = []; // Nothing is selected.

// Select.
selected = selected.concat([ to_be_selected ]);

// Unselect.
selected = selected.filter((element) => element !== to_be_unselected);

// Check if an element is selected.
selected.includes(element);

Again, this is straightforward and concise. You don’t need to remember whether the property is called selected or active; you use the object itself to determine that. As your program becomes more complex, these are the lines least likely to need refactoring.

In the end, it is not a list element’s job to decide whether it is selected or not, and it shouldn’t hold this information in its state. For example, what if the same element appears in several lists at once and is selected in one but not another?

Alternative To Strings

The last thing I’d like to touch on is an example of string usage I often encounter.

Text is a tempting trade-off for interoperability. You define something as a string and instantly get a human-readable representation of a concept. It’s like the instant energy rush you get from eating sugar. As with sugar, the best case is that you get nothing out of it in the long term; either way, it is unfulfilling, and you inevitably get hungry again.

The problem with strings is that they are for humans. It’s natural for us to distinguish things by giving them a name. But a program doesn’t understand the meaning of those names.

Most code editors and integrated development environments (IDEs) don’t understand strings. In other words, your tools won’t tell you whether or not the string is correct.

Your program only knows whether two strings are equal or not. And even then, telling whether strings are equal or unequal doesn’t necessarily provide an insight into whether or not any of those strings contain a typo.

Objects provide more ways to see that something is wrong before you run your program. Because you cannot have literals for primitive objects, you would have to get a reference from somewhere. For example, if it’s a variable and you make a typo, you get a reference error. There are tools that could catch that sort of thing before the file is saved.

If you were to get your objects from an array or another object, then JavaScript won’t give you an error when the property or an index does not exist. What you get is undefined, and that’s something you could check for. You have a single thing to check. With strings, you have surprises you might want to avoid, like when they’re empty.
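As a small illustration, assuming the populate helper from earlier is in scope:

const tab_kinds = populate("label")([ "Summary" ], [ "Details" ]);
const [ summary_tab ] = tab_kinds;

// A typo in a variable name fails loudly:
// summarry_tab;                  // ReferenceError: a linter flags the typo before you even run

// A missing index or property fails quietly, but always in the same way:
console.log(tab_kinds[5]);        // undefined, a single case to check for
console.log(summary_tab.color);   // undefined, the property was never defined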

Another use of strings I try to avoid is checking whether we got the object we want. Usually, this is done by storing a string in a property named id. Say we have a variable: to check whether it holds the object we want, we check whether the string in its id property matches the one we expect. To do that, we first need to check that the variable holds an object at all. If the variable does hold an object but the object lacks the id property, we get undefined, and we’re fine. However, if the variable holds one of the bottom values (null or undefined), we cannot ask for the property directly. Instead, we have to either make sure that only objects arrive at this point or do both checks in place.

const myID = "Oh, it's so unique";

function magnification(value) {
    if (value && typeof value === "object" && value.id === myID) {
        // do magic
    }
}

Here’s how we can do the same with primitive objects:

import data from "./the file where data is stored";

function magnification(value) {
    if (value === data.myObject) {
        // do magic
    }
}

The benefit of strings is that they are a single thing that could be used for internal identification and are immediately recognizable in logs. They sure are easy to use right out of the box, but they are not your friend as the complexity of a project increases.

I find there’s little benefit in relying on strings for anything other than output to the user. The interoperability that primitive objects lack compared to strings can be added gradually, without changing how you handle basic operations, like comparisons.

Wrapping Up

Working directly with objects frees us from the pitfalls that come with other methods. Our code becomes simpler because we write only what the program needs to do. By organizing code around primitive objects, we are less affected by the dynamic nature of JavaScript and some of its baggage. Primitive objects give us more guarantees and a greater degree of predictability.
