How to Add an Animated Background in WordPress (2 Methods)

Do you want to create an animated background for your WordPress website?

An animated background can add some visual appeal to your website. It will make your site more attractive and memorable, leaving a lasting impression on your visitors.

In this guide, we will show you how to add a particle background in WordPress using particle.js, a JavaScript animation library.

Why Add an Animated Background in WordPress?

Customizing your website background might seem unimportant. That said, it can actually shape visitors’ first impressions of your brand and affect their experience on your site.

An animated background can enhance your website’s visual appeal, making it more interactive and eye-catching to visitors. It gives the impression that your WordPress site uses a high-quality and innovative design.

Many websites also use animated effects when they want to celebrate a special occasion.

For example, you may see eCommerce stores adding animated snowflakes or falling Christmas trees on their web pages to create a festive atmosphere for the holiday season.

An example of a Christmas particle background

For more tips to get festive on your website, check out our guide on how to spread the holiday spirit on your WordPress site.

Some websites also use a preloader background animation on their website.

With this, visitors can get the sense that the site is loading, making them more likely to wait patiently for the web page elements to appear. You can read our article on adding a preloader background animation for more information.

In this guide, we will show you how to add an animated background using particle.js. If you want to find out what that is, just continue to the next section.

What Is particle.js?

particle.js is a JavaScript library that lets you create stunning visual effects with particles, which are small, graphical, animated elements.

These particles can be customized by size, color, shape, and movement. They also respond to user interactions, such as mouse movements or clicks, to add an extra layer of engagement to your website.

Now that you know what particle.js is, let’s see how you can use it to add an animated background in WordPress. There are two beginner-friendly methods, and we will cover both of them below.

Method 1: Adding an Animated Background With SeedProd (Premium)

The first method is to use SeedProd, which is the best WordPress page builder plugin on the market. It offers a built-in and highly customizable particle background feature.

With it, you can choose one of the particle animations that are already available or add a custom one yourself. It’s also possible to modify the number of particles, animation movements, and hover effects to suit your preferences.

For more information about SeedProd, you can check out our in-depth SeedProd review. We’ve covered everything, including the customization options, template and block choices, and third-party integrations.

The SeedProd page builder plugin for WordPress

In this guide, we will be using the premium version of SeedProd, as the particle background feature is available there.

To use SeedProd, you will need to install and activate the plugin first. You can find more details about this in our beginner’s guide on installing a WordPress plugin.

After that, simply copy-paste your license key to the plugin. Just go to your WordPress dashboard, navigate to SeedProd » Settings, and insert the license key in the appropriate field. Then, click ‘Verify Key.’

Adding a SeedProd license key to WordPress

If you want to customize your theme first before adding a particle background in WordPress, then you can follow our guide on how to easily create a custom theme with SeedProd.

Now, you need to open the drag-and-drop builder for the page you want to insert the particle background into. If you have created a theme with SeedProd, then you should already have some pages added in WordPress for you.

Next, simply go to Pages » All Pages and hover your cursor over a page, like a homepage, about page, or something else. Then, choose the ‘Edit with SeedProd’ button.

Clicking Edit with SeedProd on a WordPress page

If this option does not appear on your end, don’t worry.

Just click the ‘Edit’ button instead, and in the block editor, click the ‘Edit with SeedProd’ button.

Clicking Edit with SeedProd inside the WordPress block editor

You should now be inside the page builder of SeedProd.

Just hover your cursor over the page section where you want to add the particle background in WordPress and select it. You will know that you’ve selected a section if a purple border and toolbar appear at the top of it.

Once you have clicked on a section, the Section sidebar on the left should show up.

All you have to do now is switch to the ‘Advanced’ tab and toggle the ‘Enable Particle Background’ setting.

Enabling the particle background settings in SeedProd

There are several Particle Background settings you can configure.

One is Style, where you can choose any of the available animation effects, which are Polygon, Space, Snow, Snowflakes, Christmas, Halloween, and Custom.

We will talk more about adding a custom particle background animation later in the article.

Configuring the basic particle background settings in SeedProd

There is also Opacity, which controls how opaque the animation looks, and Flow Direction, which sets the direction that the particles should head toward.

For certain particle styles, you can customize their color, too.

However, for Christmas and Halloween, there are no color settings, as the particles are in the form of images.

What the Christmas particle background in SeedProd looks like

Below Color is ‘Advanced Settings.’ Enabling it lets you adjust the Number of Particles, Particle Size, and Move Speed, as well as toggle the Enable Hover Effect option.

With the last feature, the particles will move according to the direction of your mouse. Note that this won’t work when you view the website in the page builder area or if the content within the section takes up the entire space of that section.

The particle background's advanced settings in SeedProd

And that’s all you need to do.

Once you’ve finished customizing your WordPress particle background, you can click the ‘Save’ button at the top right corner to publish the changes. You can also choose the ‘Preview’ button to see what the particle background looks like.

Saving or previewing changes in SeedProd

Creating a Custom Particle Background for Your Website

If the animated effects available don’t suit your needs, then you can also create a custom one. What you should do is select the ‘Custom’ style in the Particle Background settings.

After that, click the link in the line ‘Please visit the link here and choose required attributes for particle.’

Selecting the Custom style and clicking the link provided in SeedProd to make a custom particle background

On this website, you can customize your desired particle design, its interactivity, and the background color.

Within the ‘particles’ setting, you can adjust the particle numbers, color, shape, size, opacity, lines that link the particles, and movement.

Editing the Particles settings in Vincent Garreau's particle.js website

Below that is ‘interactivity.’

This is where you can adjust how the particles behave when you hover over them and click on them.

The particle interactivity settings in Vincent Garreau's website

Finally, you have ‘page background (css).’ Here, you can change the background color of the particle animation and modify its size, position, and repetition.

If needed, you can upload a custom background image URL, too.

The particle page background settings in Vincent Garreau's website

Once you are done, you can click the ‘Download current config (json)’ button at the bottom.

This will download the particle background’s JSON code file, which you need to open using a text editor app. Keep the text editor window open as you continue to the next steps.
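For reference, the file you download is a standard particle.js configuration in JSON. An abridged example might look something like this; the exact keys and values will depend on the settings you chose, so treat it only as a rough illustration:

{
    "particles": {
        "number": { "value": 80 },
        "color": { "value": "#ffffff" },
        "shape": { "type": "circle" },
        "size": { "value": 3 },
        "line_linked": { "enable": true, "distance": 150 },
        "move": { "enable": true, "speed": 6 }
    },
    "interactivity": {
        "events": {
            "onhover": { "enable": true, "mode": "repulse" },
            "onclick": { "enable": true, "mode": "push" }
        }
    },
    "retina_detect": true
}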

Downloading the JSON file for the particle background

Now, let’s go back to the SeedProd page builder.

Navigate to the Particle Background menu within the Advanced settings again. Then, copy and paste the JSON code into the appropriate text box.

You should now see your particle background in the preview section.

Inserting the JSON code in the particle background settings of SeedProd

Click ‘Preview’ to see what the particle background looks like on the front end and ‘Save’ to finalize the changes.

Here’s an example of what the particle background may look like:

Example of an animated particle background made with SeedProd

Method 2: Adding an Animated Background With Particle Background WP (Free)

The second method is a free alternative to using SeedProd. For this, you will need the Particle Background WP plugin.

Like before, make sure to install and activate the Particle Background WP plugin. If you need some guidance, you can check out our guide on how to install a WordPress plugin.

After the plugin is active, go to Particle Background from the WordPress dashboard. Here, you will see several sections.

One is Deploy. It includes a shortcode for the finished particle background if you want to add it later to your pages or posts.

You can also tick the ‘Add to front page’ and/or ‘Add to blog page’ boxes to automatically insert the background into those pages.

Configuring the Particle Background WP Deploy settings

Scrolling down, you will see the Content section, which looks similar to the classic editor. This is where you can add some text on top of the particle background.

If you know HTML, then you can add some HTML code to customize the text. Alternatively, you can click ‘Add Media’ to insert images or files from the WordPress media library.

Inserting some text in the Particle Background WP plugin

Below are the Settings for the WordPress particle background animation. You can adjust the Particle Density, which controls how close and far the particles are, the particle’s Dot Color, and the Background Color. It’s also possible to make the background transparent.

One downside of this WordPress plugin is you can’t adjust the particle shape the same way you would with SeedProd. So, that’s something to consider if you are looking to use this plugin.

Configuring the Particle Background WP's animated particle background settings

And you are done!

Here’s an example of what the particle animated background looks like using this WordPress plugin.

An animated background example using Particle Background WP plugin

Do Animated Backgrounds Slow Down Websites?

If not done right, animated backgrounds can slow down your website. But there are ways to avoid this.

For particle backgrounds, the number of particles and how fast they move can affect how quickly your page loads. More particles and faster movement need more processing power, which can slow things down.

To fix this, you can try different settings for particle density and speed to find what works best for your website. During this process, you can run WordPress speed tests to see the effects.

It’s also a good idea to only use animated backgrounds on pages where they matter the most. You don’t need them everywhere, or they might get boring.

Lastly, to keep your website fast with a particle background, make sure to follow the best practices for website speed. You can learn more in our ultimate guide on making WordPress faster.

We hope this article has helped you learn how to add an animated particle background in WordPress. You may also want to check out our guide on how to add a YouTube video as fullscreen background in WordPress and our expert pick of the best WordPress drag-and-drop page builders.

If you liked this article, then please subscribe to our YouTube Channel for WordPress video tutorials. You can also find us on Twitter and Facebook.

Passkeys: A No-Frills Explainer On The Future Of Password-Less Authentication

Passkeys are a new way of authenticating applications and websites. Instead of having to remember a password, a third-party service provider (e.g., Google or Apple) generates and stores a cryptographic key pair that is bound to a website domain. Since you have access to the service provider, you have access to the keys, which you can then use to log in.

This cryptographic key pair contains both private and public keys that are used for authenticating messages. This kind of key pair is the basis of what is known as asymmetric or public-key cryptography.

Public and private key pair? Asymmetric cryptography? Like most modern technology, passkeys are described by esoteric verbiage and acronyms that make them difficult to discuss. That’s the point of this article. I want to put the complex terms aside and help illustrate how passkeys work, explain what they are effective at, and demonstrate what it looks like to work with them.

How Passkeys Work

Passkeys are cryptographic keys that rely on generating signatures. A signature is proof that a message is authentic. How so? It happens first by hashing (a fancy term for “obscuring”) the message and then creating a signature from that hash with your private key. The private key in the cryptographic key pair allows the signature to be generated, and the public key, which is shared with others, allows the service to verify that the message did, in fact, come from you.

In short, passkeys consist of two keys: a public and a private one. The private key creates a signature while the public key verifies it, and the communication between them is what grants you access to an account.

Here’s a quick way of generating a signing and verification key pair to authenticate a message using the SubtleCrypto API. While this is only part of how passkeys work, it does illustrate how the concept works cryptographically underneath the specification.

const message = new TextEncoder().encode("My message");

const keypair = await crypto.subtle.generateKey(
  { name: "ECDSA", namedCurve: "P-256" },
  true,
  [ 'sign', 'verify' ]
);

const signature = await crypto.subtle.sign(
  { name: "ECDSA", hash: "SHA-256" },
  keypair.privateKey,
  message
);

// Normally, someone else would be doing the verification using your public key
// but it's a bit easier to see it yourself this way
console.log(
  "Did my private key sign this message?",
  await crypto.subtle.verify(
    { name: "ECDSA", hash: "SHA-256" },
    keypair.publicKey,
    signature,
    message
  )
);

Notice the three parts pulling all of this together:

  1. Message: A message is constructed.
  2. Key pair: The public and private keys are generated. One key is used for the signature, and the other is set to do the verification.
  3. Signature: A signature is signed by the private key, verifying the message’s authenticity.

From there, a third party would use the public key to verify a signature created with the private key, confirming that the two form a matching key pair. We’ll get into the weeds of how the keys are generated and used in just a bit, but for now, this is some context as we continue to understand why passkeys can potentially erase the need for passwords.

Why Passkeys Can Replace Passwords

Since the responsibility of storing passkeys is removed and transferred to a third-party service provider, you only have to control the “parent” account in order to authenticate and gain access. This is a lot like requiring single sign-on (SSO) for an account via Google, Facebook, or LinkedIn, but instead, we use an account that has control of the passkey stored for each individual website.

For example, I can use my Google account to store passkeys for somerandomwebsite.com. That allows me to answer a challenge by signing it with that passkey’s private key and thus authenticate and log into somerandomwebsite.com.

For the non-tech savvy, this typically looks like a prompt that the user can click to log in. Since the credentials are tied to the domain name (somerandomwebsite.com), and passkeys created for a domain name are only offered to the user at login on that domain, the user can select which passkey they wish to use for access. This is usually only one login, but in some cases, you can create multiple logins for a single domain and then select which one you wish to use from there.

So, what’s the downside? Having to store additional cryptographic keys for each login and every site for which you have a passkey often requires more space than storing a password. However, I would argue that the security gains, the user experience from not having to remember a password, and the prevention of common phishing techniques more than offset the increased storage space.

How Passkeys Protect Us

Passkeys prevent a couple of security issues that are quite common, specifically leaked database credentials and phishing attacks.

Database Leaks

Have you ever shared a password with a friend or colleague by copying and pasting it for them in an email or text? That could lead to a security leak. So would a hack on a system that stores customer information, like passwords, which is then sold on dark marketplaces or made public. In many cases, it’s a weak set of credentials — like an email and password combination — that can be stolen with a fair amount of ease.

Passkey technology circumvents this because the website only stores a public key for an account, and as you may have guessed by the name, this key is expected to be made accessible to anyone who wants to use it. The public key is only used for verification purposes and, for the intended use case of passkeys, is effectively useless without the private key to go with it, as the two are generated as a pair. Therefore, those previously juicy database leaks are no longer useful, as they can no longer be used for cracking the password for your account. Cracking a similar private key would take millions of years at this point in time.

Phishing

Passwords rely on knowing what the password is for a given login: anyone with that same information has the same level of access to the same account as you do. There are sophisticated phishing sites that look like they’re by Microsoft or Google and will redirect you to the real provider after you attempt to log into their fake site. The damage is already done at that point; your credentials are captured, and hopefully, the same credentials weren’t being used on other sites, as now you’re compromised there as well.

A passkey, by contrast, is tied to a domain. You gain a new element of security: the fact that only you have the private key. Since the private key is not feasible to remember nor computationally easy to guess, we can guarantee that you are who you say you are (at least as long as your passkey provider is not compromised). So, that fake phishing site? It will not even show the passkey prompt because the domain is different, which completely mitigates these phishing attempts.

There are, of course, theoretical attacks that can make passkeys vulnerable, like someone compromising your DNS server to send you to a domain that now points to their fake site. That said, you probably have deeper issues to concern yourself with if it gets to that point.

Implementing Passkeys

At a high level, a few items are needed to start using passkeys, at least for the common sign-up and log-in process. You’ll need a temporary cache of some sort, such as redis or memcache, for storing temporary challenges that users can authenticate against, as well as a more permanent data store for storing user accounts and their public key information, which can be used to authenticate the user over the course of their account lifetime. These aren’t hard requirements but rather what’s typical of what would be developed for this kind of authentication process.
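To make the challenge piece more concrete, here is a minimal server-side sketch in TypeScript (Node). The in-memory Map stands in for Redis or Memcached, and names like challengeCache and createChallenge are hypothetical, not part of any passkey specification:

import { randomBytes } from "node:crypto";

// Hypothetical in-memory store; swap for Redis/Memcached with a TTL in production.
const challengeCache = new Map<string, { challenge: string; expiresAt: number }>();

function createChallenge(sessionId: string): string {
  // 32 random bytes, Base64-url encoded so they can travel as JSON and be
  // compared later against the challenge echoed back in clientDataJSON.
  const challenge = randomBytes(32).toString("base64url");
  challengeCache.set(sessionId, { challenge, expiresAt: Date.now() + 5 * 60_000 });
  return challenge;
}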

To understand passkeys properly, though, we want to work through a couple of concepts. The first concept is what is actually taking place when we generate a passkey. How are passkeys generated, and what are the underlying cryptographic primitives that are being used? The second concept is how passkeys are used to verify information and why that information can be trusted.

Generating Passkeys

A passkey involves an authenticator to generate the key pair. The authenticator can either be hardware or software. For example, it can be a hardware security key, the operating system’s Trusted Platform Module (TPM), or some other application. In the cases of Android or iOS, we can use the device’s secure enclave.

To connect to an authenticator, we use what’s called the Client to Authenticator Protocol (CTAP). CTAP allows us to connect to hardware over different connections through the browser. For example, we can connect via CTAP using an NFC, Bluetooth, or a USB connection. This is useful in cases where we want to log in on one device while another device contains our passkeys, as is the case on some operating systems that do not support passkeys at the time of writing.

Passkeys are built on another web API called WebAuthn. While the two are very similar, passkeys differ in that they allow for cloud syncing of the cryptographic keys and do not require knowledge of who the user is to log in, as that information is stored in a passkey with its Relying Party (RP) information. The two APIs otherwise share the same flows and cryptographic operations.

Storing Passkeys

Let’s look at an extremely high-level overview of how I’ve stored and kept track of passkeys in my demo repo. This is how the database is structured.

Basically, a users table has public_keys, which, in turn, contains information about the public key, as well as the public key itself.

From there, I’m caching certain information, including challenges to verify authenticity and data about the sessions in which the challenges take place.

Again, this is only a high-level look to give you a clearer idea of what information is stored and how it is stored.
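To give that slightly more shape, here is a rough TypeScript sketch of the stored records. The field names are hypothetical, and the actual schema in the demo repo may differ:

// One row per registered credential; kid (the credential id) is the primary key.
interface StoredPublicKey {
  kid: string;                 // credential id returned at registration
  userId: string;              // reference to the owning user
  pubkey: string;              // Base64-url encoded SPKI public key
  coseAlg: number;             // e.g., -7 for ES256
  attestationObject?: string;  // optional, kept in case more verification data is needed
}

interface StoredUser {
  id: string;
  username: string;
}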

Verifying Passkeys

There are several entities involved in the passkey process:

  1. The authenticator, which we previously mentioned, generates our key material.
  2. The client that triggers the passkey generation process via the navigator.credentials.create call.
  3. The Relying Party takes the resulting public key from that call and stores it to be used for subsequent verification.

In our case, you are the client and the Relying Party is the website server you are trying to sign up and log into. The authenticator can either be your mobile phone, a hardware key, or some other device capable of generating your cryptographic keys.

Passkeys are used in two phases: the attestation phase and the assertion phase. The attestation phase is likened to a registration that you perform when first signing up for a service. Instead of an email and password, we generate a passkey.

Assertion is similar to logging in to a service after we are registered, and instead of verifying with a username and password, we use the generated passkey to access the service.

Each phase initially requires a random challenge generated by the Relying Party, which is then signed by the authenticator before the client sends the signature back to the Relying Party to prove account ownership.

Browser API Usage

We’ll be looking at how the browser constructs and supplies information for passkeys so that you can store and utilize it for your login process. First, we’ll start with the attestation phase and then the assertion phase.

Attest To It

The following shows how to create a new passkey using the navigator.credentials.create API. From it, we receive an AuthenticatorAttestationResponse, and we want to send portions of that response to the Relying Party for storage.

const { challenge } = await (await fetch("/attestation/generate")).json(); // Server call mock to get a random challenge

const options = {
 // Our challenge should be a base64-url encoded string
 challenge: new TextEncoder().encode(challenge),
 rp: {
  id: window.location.host,
  name: document.title,
 },
 user: {
  id: new TextEncoder().encode("my-user-id"),
  name: 'John',
  displayName: 'John Smith',
 },
 pubKeyCredParams: [ // See COSE algorithms for more: https://www.iana.org/assignments/cose/cose.xhtml#algorithms
  {
   type: 'public-key',
   alg: -7, // ES256
  },
  {
   type: 'public-key',
   alg: -256, // RS256
  },
  {
   type: 'public-key',
   alg: -37, // PS256
  },
 ],
 authenticatorSelection: {
  userVerification: 'preferred', // Do you want to use biometrics or a pin?
  residentKey: 'required', // Create a resident key e.g. passkey
 },
 attestation: 'indirect', // indirect, direct, or none
 timeout: 60_000,
};

// Create the credential through the Authenticator
const credential = await navigator.credentials.create({
 publicKey: options
});

// Our main attestation response. See: https://developer.mozilla.org/en-US/docs/Web/API/AuthenticatorAttestationResponse
const attestation = credential.response as AuthenticatorAttestationResponse;

// Now send this information off to the Relying Party
// An unencoded example payload with most of the useful information
const payload = {
 kid: credential.id,
 clientDataJSON: attestation.clientDataJSON,
 attestationObject: attestation.attestationObject,
 pubkey: attestation.getPublicKey(),
 coseAlg: attestation.getPublicKeyAlgorithm(),
};

The AuthenticatorAttestationResponse contains the clientDataJSON as well as the attestationObject. We also have a couple of useful methods that save us from trying to retrieve the public key from the attestationObject and retrieving the COSE algorithm of the public key: getPublicKey and getPublicKeyAlgorithm.

Let’s dig into these pieces a little further.

Parsing The Attestation clientDataJSON

The clientDataJSON object is composed of a few fields we need. We can convert it to a workable object by decoding it and then running it through JSON.parse.

type DecodedClientDataJSON = {
 challenge: string,
 origin: string,
 type: string
};

const decoded: DecodedClientDataJSON = JSON.parse(new TextDecoder().decode(attestation.clientDataJSON));
const {
 challenge,
 origin,
 type
} = decoded;

Now we have a few fields to check against: challenge, origin, type.

Our challenge is the Base64-url encoded version of the challenge the server generated earlier. The origin is the host (e.g., https://my.passkeys.com) of the server we used to generate the passkey. Meanwhile, the type is webauthn.create. The server should verify that all the values are expected when parsing the clientDataJSON.
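Here is a minimal sketch of those checks on the server side. Both expectedChallenge (the Base64-url encoded challenge the server issued and cached) and expectedOrigin are hypothetical values your server would already have on hand:

function verifyClientData(
  decoded: DecodedClientDataJSON,
  expectedChallenge: string,
  expectedOrigin: string, // e.g., "https://my.passkeys.com"
): void {
  if (decoded.type !== "webauthn.create") throw new Error("Unexpected type");
  if (decoded.origin !== expectedOrigin) throw new Error("Unexpected origin");
  if (decoded.challenge !== expectedChallenge) throw new Error("Challenge mismatch");
}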

Decoding The attestationObject

The attestationObject is a CBOR encoded object. We need to use a CBOR decoder to actually see what it contains. We can use a package like cbor-x for that.

import { decode } from 'cbor-x/decode';

enum DecodedAttestationObjectFormat {
  none = 'none',
  packed = 'packed',
}
type DecodedAttestationObjectAttStmt = {
  x5c?: Uint8Array[];
  sig?: Uint8Array;
};

type DecodedAttestationObject = {
  fmt: DecodedAttestationObjectFormat;
  authData: Uint8Array;
  attStmt: DecodedAttestationObjectAttStmt;
};

const decodedAttestationObject: DecodedAttestationObject = decode(
 new Uint8Array(attestation.attestationObject)
);

const {
 fmt,
 authData,
 attStmt,
} = decodedAttestationObject;

fmt will often evaluate to "none" here for passkeys. Other types of fmt are generated through other types of authenticators.

Accessing authData

The authData is a buffer of values with the following structure:

  • rpIdHash (32 bytes): This is the SHA-256 hash of the origin, e.g., my.passkeys.com.
  • flags (1 byte): Flags determine multiple pieces of information (specification).
  • signCount (4 bytes): This should always be 0000 for passkeys.
  • attestedCredentialData (variable length): This will contain credential data if it’s available in a COSE key format.
  • extensions (variable length): These are any optional extensions for authentication.
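As a rough illustration, the fixed-length fields can be read straight out of the authData Uint8Array from the decoded attestation object above, following the byte layout just listed:

const rpIdHash = authData.slice(0, 32);          // bytes 0-31
const flags = authData[32];                      // byte 32
const signCount = new DataView(
  authData.buffer,
  authData.byteOffset + 33,
  4
).getUint32(0, false);                           // bytes 33-36, big-endian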

It is recommended to use the getPublicKey method here instead of manually retrieving the attestedCredentialData.

A Note About The attStmt Object

This is often an empty object for passkeys. However, in other cases of a packed format, which includes the sig, we will need to perform some authentication to verify the sig. This is out of the scope of this article, as it often requires a hardware key or some other type of device-based login.

Retrieving The Encoded Public Key

The getPublicKey method retrieves the Subject Public Key Info (SPKI) encoded version of the public key, which is different from the COSE key format (more on that next) found within the attestedCredentialData of the decoded authData. The SPKI format has the benefit of being compatible with the Web Crypto importKey function, making it easier to verify assertion signatures in the next phase.

// Example of importing attestation public key directly into Web Crypto
const pubkey = await crypto.subtle.importKey(
  'spki',
  attestation.getPublicKey(),
  { name: "ECDSA", namedCurve: "P-256" },
  true,
  ['verify']
);

Generating Keys With COSE Algorithms

The algorithms that can be used to generate cryptographic material for a passkey are specified by their COSE Algorithm. For passkeys generated for the web, we want to be able to generate keys using the following algorithms, as they are supported natively in Web Crypto. Personally, I prefer ECDSA-based algorithms since the key sizes are quite a bit smaller than RSA keys.

The COSE algorithms are declared in the pubKeyCredParams array of the options passed to navigator.credentials.create. We can retrieve the COSE algorithm of the resulting credential with the getPublicKeyAlgorithm method on the AuthenticatorAttestationResponse. For example, if getPublicKeyAlgorithm returned -7, we’d know that the key used the ES256 algorithm.

  • ES512 (-36): ECDSA w/ SHA-512
  • ES384 (-35): ECDSA w/ SHA-384
  • ES256 (-7): ECDSA w/ SHA-256
  • RS512 (-259): RSASSA-PKCS1-v1_5 using SHA-512
  • RS384 (-258): RSASSA-PKCS1-v1_5 using SHA-384
  • RS256 (-257): RSASSA-PKCS1-v1_5 using SHA-256
  • PS512 (-39): RSASSA-PSS w/ SHA-512
  • PS384 (-38): RSASSA-PSS w/ SHA-384
  • PS256 (-37): RSASSA-PSS w/ SHA-256
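As a hedged illustration of how the COSE identifier feeds into verification, the ECDSA entries above can be mapped to Web Crypto import parameters like this (the RSA variants would instead use { name: "RSASSA-PKCS1-v1_5" } or { name: "RSA-PSS" } together with a hash):

// Mapping from COSE algorithm identifier to Web Crypto ECDSA import parameters.
const coseToWebCrypto: Record<number, EcKeyImportParams> = {
  [-7]:  { name: "ECDSA", namedCurve: "P-256" }, // ES256
  [-35]: { name: "ECDSA", namedCurve: "P-384" }, // ES384
  [-36]: { name: "ECDSA", namedCurve: "P-521" }, // ES512
};

const importParams = coseToWebCrypto[attestation.getPublicKeyAlgorithm()];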

Responding To The Attestation Payload

I want to show you an example of a response we would send to the server for registration. In short, the safeByteEncode function is used to change the buffers into Base64-url encoded strings.

type AttestationCredentialPayload = {
  kid: string;
  clientDataJSON: string;
  attestationObject: string;
  pubkey: string;
  coseAlg: number;
};

const payload: AttestationCredentialPayload = {
  kid: credential.id,
  clientDataJSON: safeByteEncode(attestation.clientDataJSON),
  attestationObject: safeByteEncode(attestation.attestationObject),
  pubkey: safeByteEncode(attestation.getPublicKey() as ArrayBuffer),
  coseAlg: attestation.getPublicKeyAlgorithm(),
};

The credential id (kid) should always be captured to look up the user’s keys, as it will be the primary key in the public_keys table.

From there:

  1. The server would check the clientDataJSON to ensure the same challenge is used.
  2. The origin is checked, and the type is confirmed to be webauthn.create.
  3. We check the attestationObject to ensure it has an fmt of none, the rpIdHash of the authData, as well as any flags and the signCount.

Optionally, we could check to see if the attestationObject.attStmt has a sig and verify the public key against it, but that’s for other types of WebAuthn flows we won’t go into.

We should store the public key and the COSE algorithm in the database at the very least. It is also beneficial to store the attestationObject in case we require more information for verification. If you support other types of WebAuthn logins, the signCount is incremented on every login attempt; otherwise, it should always be 0000 for a passkey.

Asserting Yourself

Now we have to retrieve a stored passkey using the navigator.credentials.get API. From it, we receive the AuthenticatorAssertionResponse, which we want to send portions of to the Relying Party for verification.

const { challenge } = await (await fetch("/assertion/generate")).json(); // Server call mock to get a random challenge

const options = {
  challenge: new TextEncoder().encode(challenge),
  rpId: window.location.host,
  timeout: 60_000,
};

// Sign the challenge with our private key via the Authenticator
const credential = await navigator.credentials.get({
  publicKey: options,
  mediation: 'optional',
});

// Our main assertion response. See: https://developer.mozilla.org/en-US/docs/Web/API/AuthenticatorAssertionResponse
const assertion = credential.response as AuthenticatorAssertionResponse;

// Now send this information off to the Relying Party
// An example payload with most of the useful information
const payload = {
  kid: credential.id,
  clientDataJSON: safeByteEncode(assertion.clientDataJSON),
  authenticatorData: safeByteEncode(assertion.authenticatorData),
  signature: safeByteEncode(assertion.signature),
};

The AuthenticatorAssertionResponse again has the clientDataJSON, and now the authenticatorData. We also have the signature that needs to be verified with the stored public key we captured in the attestation phase.

Decoding The Assertion clientDataJSON

The assertion clientDataJSON is very similar to the attestation version. We again have the challenge, origin, and type. Everything is the same, except the type is now webauthn.get.

type DecodedClientDataJSON = {
  challenge: string,
  origin: string,
  type: string
};

const decoded: DecodedClientDataJSON = JSON.parse(new TextDecoder().decode(assertion.clientDataJSON));
const {
  challenge,
  origin,
  type
} = decoded;

Understanding The authenticatorData

The authenticatorData is similar to the previous attestationObject.authData, except we no longer have the public key included (e.g., the attestedCredentialData), nor any extensions.

  • rpIdHash (32 bytes): This is a SHA-256 hash of the origin, e.g., my.passkeys.com.
  • flags (1 byte): Flags that determine multiple pieces of information (specification).
  • signCount (4 bytes): This should always be 0000 for passkeys, just as it should be for authData.

Verifying The signature

The signature is what we need to verify that the user trying to log in has the private key. It is the result of signing the concatenation of the authenticatorData and clientDataHash (i.e., the SHA-256 version of clientDataJSON) with the private key.

To verify with the public key, we need to also concatenate the authenticatorData and clientDataHash. If the verification returns true, we know that the user is who they say they are, and we can let them authenticate into the application.

Here’s an example of how this is calculated:

const clientDataHash = await crypto.subtle.digest(
  'SHA-256',
  assertion.clientDataJSON
);
// For concatBuffer see: https://github.com/nealfennimore/passkeys/blob/main/src/utils.ts#L31
const data = concatBuffer(
  assertion.authenticatorData,
  clientDataHash
);

// NOTE: the signature from the assertion is in ASN.1 DER encoding. To get it working with Web Crypto,
// we need to transform it into r|s encoding, which is specific to ECDSA algorithms.
//
// For fromAsn1DERtoRSSignature see: https://github.com/nealfennimore/passkeys/blob/main/src/crypto.ts#L60
const isVerified = await crypto.subtle.verify(
  { name: 'ECDSA', hash: 'SHA-256' },
  pubkey,
  fromAsn1DERtoRSSignature(signature, 256),
  data
);

Sending The Assertion Payload

Finally, we get to send a response to the server with the assertion for logging into the application.

type AssertionCredentialPayload = {
  kid: string;
  clientDataJSON: string;
  authenticatorData: string;
  signature: string;
};

const payload: AssertionCredentialPayload = {
  kid: credential.id,
  clientDataJSON: safeByteEncode(assertion.clientDataJSON),
  authenticatorData: safeByteEncode(assertion.authenticatorData),
  signature: safeByteEncode(assertion.signature),
};

To complete the assertion phase, we first look up the stored public key by its kid (the credential id).

Next, we verify the following:

  • clientDataJSON again to ensure the same challenge is used,
  • The origin is the same, and
  • That the type is webauthn.get.

The authenticatorData can be used to check the rpIdHash, flags, and the signCount one more time. Finally, we take the signature and ensure that the stored public key can be used to verify that the signature is valid.

At this point, if all went well, the server should have verified all the information and allowed you to access your account! Congrats — you logged in with passkeys!

No More Passwords?

Do passkeys mean the end of passwords? Probably not… at least for a while anyway. Passwords will live on. However, there’s hope that more and more of the industry will begin to use passkeys. You can already find them implemented in many of the applications you use every day.

Passkeys are not the only implementation to rely on cryptographic means of authentication. A notable example is SQRL (pronounced “squirrel”). The industry as a whole, however, has decided to move forward with passkeys.

Hopefully, this article demystified some of the internal workings of passkeys. The industry as a whole is going to be using passkeys more and more, so it’s important to at least get acclimated. With all the security gains that passkeys provide and the fact that they’re resistant to phishing attacks, we can at least be more at ease browsing the internet when using them.

Knip: An Automated Tool For Finding Unused Files, Exports, And Dependencies

Let’s face it. Most of us favor creating new features and user interfaces over maintenance tasks such as code cleanup, project configuration, and dependency management.

Lots of the boring and repetitive things, like formatting and linting, are mostly solved problems with tools like Prettier, ESLint, and TypeScript.

Yet there’s another area that often doesn’t receive much attention: handling unused files, exports, and dependencies. This is especially true as projects grow over time, negatively impacting maintainability as well as our enthusiasm for it. These unused artifacts often go unnoticed because they’re typically hard to find.

Where do you start looking for unused things? I bet you’ve done a global search for a dependency to find out whether it’s still used. Did you know you can right-click a file and “Find File References” in VS Code? These are the things you shouldn’t have to do. They’re tedious tasks that can — and should — be automated.

There’s Got to Be a Better Way

When I was working in an expanding codebase, the number of unused files and exports in it kept growing. Tracking them became more and more difficult, so I started looking for automated solutions to help.

I was happy I found some existing tools, such as ts-prune. After some initial runs, I discovered that our codebase required a lot of configuration and still produced many false positives. I researched how other codebases try to stay tidy and realized there’s a huge opportunity in this space.

Along the road, I also found depcheck, which deals with a lot of complexity and customizations in JavaScript projects. And then there’s unimported, which also does a notable job of finding dangling files and unused dependencies.

But none of these tools handled my project very well. The fact that I would have to use a combination of them wouldn’t be a showstopper, but I couldn’t get the configurations right for handling the project’s customizations without reporting too many false positives or ignoring too much of the project and leaving a large blind spot.

In the end, tools in this area are only used when they are fully automated and able to cover the whole project. It also didn’t help that none of the existing tools support monorepos, a structure for repositories that has recently gained widespread popularity. Last but not least, both ts-prune and depcheck are in maintenance mode, meaning they would likely never support monorepos.

Working Towards A Solution

I’m motivated to automate things and keep projects in solid shape.

Great developer experience (DX) keeps developers happy and productive.

So I started developing an internal tool for my project to see if I could do better. It started as an internal script that handled only the specifics of that particular repository, and throughout the journey, I kept realizing what a blessing and a curse this is. Automating boring stuff is a winning concept, but getting it right is such a difficult challenge — but one that I was willing to try.

Along the road, I also got more and more convinced that a single tool to find all categories of unused things was a good idea: each of them requires reading and parsing source files, and since this is a relatively expensive task, I think the efficient path is to track them in one go.

Introducing Knip

So, what is Knip? I think it’s best categorized as a project linter. It picks up where ESLint ends. Where ESLint handles individual files, Knip lints the repository as a whole. It connects all the dots — in terms of files, imports, exports, and dependencies — and reports what is unused.

Roughly speaking, there are two ways to look at Knip. In greenfield projects, it’s great to install Knip and let it grow with the project. Keep the repository tidy and run Knip manually or in an automated continuous integration (CI) environment.

Another way to look at Knip is in larger projects. Knip is a great companion for housekeeping as far as identifying unused files, exports, and dependencies. There may be false positives initially, but it’s much easier to skip them compared to finding needles in a haystack on your own.

Additionally, during or after large refactoring efforts, Knip can be a great assistant for cleaning things up. It’s only human to miss or forget things that are no longer used, even more so when the things are not close to the refactoring work.

Knip works with traditional JavaScript and modern TypeScript, has built-in support for monorepos, works with any package manager, and has many features, small and large, that help you maintain your projects.

How Knip Works

Knip starts with one or more entry files and calculates the dependency tree, helping it know all the files that are used while marking the remaining files as unused.

Meanwhile, Knip keeps track of imported external dependencies and compares them against the dependencies in package.json to report both unused dependencies and dependencies that are used but not listed. It also keeps track of internal imports and exports to report unused exports.
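To give a feel for this, a minimal configuration sketch could be a knip.json file at the root of the project. The entry and project options shown here reflect Knip’s documented configuration at the time of writing, so double-check the docs for your version:

{
  "entry": ["src/index.ts"],
  "project": ["src/**/*.ts"]
}

From there, running npx knip reports unused files, exports, and dependencies for that project.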

Let me give you a better look under the hood.

Production Mode

In its default mode, Knip analyzes the whole project, including both production and non-production files, such as tests, configurations, Storybook stories, devDependencies, and so on.

But this mode might miss opportunities for trimming. For instance, files or exports imported only by tests are normally not reported as unused. However, when the export is not used anywhere else, you can delete both the export and its tests!

This is why Knip has a production mode. This mode is stricter than the default one: Knip will use only production code as entry files and only consider dependencies (excluding devDependencies).

Scripts

Many projects use command line tools that come with dependencies. For instance, after installing ESLint, you can use eslint, and Angular makes ng available in "scripts" in package.json. Knip connects dependencies with binaries and tells you when they are unused or missing.

But there’s more. CI environments, like Azure and GitHub Actions, are configured with YAML files that may also use the same command line tools.

And finally, custom scripts may use command line tools by spawning child processes, either using native Node.js APIs or with libraries like zx or execa.

Knip has a growing number of such detections to keep your projects neat and tidy, refactor after refactor. Yet what is so interesting about those scripts? They may be complicated to parse, making it difficult to extract their dependencies. Let’s look at an example:

node -r @scope/package/register --experimental-loader ts-node/esm/transpile-only ./dir

Here, we can find that @scope/package and ts-node are dependencies of this script and that ./dir resolves to the file dir/index.ts, which is therefore in use. Honestly, this is just the tip of the iceberg. I promise a few regular expressions won’t be enough!

Now, if this script is updated or removed, Knip will tell you if any dependency or file is no longer used. On the other hand, Knip will also tell you if a dependency is used but not listed explicitly in package.json. (You shouldn’t be relying on transitive dependencies anyway, right?)

Plugins

There’s an abundance of tooling available in the JavaScript ecosystem. And each tool has its configurations. Some allow YAML or JSON, and some allow (or require) you to write configurations in JavaScript or even TypeScript.

Since there’s no generic way to handle the myriad of variations, Knip supports plugins. Knip has plugins for tools that may reference dependencies that should be listed in package.json.

What matters to plugins is how dependencies are referenced. They might be imported in JavaScript like any other source file, and Knip can also parse them as such. Yet, they’re often referenced as strings, much the same as what ESLint does with the extends and plugins options. Dependencies can be specified in implicit ways, such as prettier, which means the eslint-config-prettier dependency. Storybook has the builder: "webpack5" configuration that requires the @storybook/builder-webpack5 and @storybook/manager-webpack5 dependencies.
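To illustrate with a hypothetical .eslintrc.json: by ESLint’s own naming conventions, the short strings below resolve to the eslint-config-prettier and eslint-plugin-import packages, which is exactly the kind of indirection a plugin needs to understand:

{
  "extends": ["prettier"],
  "plugins": ["import"]
}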

Compilers

Knip parses all sorts of JavaScript and TypeScript files, including ES modules and CommonJS modules.

But some frameworks work with non-standard files, such as Vue, Svelte, MDX, and Astro. Knip allows you to configure compilers to include these types of files so they can also be included in the analysis.

Performance

Until version 2, Knip used ts-morph to calculate the dependency graph (and much more). This worked great initially because it abstracted away the TypeScript back end.

But to support monorepos and compilers while maintaining good performance, I realized I had to work with the TypeScript back end directly. This required a lot of effort, but it does provide a lot more flexibility, as well as opportunities for more optimizations. For example, Knip can traverse the abstract syntax tree (AST) of any file only once to find everything it needs.

Configuration Hints

When Knip reports a false positive, you can configure it to ignore that dependency. Then, when Knip no longer reports the false positive, it will report that the configuration can be updated, and you can remove the ignored items.

Reporters

Knip comes with a default reporter and has a few additional reporters. There’s a compact reporter, a JSON reporter, and one that uses the CODEOWNERS file to show the code owner(s) with each reported issue.

Knip also allows you to define a custom reporter. Knip will call your function with the results of the analysis. You can then do anything with it, like writing the results to a file or sending it to a service to track progress over time.

What’s Next For Knip?

Naturally, I’m not done working on Knip. These are a few of the things I have in mind.

More Plugins = Less Configuration

My hope with Knip is that as more and more people start to use Knip, they will report false positives: something is reported as unused but is in use.

A false positive usually has one of the following three causes:

  • A framework or tool is used that Knip does not yet have a plugin for,
  • The configuration might need improvement; for instance, add an entry file that Knip didn’t know about or ignore something with a hard-to-find reference, or
  • Knip has a bug.

As Knip becomes better with bug fixes and plugins, more projects benefit because they need less configuration. Maintenance will become more enjoyable and easier for everyone!

When you are using Knip and enjoying it, don’t let false positives scare you away, but report them instead. Please provide a reproducible test case, and I’m sure we can work it out. Additionally, I’ve written a complete guide detailing how to write a new plugin for Knip.

Auto-Fix

Much like ESLint, Knip will have a --fix option to automatically fix all sorts of issues. The idea is that this can automatically take care of things such as:

  • Remove the export keyword for unused exports,
  • Uninstall unused dependencies and install unlisted dependencies, and
  • Delete unused files.

Given enough interest from the community, I’m excited to start building this feature!

Integrations

Integrations such as a VS Code plugin or a GitHub Action sound like cool opportunities. I’m happy to collaborate and see where we can take it.

Demo Knip

I think the best way to understand Knip is to get your hands on it. So, I’ve created a CodeSandbox template that you can fork and spin up Knip in a new terminal with npm run knip.

Conclusion

Knip aims to help you maintain your JavaScript or TypeScript repositories, and I’m very happy lots of projects are already cut by Knip daily.

There is lots of room for improvement, bugs to fix, documentation to improve, and plugins to add! Your ideas and contributions are absolutely welcome — and encouraged — over at github.com/webpro/knip.

How To Build Server-Side Rendered (SSR) Svelte Apps With SvelteKit

I’m not interested in starting a turf war between server-side rendering and client-side rendering. The fact is that SvelteKit supports both, which is one of the many perks it offers right out of the box. The server-side rendering paradigm is not a new concept. It means that the client (i.e., the user’s browser) sends a request to the server, and the server responds with the data and markup for that particular page, which is then rendered in the user’s browser.

To build an SSR app using the primary Svelte framework, you would need to maintain two codebases: one with the server running in Node, along with some templating engine, like Handlebars or Mustache. The other is a client-side Svelte app that fetches data from the server.

The approach we’re looking at in the above paragraph isn’t without disadvantages. Two that immediately come to mind that I’m sure you thought of after reading that last paragraph:

  1. The application is more complex because we’re effectively maintaining two systems.
  2. Sharing logic and data between the client and server code is more difficult than fetching data from an API on the client side.

SvelteKit Simplifies The Process

SvelteKit streamlines things by handling the complexity of the server and the client on its own, allowing you to focus squarely on developing the app. There’s no need to maintain two applications or do a tightrope walk sharing data between the two.

Here’s how:

  • Each route can have a +page.server.ts file that’s used to run code on the server and return data seamlessly to your client code (see the sketch after this list).
  • If you use TypeScript, SvelteKit auto-generates types that are shared between the client and server.
  • SvelteKit provides an option to select your rendering approach based on the route. You can choose SSR for some routes and CSR for others, like maybe your admin page routes.
  • SvelteKit also supports routing based on a file system, making it much easier to define new routes than having to hand-roll them yourself.
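Here is a minimal sketch of what that first point looks like in practice, using SvelteKit’s generated types. The data returned from load becomes the data prop of the matching +page.svelte; the job board we build below relies on this same mechanism to read its JSON data:

// src/routes/+page.server.ts
import type { PageServerLoad } from './$types';

export const load: PageServerLoad = async () => {
  // Runs only on the server; the returned object is serialized
  // and handed to +page.svelte as its `data` prop.
  return {
    jobs: [{ job_title: 'Job 1' }],
  };
};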

SvelteKit In Action: Job Board

I want to show you how streamlined the SvelteKit approach is compared to the traditional way we have been dancing between the SSR and CSR worlds, and I think there’s no better way to do that than using a real-world example. So, what we’re going to do is build a job board — basically a list of job items — while detailing SvelteKit’s role in the application.

When we’re done, what we’ll have is an app where SvelteKit fetches the data from a JSON file and renders it on the server side. We’ll go step by step.

First, Initialize The SvelteKit Project

The official SvelteKit docs already do a great job of explaining how to set up a new project. But, in general, we start any SvelteKit project in the command line with this command:

npm create svelte@latest job-listing-ssr-sveltekit

This command creates a new project folder called job-listing-ssr-sveltekit on your machine and initializes Svelte and SvelteKit for us to use. But we don’t stop there — we get prompted with a few options to configure the project:

  1. First, we select a SvelteKit template. We are going to stick to using the basic Skeleton Project template.
  2. Next, we can enable type-checking if you’re into that. Type-checking provides assistance when writing code by watching for bugs in the app’s data types. I’m going to use the “TypeScript syntax” option, but you aren’t required to use it and can choose the “None” option instead.

There are additional options from there that are more a matter of personal preference, such as ESLint, Prettier, Playwright, and Vitest.

If you are familiar with any of these, you can add them to the project. We are going to keep it simple and not select anything from the list since what I really want to show off is the app architecture and how everything works together to get data rendered by the app.

Now that we have the template for our project ready for us, let’s do the last bit of setup by installing the dependencies for Svelte and SvelteKit to do their thing:

cd job-listing-ssr-sveltekit
npm install

There’s something interesting going on under the hood that I think is worth calling out:

Is SvelteKit A Dependency?

If you are new to Svelte or SvelteKit, you may be pleasantly surprised when you open the project’s package.json file. Notice that SvelteKit is listed in the devDependencies section. The reason for that is that Svelte (and, in turn, SvelteKit) acts like a compiler that takes all your .js and .svelte files and converts them into optimized JavaScript code that is rendered in the browser.

This means the Svelte package is actually unnecessary when we deploy the app to the server. That’s why it is not listed as a dependency in the package file. The final bundle of our job board app is going to contain just the app’s code, which means the size of the bundle is way smaller and loads faster than the regular Svelte-based architecture.

Look at how tiny and readable the package.json file is!

{
    "name": "job-listing-ssr-sveltekit",
    "version": "0.0.1",
    "private": true,
    "scripts": {
        "dev": "vite dev",
        "build": "vite build",
        "preview": "vite preview",
        "check": "svelte-kit sync && svelte-check --tsconfig ./tsconfig.json",
        "check:watch": "svelte-kit sync && svelte-check --tsconfig ./tsconfig.json --watch"
    },
    "devDependencies": {
        "@sveltejs/adapter-auto": "^2.0.0",
        "@sveltejs/kit": "^1.5.0",
        "svelte": "^3.54.0",
        "svelte-check": "^3.0.1",
        "tslib": "^2.4.1",
        "typescript": "^4.9.3",
        "vite": "^4.0.0"
    },
    "type": "module"
}

I really find this refreshing, and I hope you do, too. Seeing a big list of packages tends to make me nervous because all those moving pieces make the entirety of the app architecture feel brittle and vulnerable. The concise SvelteKit output, by contrast, gives me much more confidence.

Creating The Data

We need data coming from somewhere that can inform the app on what needs to be rendered. I mentioned earlier that we would be placing data in and pulling it from a JSON file. That’s still the plan.

As far as the structured data goes, what we need to define are properties for a job board item. Depending on your exact needs, there could be a lot of fields or just a few. I’m going to proceed with the following:

  • Job title,
  • Job description,
  • Company Name,
  • Compensation.

Here’s how that looks in JSON:

[{
    "job_title": "Job 1",
    "job_description": "Very good job",
    "company_name": "ABC Software Company",
    "compensation_per_year": "$40000 per year"
}, {
    "job_title": "Job 2",
    "job_description": "Better job",
    "company_name": "XYZ Software Company",
    "compensation_per_year": "$60000 per year"
}]

Now that we’ve defined some data let’s open up the main project folder. There’s a sub-directory in there called src. We can open that and create a new folder called data and add the JSON file we just made to it. We will come back to the JSON file when we work on fetching the data for the job board.

Adding TypeScript Model

Again, TypeScript is completely optional. But since it’s so widely used, I figure it’s worth showing how to set it up in a SvelteKit framework.

We start by creating a new models.ts file in the project’s src folder. This is the file where we define all of the data types that can be imported and used by other components and pages, and TypeScript will check them for us.

Here’s the code for the models.ts file:

export type JobsList = JobItem[]

export interface JobItem {
  job_title: string
  job_description: string
  company_name: string
  compensation_per_year: string
}

There are two data types defined in the code:

  1. JobsList contains the array of job items.
  2. JobItem contains the job details (or properties) that we defined earlier.

The Main Job Board Page

We’ll start by developing the code for the main job board page that renders a list of available job items. Open the src/routes/+page.svelte file, which is the main job board. Notice how it exists in the /src/routes folder? That’s the file-based routing system I referred to earlier when talking about the benefits of SvelteKit. The name of the file is automatically turned into a route. That’s a real DX gem, as it saves us from having to code and maintain the routes ourselves.

While +page.svelte is indeed the main page of the app, it’s also the template for any generic page in the app. But we can create a separation of concerns by adding more structure in the /src/routes directory with more folders and sub-folders that result in different paths. SvelteKit’s docs have all the information you need for routing and routing conventions.
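For instance, this is roughly how files map to routes under SvelteKit’s conventions (the /jobs paths are hypothetical examples, not routes we are building in this project):

src/routes/+page.svelte            →  /
src/routes/jobs/+page.svelte       →  /jobs
src/routes/jobs/[id]/+page.svelte  →  /jobs/123 (a dynamic route)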

This is the markup and styles we’ll use for the main job board:

<div class="home-page">
  <h1>Job Listing Home page</h1>
</div>

<style>
  .home-page {
    padding: 2rem 4rem;
    display: flex;
    align-items: center;
    flex-direction: column;
    justify-content: center;
  }
</style>

Yep, this is super simple. All we’re adding to the page is an <h1> tag for the page title and some light CSS styling to make sure the content is centered and has some nice padding for legibility. I don’t want to muddy the waters of this example with a bunch of opinionated markup and styles that would otherwise be a distraction from the app architecture.

Run The App

We’re at a point now where we can run the app using the following in the command line:

npm run dev -- --open

The -- --open argument automatically opens the job board page in the browser. That’s just a small but nice convenience. You can also navigate to the URL that the command line outputs.

The Job Item Component

OK, so we have a main job board page that will be used to list job items from the data fetched by the app. What we need is a new component specifically for the jobs themselves. Otherwise, all we have is a bunch of data with no instructions for how it is rendered.

Let’s take care of that by opening the src folder in the project and creating a new sub-folder called components. And in that new /src/components folder, let’s add a new Svelte file called JobDisplay.svelte.

We can use this for the component’s markup and styles:

<script lang="ts">
  import type { JobItem } from "../models";
  export let job: JobItem;
</script>

<div class="job-item">
  <p>Job Title: <b>{job.job_title}</b></p>
  <p>Description: <b>{job.job_description}</b></p>
  <div class="job-details">
    <span>Company Name : <b>{job.company_name}</b></span>
    <span>Compensation per year: <b>{job.compensation_per_year}</b></span>
  </div>
</div>

<style>
  .job-item {
    border: 1px solid grey;
    padding: 2rem;
    width: 50%;
    margin: 1rem;
    border-radius: 10px;
  }

  .job-details {
    display: flex;
    justify-content: space-between;
  }
</style>

Let’s break that down so we know what’s happening:

  1. At the top, we import the TypeScript JobItem model.
  2. Then, we define a job prop with a type of JobItem. This prop is responsible for getting the data from its parent component so that we can pass that data to this component for rendering.
  3. Next, the HTML provides this component’s markup.
  4. Last is the CSS for some light styling. Again, I’m keeping this super simple with nothing but a little padding and minor details for structure and legibility. For example, justify-content: space-between adds a little visual separation between job items.

Fetching Job Data

Now that we have the JobDisplay component all done, we’re ready to pass it data to fill in all those fields to be displayed in each JobDisplay rendered on the main job board.

Since this is an SSR application, the data needs to be fetched on the server side. SvelteKit makes this easy with a separate load function that fetches data and can also act as a hook for other server-side actions when the page loads.

To fetch, let’s create yet another new TypeScript file, this time called +page.server.ts, in the project’s routes directory. Like +page.svelte, this file name has a special meaning: it makes the file run on the server when the route is loaded. Since we want this on the main job board page, we will create the file in the routes directory and include this code in it:

import jobs from '../data/job-listing.json'
import type { JobsList } from '../models';

const job_list: JobsList = jobs;

export const load = (() => {
  return {
    job_list
  };
})

Here’s what we’re doing with this code:

  1. We import data from the JSON file. This is for simplicity purposes. In the real app, you would likely fetch this data from a database by making an API call.
  2. Then, we import the TypeScript model we created for JobsList.
  3. Next, we create a new job_list variable and assign the imported data to it.
  4. Last, we define a load function that will return an object with the assigned data. SvelteKit will automatically call this function when the page is requested. This is where the SSR magic happens: we fetch the data on the server and build the HTML with the data we get back. (A typed variant of this function is sketched right after this list.)
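If you also want TypeScript to check the load function itself, SvelteKit generates a PageServerLoad type in ./$types that you can apply. Here’s a minimal sketch of the same file with that type added (it relies on the satisfies operator from TypeScript 4.9, which the template already ships with):

import jobs from '../data/job-listing.json'
import type { JobsList } from '../models';
import type { PageServerLoad } from './$types';

const job_list: JobsList = jobs;

export const load = (() => {
  return {
    job_list
  };
}) satisfies PageServerLoad;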

Accessing Data From The Job Board

SvelteKit makes accessing data relatively easy by passing it to the main job board page in a type-checked way. We can import a type called PageServerData in the +page.svelte file. This type is autogenerated and describes the data returned by the +page.server.ts file. This is awesome, as we don’t have to define types again when using the data we receive.

Let’s update the code in the +page.svelte file, like the following:

<script lang="ts">
  import JobDisplay from '../components/JobDisplay.svelte';
  import type { PageServerData } from './$types';

  export let data: PageServerData;
</script>

<div class="home-page">
  <h1>Job Listing Home page</h1>

  {#each data.job_list as job}
    <JobDisplay job={job}/>
  {/each}
</div>

<style>....</style>

This is so cool because:

  1. The #each syntax is a Svelte feature that repeats the JobDisplay component for every job in the fetched data.
  2. At the top, we import the JobDisplay component as well as the PageServerData type from ./$types, which is autogenerated by SvelteKit.

Deploying The App

We’re ready to compile and bundle this project in preparation for deployment! We get to use the same command in the Terminal as most other frameworks, so it should be pretty familiar:

npm run build

Note: You might get the following warning when running that command: “Could not detect a supported production environment.” We will fix that in just a moment, so stay with me.

From here, we can use the npm run preview command to check the latest built version of the app:

npm run preview

This is a nice way to gain confidence in the build locally before deploying it to a production environment.

The next step is to deploy the app to the server. I’m using Netlify, but that’s purely for example, so feel free to go with another option. SvelteKit offers adapters that will deploy the app to different server environments. You can get the whole list of adapters in the docs, of course.

The real reason I’m using Netlify is that deploying there is super convenient for this tutorial, thanks to the adapter-netlify plugin that can be installed with this command:

npm i -D @sveltejs/adapter-netlify

This does, indeed, introduce a new dependency in the package.json file. I mention that because you know how much I like to keep that list short.

After installation, we can update the svelte.config.js file to consume the adapter:

import adapter from '@sveltejs/adapter-netlify';
import { vitePreprocess } from '@sveltejs/kit/vite';

/** @type {import('@sveltejs/kit').Config} */
const config = {
    preprocess: vitePreprocess(),

    kit: {
        adapter: adapter({
            edge: false, 
            split: false
        })
    }
};

export default config;

Real quick, this is what’s happening:

  1. The adapter is imported from adapter-netlify.
  2. The new adapter is passed to the adapter property inside the kit.
  3. The edge boolean value can be used to configure the deployment to a Netlify edge function.
  4. The split boolean value is used to control whether we want to split each route into separate edge functions.

More Netlify-Specific Configurations

Everything from here on out is specific to Netlify, so I wanted to break it out into its own section to keep things clear.

We can add a new file called netlify.toml at the top level of the project folder and add the following code:

[build]
  command = "npm run build"
  publish = "build"

I bet you can guess what this is doing: it tells Netlify which command builds the app (npm run build) and which directory to publish (build). It also allows us to control deployment from a Netlify account, which might be a benefit to you. To do this, we have to:

  1. Create a new project in Netlify,
  2. Select the “Import an existing project” option, and
  3. Provide permission for Netlify to access the project repository. You get to choose where you want to store your repo, whether it’s GitHub or some other service.

Since we have set up the netlify.toml file, we can leave the default configuration and click the “Deploy” button directly from Netlify.

Once the deployment is completed, you can navigate to the site using the provided URL in Netlify. This should be the final result:

Here’s something fun. Open up DevTools when viewing the app in the browser and notice that the HTML contains the actual data we fetched from the JSON file. This way, we know for sure that the right data is rendered and that everything is working.

Note: The source code of the whole project is available on GitHub. All the steps we covered in this article are divided as separate commits in the main branch for your reference.

Conclusion

In this article, we have learned about the basics of server-side rendered apps and the steps to create and deploy a real-life app using SvelteKit as the framework. Feel free to share your comments and perspective on this topic, especially if you are considering picking SvelteKit for your next project.

Further Reading On SmashingMag

JSON to PDF Magic: Harnessing LaTeX and JSON for Effortless Customization and Dynamic PDF Generation

In this tutorial article, we show how to control the layout and content of the PDF document via the JSON body of the API call with DynamicDocs API. We present three different JSON examples and produce three different types of documents using the same end-point. In each case, making changes to the PDF is done by editing the JSON payload. To do this, we utilize the power of LaTeX to control the layout of the PDF document.

What is LaTeX?

LaTeX is a programming language specifically designed for creating PDF documents. Source files in LaTeX have a .tex extension and are usually compiled into PDF format. While it's highly popular in academia for crafting dissertations and scholarly articles, LaTeX is also excellent for producing visually appealing documents. Its strength lies in the separation of content and stylistic rules, allowing for precise control over the document's appearance. However, a significant drawback of LaTeX is the steep learning curve involved in mastering the language for document layout.

How To Use AI Tools To Skyrocket Your Programming Productivity

Programming is fun. At least, that’s the relationship I would love to have with programming. However, we all know that with the thrills and joys of programming, there comes a multitude of struggles, unforeseen problems, and long hours of coding. Not to mention — too much coffee.

If only there were a way to cut out all of the menial struggles programmers face daily and bring them straight to the things they should be spending their energy on thinking and doing, such as critical problem-solving, creating better designs, and testing their creations.

Well, in recent times, we’ve been introduced to exactly that.

The start of this year marked the dawn of a huge shift towards Artificial Intelligence (AI) as a means of completing tasks, saving time, and improving our systems. There is a whole new realm of use cases with the rise of AI and its potential to seriously impact our lives in a positive manner.

While many have concerns swirling about AI taking over jobs (and yes, programming jobs have been raised as a concern), I take an entirely different perspective. I believe that AI has the ability to skyrocket your productivity in programming like nothing before, and over the last couple of months, I have been able to reap the benefits of this growing wave.

Today, I want to share this knowledge with you and the ways that I have been using AI to supersize my programming output and productivity so that you’ll be able to do the same.

Case Study: ChatGPT

If you have missed much of the recent news, let me get you up to speed and share with you the main inspiration for this guide. In late November 2022, OpenAI announced its latest chatbot — ChatGPT, which took the world by storm with over a million sign-ups in its first week.

It was an extremely powerful tool that had never been seen before, blowing people away with its capabilities and responses. Want a 30-word summary of a 1000-word article? Throw it in, and in a few seconds, you’ve just saved yourself a long read. Need an email sales copy for a programming book that teaches you how to code in O(1) speed, written in the style of Kent Beck? Again, it will be back to you in a few seconds. The list of ChatGPT use cases goes on.

However, as a programmer, what really got me excited was ChatGPT’s ability to understand and write code. GPT-3, the model that ChatGPT runs on, has been trained on a wide range of text, including programming languages and code excerpts. As a result, it can generate code snippets and explanations within a matter of seconds.

While there are many AI tools other than ChatGPT that can help programmers boost their productivity, such as Youchat and Cogram, I will be looking at ChatGPT as the main tool for this guide, due to the fact that it is publicly available for free at OpenAI’s website and that it has a very gentle learning curve for a wide range of applications.

And again, before we continue, I would like to re-emphasize that

AI tools such as ChatGPT are meant to streamline your workflow, not take over and replace your thinking and problem-solving.

That being said, let’s see how I used ChatGPT to skyrocket my programming productivity.

Common Problems And How ChatGPT Can Help

To help shine a light on this topic, I have compiled five of the most common ways that I have used ChatGPT to overcome problems that any programmer would experience daily. Those five problems are the following:

  1. Programmer’s block,
  2. Long hours of debugging,
  3. Understanding long documentation,
  4. Developer testing,
  5. Code optimization.

It’s a lot to cover, but I’ll give you real-life examples so that you will be able to take the knowledge from this guide and use it to your own advantage. Let’s jump straight in.

Programmer’s Block

Programmer’s block is the programmer’s equivalent of writer’s block and is one of the most common problems that many programmers, myself included, face regularly. When tasked with a significant programming job, such as getting started with building a multiple-page website in HTML, CSS, and JavaScript, it’s easy to get caught in the weeds of not knowing where to start.

Or you could be knee-deep in a coding project and hit a roadblock in finding a solution for a problem. It’s often a frustrating scenario to be in.

ChatGPT has been an excellent solution for that. Let’s look at the example above in having programmer’s block before embarking on a large programming task. Suppose I’m looking to start a new project. After surfing for software company ideas, I decided to develop a sleek and modern online store in HTML, CSS, and JavaScript to sell my latest programming book.

While ChatGPT won’t be able to hit the mark in producing the entire project, a great way to use it is to generate a skeleton for you to begin with. Throw in a detailed prompt for the task, and this is what you get:

Prompt: Can you provide a basic structure for a sleek and modern single online store landing page, with red as the main color, for my new programming book in HTML and CSS?

index.html

<!DOCTYPE html>
<html>
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <meta http-equiv="X-UA-Compatible" content="ie=edge">
  <title>Your Book Title</title>
  <link rel="stylesheet" href="style.css">
</head>
<body>
  <header>
    <nav>
      <ul>
        <li><a href="#">Home</a></li>
        <li><a href="#">About</a></li>
        <li><a href="#">Contact</a></li>
      </ul>
    </nav>
  </header>
  <main>
    <section>
      <h1>Programming book title</h1>
      <p>A comprehensive guide to becoming a proficient programmer</p>
      <button>Learn More</button>
    </section>
    <section>
      <h2>Features of the Book:</h2>
      <ul>
        <li>Step-by-step tutorials for learning core programming concepts</li>
        <li>In-depth explanations of commonly used programming languages</li>
        <li>Hands-on exercises and real-world projects to reinforce learning</li>
        <li>Case studies to illustrate the application of programming in various industries</li>
        <li>Tips and tricks for efficient and effective programming practices</li>
      </ul>
    </section>
  </main>
  <footer>
    <p>Copyright 2023</p>
  </footer>
</body>
</html>

style.css

body {
  font-family: Arial, sans-serif;
  display: flex;
  flex-direction: column;
  align-items: center;
  margin: 0;
  padding: 0;
}

header, footer {
  background-color: red;
  color: white;
  display: flex;
  justify-content: center;
  align-items: center;
  width: 100%;
  height: 50px;
  box-shadow: 0px 4px 10px rgba(0, 0, 0, 0.25);
}

nav ul {
  display: flex;
  list-style: none;
  margin: 0;
  padding: 0;
}

nav a {
  color: white;
  display: block;
  padding: 10px 20px;
  text-decoration: none;
}

main {
  display: flex;
  flex-direction: column;
  align-items: center;
  width: 100%;
  height: calc(100vh - 100px);
}

section {
  max-width: 800px;
  padding: 40px;
  margin-top: 100px;
  text-align: center;
  background-color: white;
  box-shadow: 0px 4px 10px rgba(0, 0, 0, 0.25);
  border-radius: 10px;
  margin-bottom: 20px;
  margin-left: auto;
  margin-right: auto;
}

h2 {
  margin-top: 40px;
}

ul {
  list-style: none;
  margin-top: 20px;
  padding-left: 0;
}

li {
  margin-bottom: 10px;
}

That gives us the following template to start our work with:

Nothing impressive, but it does provide a great starting point as opposed to a blank HTML file.

Now, of course, I would highly recommend that the most you create with this is the barebones structure. We are using these tools to abstract away menial tasks like creating the HTML structure so that you can focus on more important things like implementing your own CSS styles and HTML features.

HTML and CSS templates are already widely used concepts. However, with AI, we can now create more personalized templates and basic code structures, getting us from staring at a blank HTML file to a workable skeleton in a matter of minutes.

Use it to create a starting platform for you to get over your programmer’s block, but for the fine details and exact features, your programming knowledge will still be irreplaceable.

Nevertheless, I have been using it to get numerous programming projects up and running. I had this sentence-length counter made from scratch easily within an hour by generating the base template and then adding what I wanted on top of it. I find that being able to jump-start that process makes my programming workflow much more streamlined and enjoyable.

Long Hours Of Debugging

Another common frustration every programmer knows is debugging. Debugging is an extremely time-intensive aspect of programming that can often be very draining and leave programmers at roadblocks, which is detrimental to a productive programming session.

Fortunately, AI can cut out a lot of the frustration of debugging without replacing the need for strong debugging fundamentals. At the current time, most AI tools are not able to spot every flaw in your code or suggest the correct changes to make; hence, it is still essential that you are capable of debugging code yourself.

However, AI is a great supplementary tool to your debugging skills in two main ways:

  1. Understanding runtime errors;
  2. Providing context-aware suggestions.

Understanding Runtime Errors

When faced with errors that you have never seen before in your code, a common reaction would be to hit Google and spend the next chunk of your time surfing through forums and guides to try and find a specific answer for something like the following:

Uncaught TypeError: Cannot read property 'value' of undefined.

Rather than spending your time frantically searching the web, a simple prompt can provide everything you would need for the most part.
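For context, this particular error just means the code is reading .value from something that is undefined. A minimal, hypothetical illustration of the cause and the fix:

// Hypothetical page with fewer than three <input> elements.
const fields = document.querySelectorAll("input");
const cvv = fields[2];   // undefined when the third input doesn't exist
console.log(cvv.value);  // TypeError: Cannot read property 'value' of undefined

// Fix: make sure the element actually exists before reading its properties.
if (cvv) {
  console.log(cvv.value);
}

Once the cause is clear, the fix is usually a guard or a corrected selector rather than anything exotic.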

Providing Context-Aware Suggestions

The other way in which I’ve been able to get help from ChatGPT for debugging is through its context-aware suggestions for errors. Traditionally, even though we may find the answers for what our program’s bugs are online, it is oftentimes difficult to put the errors and solutions into context.

Here is how ChatGPT handles both of these scenarios with a simple prompt.

Prompt: I found the error “Uncaught TypeError: Cannot read property value of undefined.” in my JavaScript code. How do I resolve it?

With this, I have been able to cut out a lot of time that I would have been spending surfing for answers and turn that time into producing error-free code. While you still have to have good knowledge in knowing how to implement these fixes, using AI as a supplementary tool in your debugging arsenal can provide a huge boost in your programming productivity.

Understanding Long Documentation

Another fantastic way to use AI tools to your advantage is by streamlining the long documentation that comes with APIs and libraries into digestible information. As a Natural Language Processing model, this is where ChatGPT excels.

Imagine you’re working on a new web development project, but want to use Flask for the first time. Traditionally, you might spend hours scrolling through pages of dense documentation from Flask, trying to find the precise information you need.

With a tool like ChatGPT, you’ll be able to streamline this problem and save an immense amount of time in the following ways:

  • Generate concise summaries
    You’ll be able to automatically summarize long code documentation, making it easier to quickly understand the key points without having to read through the entire document.
  • Answer specific questions
    ChatGPT can answer specific questions about the code documentation, allowing you to quickly find the information you need without having to search through the entire document.
  • Explain technical terms
    If you are having trouble understanding some terms in the documentation, rather than navigating back to extensive forum threads, ChatGPT can explain technical terms in simple language, making it easier for non-technical team members to understand the code documentation.
  • Provide examples
    Similarly to debugging, you can get relatable examples for each code concept in the documentation, making it easier for you to understand how the code works and how you can apply it to your own projects.
  • Generate code snippets
    ChatGPT can generate code snippets based on the code documentation, allowing you to experiment with use cases and tailor the examples to your specific needs.

It’s like having a search engine that can understand the context of your query and provide the most relevant information. You’ll no longer be bogged down by pages of text, and you can focus on writing and testing your code. Personally, I have been able to blast through numerous libraries, understanding and applying them to my own needs in a fraction of the time it would normally take.

Developer Testing

Developer testing is one of the cornerstone skills that a programmer or developer must have in order to create bulletproof programs and applications. However, even for experienced programmers, a common problem in developer testing is that you won’t know what you don’t know.

What that means is that in your created test cases, you might miss certain aspects of your program or application that could go unnoticed until it reaches a larger audience. Oftentimes, to avoid that scenario, we could spend hours on end trying to bulletproof our code to ensure that it covers all its bases.

However, this is a great way that I’ve been able to incorporate AI into my workflow as well.

Having AI suggest tests that cover all edge cases is a great way to provide an objective and well-rounded testing phase for your projects.

It also does so in a fraction of the time you would spend.

For example, say you are working on the same product landing page for your programming book from earlier. I’ve now built a proper product page that includes a payment form, with the following script to process it:

script.js

// Get references to the form elements.
const form = document.getElementById("payment-form");
const cardNumber = document.getElementById("card-number");
const expiryDate = document.getElementById("expiry-date");
const cvv = document.getElementById("cvv");
const submitButton = document.getElementById("submit-button");

// Handle form submission.
form.addEventListener("submit", (event) => {
  event.preventDefault();

  // Disable the submit button to prevent multiple submissions.
  submitButton.disabled = true;

  // Create an object to hold the form data.
  const formData = {
    cardNumber: cardNumber.value,
    expiryDate: expiryDate.value,
    cvv: cvv.value,
  };

  // Perform client-side validation on the form data
  if (!validateFormData(formData)) {
    // If the data is invalid, re-enable the submit button and return
    submitButton.disabled = false;
    return;
  }

  // Send the form data to the server.
  // The following is just an example and should not be used in a real-world scenario
  // as it lacks security measures and proper payment gateway integration.
  fetch("/charge", {
    method: "POST",
    body: JSON.stringify(formData),
    headers: {
      "Content-Type": "application/json",
    },
  })
    .then((response) => response.json())
    .then((data) => {
      // Handle the server response.
      if (data.success) {
        // Payment was successful.
        alert("Payment successful!");
      } else {
        // Payment failed.
        alert("Payment failed. Please try again.");
        submitButton.disabled = false;
      }
    })
    .catch((error) => {
      console.error(error);
      alert("An error occurred. Please try again.");
      submitButton.disabled = false;
    });
});

// Perform client-side validation on the form data.
function validateFormData(data) {
  // Example validation checks.
  if (!data.cardNumber || data.cardNumber.length !== 16) {
    alert("Please enter a valid card number.");
    return false;
  }
  if (!data.expiryDate || data.expiryDate.length !== 5) {
    alert("Please enter a valid expiry date in the format MM/YY.");
    return false;
  }
  if (!data.cvv || data.cvv.length !== 3) {
    alert("Please enter a valid CVV.");
    return false;
  }
  return true;
}

Knowing that there are many invalid inputs that customers might use, you want to create some test cases to validate your form’s functionalities. Rather than spending copious amounts of time coming up with numerous test cases, let’s let AI identify any edge cases we missed to bulletproof our tests.

Prompt: Can you help to come up with some edge cases to ensure that the form does not accept any invalid inputs in collecting payment details?

test.js

// Test Case 1: Check if form doesn't submit when fields are empty.
document.querySelector("form").addEventListener("submit", function(e) {
  e.preventDefault();
  var name = document.querySelector("input[name='name']").value;
  var address = document.querySelector("input[name='address']").value;
  var cardNumber = document.querySelector("input[name='card-number']").value;
  var expiryDate = document.querySelector("input[name='expiry-date']").value;
  var cvv = document.querySelector("input[name='cvv']").value;

  if (!name || !address || !cardNumber || !expiryDate || !cvv) {
    alert("All fields are required");
    return;
  }

  // Check if all the fields contain valid inputs.
  if (!validateName(name) || !validateAddress(address) || !validateCardNumber(cardNumber) || !validateExpiryDate(expiryDate) || !validateCVV(cvv)) {
    return;
  }

  // Continue with payment processing.
});

// Test Case 2: Check if name input is valid.
function validateName(name) {
  var nameRegEx = /^[a-zA-Z ]+$/;
  if (!nameRegEx.test(name)) {
    alert("Invalid name. Please enter a valid name");
    return false;
  }
  return true;
}

// Test Case 3: Check if address input is valid.
function validateAddress(address) {
  var addressRegEx = /^[a-zA-Z0-9\s,'-]*$/;
  if (!addressRegEx.test(address)) {
    alert("Invalid address. Please enter a valid address");
    return false;
  }
  return true;
}

// Test Case 4: Check if card number input is valid.
function validateCardNumber(cardNumber) {
  var cardNumberRegEx = /^\d{16}$/;
  if (!cardNumberRegEx.test(cardNumber)) {
    alert("Invalid card number. Please enter a valid card number");
    return false;
  }
  return true;
}

After a while of working on the same project, the need to create all sorts of test cases, usually a daunting and repetitive task, can make programming feel very menial. However, with the assistance of AI, this process can be streamlined and, at the same time, improved to cover more bases in your developer testing.

Code Optimization

As it stands, AI still has a long way to go in helping to optimize code and can’t actually do the optimization itself. However, it can still help to provide some useful insights and give some pointers to improving your programming. Here are the most common ways that I have used ChatGPT in optimizing my code for performance:

  • Code Suggestions
    Most simply, it can suggest code snippets or alternative solutions to improve the performance of your existing code.
  • Best Practices
    Having been trained on a wide range of code patterns, ChatGPT can help you follow best practices for coding and software design, leading to more efficient and optimized code.
  • Refactoring
    It helps to reorganize existing code to improve its efficiency and maintainability without affecting its functionality.
  • Knowledge Sharing
    There are many scenarios where your code can be implemented simply through a single import or with other programming languages, libraries, and frameworks. ChatGPT’s suggestions help ensure you are making informed decisions on the best implementations for your needs.

Of course, the bulk of these still requires you to optimize your code manually. However, using AI to gain insights and suggestions for this can be a great way to improve your productivity and produce higher-quality code.

AI Is Amazing, But It Does Have Its Limitations

Now that we have seen what AI can do for you and your programming productivity, I would imagine you are bubbling with ideas on how you are going to start implementing these in your programming workflows.

However, it is essential to keep in mind that these models are fairly new and still have a long way to go regarding reliability and accuracy. These are just some of the limitations that AI, specifically, ChatGPT, has:

  • Limited Understanding
    AI algorithms like ChatGPT have a limited understanding of code and may not fully understand the implications and trade-offs of certain programming decisions.
  • Training Data Limitations
    The quality and relevance of AI algorithms’ output depend on the quality and scope of the training data. For example, ChatGPT was only trained on data up to 2021. Any updates in programming languages since then may not be reflected.
  • Bias
    AI algorithms can be biased towards specific patterns or solutions based on the data they were trained on, leading to suboptimal or incorrect code suggestions.
  • Lack of Context
    AI algorithms may struggle to understand the context and the desired outcome of a specific coding task, leading to generic or irrelevant advice. While this can be minimized with specific prompts, it is still difficult to generate solutions to more complicated problems.

Nevertheless, these limitations are a small price to pay for the multitude of benefits that AI tools provide. While I am an advocate of using AI to boost your programming productivity, keeping these limitations in mind is crucial when using AI in your workflows: you need to ensure that the information or code you produce is reliable, especially if you are using it in a professional setting.

With the current limitations, AI should only be used as a means to assist your current skills, not to replace them. Hence, with that in mind, use it tactfully and sparingly to achieve a good balance in boosting your productivity but not detracting from your skills as a programmer.

How Else Will AI Improve Programmers’ Lives?

While I have mainly talked about the technical aspects of programming that AI can help in, there are many other areas where AI can help to make your life as a programmer much easier.

We are just at the tip of the iceberg in this incoming wave of AI. Many new use cases for AI appear every day with the potential to improve programmers’ lives even further. In the future, we are likely to see many new integrations of AI in many of our daily software uses as programmers.

General AI writing software already exists and can be useful for programmers when creating code and API documentation. These tools have been around for a while and have become widely accepted as tools that help, not replace.

General productivity and notetaking tools that use AI have also been a big hit, especially for programming students who have to plow through large amounts of information every day. All in all, wherever there is a labor-intensive task that can be streamlined, AI will likely be making headway.

Wrapping Up

To wrap things up, I will end with a reminder from the opening of this guide. I believe that there is massive potential in becoming well-versed with AI, not as a means to replace our work, but as a means to improve it.

With the right knowledge of what you can and, more importantly, cannot do, AI can be an extremely valuable skill to have in your programming arsenal and will undoubtedly save you copious amounts of time if used correctly.

Hence, rather than fearing the incoming wave of new AI technology, I encourage you to embrace it. Take the knowledge you have learned from the guide and tailor it to your own needs and uses. Every programmer’s workflows are different, but with the right principles and a good knowledge of the limitations of AI, the benefits are equally available to everyone.

So all that’s left for you to do is to reap the supersized benefits that come with integrating AI into your current workflows and see your programming productivity skyrocket as it has for me. And if there’s one thing to remember, it’s to use AI as your assistant, not your replacement.

How To Protect Your App With A Threat Model Based On JSONDiff

Security changes constantly. There’s a never-ending barrage of new threats and things to worry about, and you can’t keep up with it all. It feels like every new feature creates expanding opportunities for hackers and bad guys.

Threat model documents give you a framework to think about the security of your application and make threats manageable. Building a threat model shows you where to look for threats, what to do about them, and how to prevent them in the future. It provides a tool to stay safe so you can focus on delivering a killer application, knowing that your security is taken care of.

This article will show you how to create a threat model document. We’ll review JSONDiff.com and build a threat model for it, and we’ll show how small architectural changes can have a gigantic impact on the security of your application.

Who Do You Trust?

Every time you use a computer, you trust many people. When you make an application, you’re asking other people to trust you, but you’re also asking them to trust everything you depend on.

Your threat model makes it clear who you’re trusting, what you’re trusting them with, and why you should trust them.

What Is A Threat Model?

A threat model is a document where you write down three things:

  1. The architecture of your application,
  2. The potential threats to your application,
  3. The steps you’re taking to mitigate those threats.

It’s really that simple. You don’t need complex tools or a degree in security engineering. All you need is an understanding of your application and a framework for where to look for threats.

This article will show how to build your own threat model using JSONDiff as a sample. You can also take a look at the complete threat model for JSONDiff to see the finished document.

Threat Models Start With Architecture

All threat models start with a deep understanding of your architecture. You need to understand the full stack of your application and everything it depends on. Documenting your architecture is always a good idea; you can start anytime. You’re architecting from the moment you start picking the tools you’ll use.

Here are some basic questions to answer for your architecture:

  • Where does my application run?
  • Where is my application hosted?
  • What dependencies does my application have?
  • Who has access to my application?
  • Where does my application store data?
  • Where does my application send data?
  • How does my application manage users and credentials?

Give a brief overview of your application and then document how it works. Start by drawing a picture of your application. Keep it simple. Show the major pieces that make your application run and how they interact.

Let’s start by looking at the overall architecture of JSONDiff.

JSONDiff is a simple application that runs in a browser. The source code is stored on GitHub.com, and it’s open source. It can run in two modes:

  1. The public version is at JSONDiff.com.
  2. A private version users can run in a Docker container.

We’ll draw the architecture in relation to what runs in the client and what runs on the server. For this drawing, we won’t worry about where the server is running and will just focus on the public version.

Drawing your architecture can be one of the trickiest steps because you’re starting with a blank page and have to choose a representation that makes sense for your application. Sometimes you’ll want to talk about larger pieces; other times, you’ll want to focus on smaller chunks and user actions. Ask yourself what someone would need to know to understand your security, and write that.

JSONDiff is a single-page web application using jQuery. In this case, it makes sense to focus on the pieces that run on the server, the pieces that run in the browser, and how they work.

The first step to any architecture is a brief description of what the application is. You need to set the stage and let anyone reading the architecture know some basic information:

  • What does the application do?
  • Who’s using it?
  • Why are they using it?

JSONDiff is a browser-based application that compares JSON data. It takes two JSON documents, compares them semantically, and shows the differences. JSONDiff is free for anyone and anywhere. It’s used by developers to find differences in their JSON documents that are difficult to find with a standard text-editor diff tool or in GitHub.

The architecture diagram looks like this:

The architecture is simple: Nginx hosts the site, and most of the code is in the jdd.js file. But it brings up many good questions:

  • How does JSONDiff load JSON data?
  • Does it ever send the data it loads anywhere?
  • Does it store the data?
  • Where do the ads come from?

Write down all of the questions your architecture diagram brings up, and answer them in your threat model. Having those questions written down gives you a place to start understanding the threats.

Let’s focus on the first question and show how to dig into it with a security mindset.

There are two ways to load the JSON data you want to compare. You can load it in the browser by copying and pasting it or by choosing a file. That interaction is very well understood, and there isn’t much of a threat there. You can also load the JSON data by specifying a URL, which opens a big can of worms.

Specifying a URL to load the data is a very useful feature. It makes comparing large documents easier, and you can send someone else a URL with the JSON documents already loaded. It also brings up a lot of issues.

The same-origin policy prevents JavaScript applications running in browsers from loading random URLs. There are very good reasons that this policy exists. JSONDiff is subverting the policy, and that should make your security spidey-sense tingle.

JSONDiff uses a proxy to enable this feature. The proxy.php file is a simple proxy that will load JSON data from anywhere.
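For context, here is roughly what would happen without the proxy: a direct cross-origin fetch from the browser is blocked unless the remote server explicitly opts in with CORS headers (the URL below is purely illustrative):

// Running this in the browser from jsondiff.com would typically fail:
// the response is blocked unless the remote server sends permissive CORS headers.
fetch("https://some-other-site.example/data.json")
  .then((response) => response.json())
  .then((json) => console.log(json))
  .catch((error) => console.error("Blocked by the same-origin policy / CORS:", error));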

Loading random data sounds like a recipe for a cross-site request forgery (CSRF) attack. That’s a risk.

All applications have risks; we manage those risks with mitigations. In this case, the proxy risk has two mitigations:

  1. The proxy can only load data that are already publicly available on the Internet.
  2. The file that’s loaded by the proxy is never executed.

Our threat model will include this risk and show how we mitigated it. In fact, each threat needs to show how much risk there is and what we did to mitigate each risk.

Let’s take a look at where threats appear in your application.

Threats

There are many categories of threats throughout the development and deployment lifecycle. It’s helpful to split threats into different categories and document the potential threats to our application as we plan, design, implement, deploy, and test the software or service.

For every threat we identify, we need to describe two pieces:

  • The threat
    What is the specific threat we’re worried about here? How could it be exploited in our application? How serious could that exploit be?
  • Mitigation
    How are we going to mitigate that threat?

Code Threats

Many threats start with the code you write. Here are a few categories of coding issues to think about:

Weak Cryptography

“Does your application use SSL or TLS for secure network connections?”

If so, make sure that you’re using the latest recommended versions (TLS 1.2 or 1.3; plain SSL is long deprecated).

“Does your application encrypt data or passwords?”

Make sure you’re using the latest hashing algorithms and not the older ones like MD5 or SHA-1.

“Did you implement your own encryption algorithm?”

Don’t. Just don’t. There’s almost never a good reason to implement your own encryption.
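To make the hashing advice above concrete, here is a minimal sketch of modern password hashing using Node’s built-in scrypt (bcrypt or Argon2 libraries are equally valid choices; the parameters here are illustrative, not a production recommendation):

import { randomBytes, scryptSync, timingSafeEqual } from "node:crypto";

// Hash a password with a random per-user salt.
function hashPassword(password: string): string {
  const salt = randomBytes(16).toString("hex");
  const hash = scryptSync(password, salt, 64).toString("hex");
  return `${salt}:${hash}`;
}

// Verify a password in constant time against the stored salt:hash pair.
function verifyPassword(password: string, stored: string): boolean {
  const [salt, hash] = stored.split(":");
  const candidate = scryptSync(password, salt, 64);
  return timingSafeEqual(candidate, Buffer.from(hash, "hex"));
}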

SQL Injection

SQL injection attacks happen when a user enters values in an application that are sent directly to a database without being sanitized (like Bobby Tables). This can inject malicious code that alters the original SQL query to retrieve, change, or delete data inside the SQL database.

Avoid injection attacks by not trusting any inputs coming from users. Your threat model should address any place you’re taking user input and saving it anywhere.
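As a general illustration (not something JSONDiff needs, since it never stores data), parameterized queries are the standard mitigation. Here’s a minimal sketch using the node-postgres (pg) client with a hypothetical jobs table:

import { Pool } from "pg";

const pool = new Pool(); // connection settings come from environment variables

// Unsafe: string concatenation lets input like "'; DROP TABLE jobs; --" rewrite the query.
// const rows = await pool.query(`SELECT * FROM jobs WHERE title = '${userInput}'`);

// Safe: the value is sent separately from the SQL text, so it is never parsed as SQL.
async function findJobsByTitle(userInput: string) {
  return pool.query("SELECT * FROM jobs WHERE title = $1", [userInput]);
}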

JSONDiff never saves any of the JSON data it compares. If we added that feature, we’d be open to many types of injection attacks. It doesn’t matter if we saved the JSON to a SQL database like PostgreSQL, a NoSQL database like MongoDB, or a file system. We’d mitigate that threat by making sure to sanitize our inputs and never trusting data from users.

Cross-Site Scripting (XSS)

Malicious scripts can be injected into web applications, making browsers run those scripts in a trusted context; that allows them to steal user tokens, passwords, cookies, and session data. This injection attack happens when a user saves or references code from somewhere else and gets that code to run in the application security context.

JSONDiff doesn’t let users save anything, but you can build URLs to preload the documents to compare like this:

This is a clear threat to address in the threat model. If someone referenced malicious code in a URL like this and sent it to someone, they could run it without realizing the risk. JSONDiff mitigates this threat by using a custom parser for the inputs and making sure that none of them get executed. We can test that with ‘evil’ JSON and JavaScript files like this:

Consider all of the inputs to your application and how you’re making sure they can’t cause problems.
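As a generic illustration of that idea (hypothetical element ID, and not JSONDiff’s actual parser), here is the difference between treating untrusted input as HTML and treating it as text:

const untrusted = '<img src=x onerror="alert(1)">';
const output = document.getElementById("output");

if (output) {
  // Dangerous: innerHTML parses the payload as HTML, so the onerror handler would run.
  // output.innerHTML = untrusted;

  // Safer: textContent renders the payload as inert text.
  output.textContent = untrusted;
}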

Cross-Site Request Forgery (CSRF)

CSRF attacks wait for you to log in and then use your credentials to steal data and make changes. Session-based unique CSRF tokens can be used to prevent such an attack. Examine everywhere your application uses sessions. What are you doing to make sure sessions can’t be shared or stolen?

JSONDiff doesn’t have sessions, so there’s nothing to steal. Adding the ability to manage sessions and login would create a large set of new threats. The threat model would need to address protecting the session token, making sure that sessions can’t be reused, and ensuring that sessions can’t be stolen, among other things.
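For context, here is a minimal sketch of what a session-bound CSRF token can look like (illustrative names and in-memory storage; a real app would lean on its framework’s CSRF middleware):

import { randomBytes, timingSafeEqual } from "node:crypto";

const csrfTokens = new Map<string, string>(); // sessionId -> token (illustrative storage)

// Issue a fresh token for a session and embed it in forms or a custom header.
function issueCsrfToken(sessionId: string): string {
  const token = randomBytes(32).toString("hex");
  csrfTokens.set(sessionId, token);
  return token;
}

// Reject any state-changing request whose submitted token doesn't match the session's token.
function verifyCsrfToken(sessionId: string, submitted: string): boolean {
  const expected = csrfTokens.get(sessionId);
  if (!expected || submitted.length !== expected.length) return false;
  return timingSafeEqual(Buffer.from(submitted), Buffer.from(expected));
}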

Logging Sensitive Information

Your logs aren’t secure, so don’t put any sensitive information there. Logging passwords or other sensitive customer information is one of the most common security issues in building an application: developers log some activity or error, and the log message ends up containing a token, a password, or personal information about the user.

What are you doing to make sure that developers don’t log sensitive information? Make sure your code review includes looking at logging output. There are also password scanners you can run over your log files to find likely passwords.

Code Review And Separation Of Duties

Trust, but verify: some people on your team could be malicious, and everyone on your team makes mistakes. Trust your team, but verify.

The best way to verify this is to separate the roles within your team. Allowing one person to change code, test it, and push it to production without any oversight presents a risk. Separation of duties splits the stages of your pipeline to production into multiple stages. There are four clear stages in every application that you should separate as much as possible:

  1. Writing the code,
  2. Reviewing the changes,
  3. Testing the functionality,
  4. Deploying the application.

For small projects, these roles may overlap or be part of an automated process. Even when the pipeline is fully automatic, you can still separate the functions. For example, making sure that the owner of a given area didn’t write all the tests for that area ensures that someone else is verifying the functionality. In well-run projects, these roles can switch so everyone gets a turn to write code as well as review it or write tests as well as do deployments.

JSONDiff is an open-source application that makes review much easier. For closed-source applications, you can use the Pull Request mechanism in Git to ensure all code is reviewed for the issues mentioned above. Spend time with your team and teach them what they should look for during code review.

Static code analysis tools also help detect security threats and other issues. These tools include linters and code checkers like JSHint, along with more comprehensive security scanners. These tools look at your source code and find problems based on the specific programming language you’re using. OWASP maintains a list of static analysis tools.

Many security scanners use common vulnerabilities and exposures (CVE) databases to know what issues to look for. Integrating these tools into your build process ensures that all your changes will be scanned.

The code for JSONDiff was scanned by JSHint, and all issues were fixed, or so I thought. It turned out that I scanned the JavaScript, but I missed the server side. My co-author Terry ran the SonarQube lint scanner and found an error in the PHP proxy:

This small fix is a great example of how a second pair of eyes can help you find problems.

Third-Party Threats

Your application has dependencies and probably a lot of them. They may be from other groups or open-source projects. The list of all these dependencies and the versions they use makes up a Software Bill of Materials (SBOM).

When the teams who maintain the projects you depend on find security issues, they report them in a CVE database. Other security professionals report CVEs as well. Third-party scanners look at those databases and make sure you aren’t using dependencies with known security issues.

Static application security testing (SAST) tools like Snyk can also scan third-party threats and report vulnerabilities in the libraries you’re using. Those vulnerabilities are then scored by severity, so you know how seriously to take each threat.

Tools like NPM have built-in vulnerability checking for dependencies. Integrating vulnerability checks in your build process mitigates that threat.
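For example, npm ships with an audit command that checks your dependency tree against known-vulnerability databases and can apply compatible fixes:

npm audit
npm audit fix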

Data Security Threats

Protecting your application means protecting the application data. Always make sure your data is transmitted and stored with confidentiality, integrity, and availability.

Here are some of the risks to data security:

  • Accidental data loss or destruction,
  • Malicious access to confidential data like financial data,
  • Unauthorized access from various partners or employees,
  • Natural disasters or uncontrollable hazards like earthquakes, floods, fire, or war.

To mitigate those risks, we can implement these actions:

  • Protect the data with strong passwords, and define the policy for password expiration.
  • Categorize the data with different classes and usage, and define the different roles that can access different levels of data.
  • Always do an authorization check to make sure only a permitted user with the corresponding role can access that level of data.
  • Deploy various security tools like firewalls and antivirus software.
  • Encrypt your data at rest (when it’s stored somewhere).
  • Encrypt your data in transit (when it’s moving between two points).

JSONDiff doesn’t store any data. Let’s think about the in-transit threat:

  • The threat
    JSONDiff loads data from any URL to compare. How are we protecting that data?
  • Mitigation
    JSON uses SSL encryption when loading data if it’s available and always uses SSL to encrypt data sent to the browser.

Runtime Threats

After the application is deployed and running, we need to consider the runtime threats.

The best way to find runtime threats is a penetration test from a team of experts. Pen-test teams pretend they’re hackers and attack your application. They attack your external interfaces and look for SQL injection, cross-site scripting, denial-of-service (DoS) attacks, privilege escalation attacks, and many more problems.

If you can afford an external pen-test team, then use one, but you can also do this yourself. Pen-test tools like the OWASP ZAP proxy perform dynamic scanning on your application endpoints and report common threats.

Threats To Stability

Availability attacks try to disrupt your application instead of hacking it. High availability and redundant designs mitigate the threat of these attacks.

There are several things we can consider to build up plans for those threats:

  • High-availability infrastructure, including the network and server. If we deploy the application via the cloud, we can consider using multiple regions or zones and set up a load balancer.
  • Redundancy for the system and data. This will improve stability and availability, but the cost will be high. You can balance stability and cost: Only make your most critical components redundant.
  • Monitoring of the system, with alerts set up for when any component might be running at capacity. Malicious activity could take down your infrastructure, so monitoring the health and availability of your system is critical.
  • Backup and restore plans. If security threats take the system down, how can we quickly bring it back up? We need to build a plan for backing up and restoring.
  • Handling outages of dependent services. We need to build up fallback plans, design and implement circuit breakers (a minimal sketch follows this list), and keep dependent services from breaking the entire application.
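Here is a minimal circuit-breaker sketch to illustrate that last point (illustrative thresholds; real implementations add half-open probing, metrics, and per-dependency configuration):

// After too many consecutive failures, calls fail fast until a cooldown passes.
class CircuitBreaker {
  private failures = 0;
  private openedAt = 0;

  constructor(private maxFailures = 3, private cooldownMs = 10_000) {}

  async call<T>(fn: () => Promise<T>): Promise<T> {
    const open = this.failures >= this.maxFailures;
    if (open && Date.now() - this.openedAt < this.cooldownMs) {
      throw new Error("Circuit open: dependent service is failing, try again later");
    }
    try {
      const result = await fn();
      this.failures = 0; // a success closes the circuit again
      return result;
    } catch (err) {
      this.failures += 1;
      this.openedAt = Date.now();
      throw err;
    }
  }
}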

Building A Data Recovery Plan

What can disrupt your application or system? Think about human error, hardware failure, data center power outages, natural disasters, and cybersecurity attacks.

Business continuity and disaster recovery (BCDR) design will be critical to ensure that your organization, users or customers, and employees can do business with minimal disruption.

For an organization like a company, you’ll need to create a business continuity plan. That means first assessing your people, IT infrastructure, and application. Identify people’s roles and responsibilities for your business continuity plan and recovery solutions.

If you’re deploying your application in a cloud-based environment, you need to deploy it across multiple regions or multiple cloud providers. The critical part is the data storage for the system and application: all data should have point-in-time replication, allowing your application or service to be restored quickly from a secondary data center, even one in a different country or on a different continent.

Your BCDR solution should be regularly tested every year (or even more often), and your plan should be frequently reviewed and improved by the people in your organization.

The Worst-Case Scenario

Threat models provide a framework to imagine the worst-case scenario, which helps you think outside the box and come up with novel threats.

So what’s the worst-case scenario for JSONDiff? It probably involves the proxy.php script. We already know to focus on the proxy, and there have been some severe PHP exploits in the past. The proxy.php file is also the only part that runs on the server side. That makes it the weakest link.

If I were able to hack the proxy, I could change the way it works. Maybe I could fool it into returning different content. I can’t run malicious code with that content, but I could trick someone into thinking two JSON documents were the same when they weren’t; I might be able to do something malicious with that.

I could go even further and think about what would happen if someone hacked into the server and changed the contents of the code, but now I’m just back to credential management, which is already covered in the threat model.

This reminds us to keep up to date with PHP versions, so we get the latest security fixes.

Thinking of the worst-case scenario sends you in different directions and improves your threat model.

We’re Just Scratching The Surface

We’re just scratching the surface of all the threats to think about when building a threat model. Mitre has an excellent matrix of threats to think about when building your own threat model. OWASP also maintains a Top 10 list of security risks and a Threat Modeling Cheat Sheet that everyone should be familiar with.

The most important takeaway is that you should think about all the ways people interact with your application and all the ways your application interacts with other systems. Any time you can simplify those interactions, you’re reducing your vulnerability to threats.

For more complex threat models, making a threat diagram is also useful. Tools like draw.io have specific shapes for threat modeling diagrams:

What If I Can’t Mitigate A Threat?

You can’t mitigate every threat. For JSONDiff, a threat I have no control over is Google AdSense, which adds dynamic content to JSONDiff.com. I don’t get to check that content first. I can’t verify every ad that Google might show. I also can’t force Google to go through a security review process for my site. In the end, I just have to trust Google.

In the rare cases when you have a threat you can’t mitigate or minimize, the best you can do is settle for transparency. Be as open and honest about that threat as possible. Document it. Let your users or customers know, so they can make their own choices about whether the risk is worth it.

Build Your Threat Model Early

Threat models help the most when begun early in the process. Start putting your threat model together as soon as you pick technologies. Decisions about how you’ll manage users, where you’ll store data, and where your application runs all have a major impact on the threat model of your application.

Working on the threat model early, when it’s easier to make architectural changes, makes it easier to fend off threats.

Communicating Your Threat Model

The previous section showed you how to start creating your threat model. What should you do with it once you’re done?

There are a few potential audiences for your threat model:

  • Security reviewers
    If you create an application for any security-conscious company, it will want to do a security review. Your threat model will be a requirement for that process. Having a threat model ahead of time will give you a giant head start.
  • Auditors
    Security auditors will always look for a threat model. They want to make sure you’ve thought through the threats to your application.
  • Yourself
    Use your own threat model to manage your threats. Have the team keep it up to date while you’re adding new features. Making sure that team members update the threat model will force them to think of any potential threats they’re adding when they make changes.
  • Everyone
    If your project allows it, then share your threat model with everyone. Show the people who trust your application the potential threats and how you’re handling them. Openness reassures them and helps them appreciate all the work you’ve done to make your application secure.
Keep Improving Your Threat Model

We talked about the most important steps in constructing a threat model, but threats are a constantly moving target. We need to build up a management plan for security incidents, defining how to respond to any threats we learn about from internal or external sources.

Every incident you find should end up in your threat model. Document how you found it, how you fixed it, and what you did to make sure it never happens again. Every application has security issues; what matters is how well you handle them. This is a continuous process of improvement:

  1. Build the architecture to understand what the application is for.
  2. Identify the application threats.
  3. Think about how to mitigate the identified vulnerabilities.
  4. Validate the threat model with other experts in your area.
  5. Review the threat model, and make updates every time you find a new threat.
Threat Models Let Me Sleep At Night

I make threat models for myself. I want to sleep at night instead of staring at the ceiling and wondering what security holes I’ve missed. I want to focus on delighting my users without constantly worrying about security. Threat models give me a framework to do that.

I make threat models for my customers. I want them to know that I take their security seriously, and I’m thinking about keeping them secure. I want to show them what I’m doing and help them understand so they can judge for themselves.

Threat models are flexible and grow or shrink as much as you need. They provide a tool for you to reassure your users about security and allow you to sleep at night. Now you know why you need one for your application, too.

JSON in Kotlin

In any web service that receives and transmits data to and from a server, the first and last events will usually be transforming the data from the format used by the web request into the format that the web server will handle, and vice versa; these operations are called deserialization and serialization, respectively. For some web services, the thought put towards this part of the flow of data is focused solely on how to configure the serialization mechanism so it works properly. However, there are some scenarios for which every CPU cycle counts, and the faster the serialization mechanism can work, the better. This article will explore the development and performance characteristics of four different options for working with the serialization of JSON messages—GSON, Jackson, JSON-B, and Kotlinx Serialization, using both the Kotlin programming language and some of the unique features that Kotlin offers compared to its counterpart language, Java.

Setup

Since its first release in 2017, Kotlin has grown by leaps and bounds within the JVM community, becoming the go-to programming language for Android development as well as a first-class citizen in major JVM tools like Spring, JUnit, Gradle, and more. Among the innovations it brought to the JVM community compared to Java was the data class, a special type of class that is meant primarily as a holder of data (in other words, a Data Transfer Object, or DTO) and automatically generates base utility functions for the class like equals(), hashCode(), copy(), and more. This will form the base of the classes used for the performance tests, the first of which is PojoFoo. “Pojo” stands for “Plain Old Java Object,” signifying that it uses only basic class types of the Java programming language:

Getting Internationalization (i18n) Right With Remix And Headless CMS

This article is a sponsored by Storyblok

How much of a language barrier is there still in the 21st century? You, as the reader, are probably very familiar with English, but what about others?

Nowadays, most of us have often heard the importance of accessibility, better performance, and better UX or DX. You might not hear or often see about i18n compared with these topics. But if you see facts and numbers from the statistics, you might find some surprising results about i18n and its impact. Let’s find out about that together.

i18n And l10n

Before we go through the impact of i18n, let’s learn the difference between the two terminologies.

  • i18n
    i18n stands for internationalization. Between the first character “i” and the last character “n” from this word, there are 18 characters. i18n describes implementing the structures and features for your applications to be ready to localize content.
  • l10n
    l10n stands for localization. Between the first character “l” and the last character “n” from this word, there are ten characters. l10n means translating content into the languages for users who are accessing from specific regions.

As a follow-up, i18n contains a programmatic process to implement features for content editors and translators to be able to start the l10n process from the UI.

Why Does i18n Matter That Much?

To see the importance of i18n, let’s look at some numbers and statistics for objective information. Before reading further, try to guess what the numbers below stand for.

  • 5.07 billion
  • 25.9%
  • 74.1%
  • China
  • Asia

The first number is tremendous: 5.07 billion is the number of internet users in the world in 2020. The world population in 2021 was 7.837 billion, nearly 8 billion, which means more than half of the world’s population has access to content on the internet and in apps.

Based on the number of internet users, there is also research about the most common languages used on the internet. Looking at the chart, most of us pay attention to the highest number: 25.9% for English.

The second-highest share, 23.1%, belongs to the long tail of other languages. And if you add up every share except English, you realize that 74.1% of those more than 5 billion users access content in a language other than English.

After going through these facts, we can now talk about why internationalizing and localizing your content for Asia, and China in particular, is crucial. China has more internet users than any other country, and more than half of all internet users globally are from Asia.

Based on what we saw, we probably cannot ignore localizing content: serving localized content to this huge worldwide audience improves UX. Now that we know the potential impact of proper i18n, let’s look at the fundamental logic.

How i18n Works At A Basic Level

Regardless of the technology to implement i18n, there are three ways to determine languages and regions.

  1. The location of the IP address
  2. Accept-Language header/Navigator.languages
  3. Identifiers in URLs

Using the IP address to detect the users’ region allows them to access content in their regional languages. However, a user’s IP address does not necessarily match their language preference. Moreover, IP-based location detection can get in the way of search engine crawlers, which typically crawl from a single region and may never discover the other localized versions.

Using the Accept-Language header or Navigator.languages is another possible approach to implementing i18n. However, this approach provides language information but not regional information.
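
To make this approach concrete, here is a minimal sketch (not from the article) of how a Remix loader could read the Accept-Language header and pick a locale; the list of supported locales is an assumption.

import { json } from "@remix-run/node";

export async function loader({ request }) {
  const header = request.headers.get("Accept-Language") ?? "";
  // "ja,en-US;q=0.9,en;q=0.8" -> ["ja", "en-US", "en"]
  const preferred = header.split(",").map((part) => part.split(";")[0].trim());
  const supported = ["en", "es", "ja"]; // assumed list of supported languages
  const locale =
    preferred.map((lang) => lang.split("-")[0]).find((lang) => supported.includes(lang)) ?? "en";
  return json({ locale });
}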

i18n is not just about localizing content; it’s also about improving UX. For example, creating identifiers in URLs enhances UX and helps divide localized content into a dedicated structure. We will cover how to implement such a system in the “A Combination Of Remix And CMS” section.

Typically, identifiers in URLs exist in three different patterns:

  1. Differentiate by domains (e.g., hello.es, hello.jp)
  2. URL parameters (e.g., hello.com?loc=de)
  3. Localized sub-directories (e.g., hello.com/es, hello.com/ja)

To keep all localized content under the same origin, which is better for SEO, localized sub-directories can be used.

With these facts and the fundamental logic of i18n in mind, let’s talk about the libraries and frameworks that can help us implement it.

Libraries

In order to not reinvent the wheel whenever we want to implement i18n into our projects, developers have different libraries, tools, and services that can be used to facilitate the work. If we are working with React or React-based frameworks, we have different options available. Let’s talk about some of them.

Format.js

Format.js is a modular collection of JavaScript libraries that we can use to implement i18n logic in both the client and the server. This group of libraries is focused on formatting numbers, dates, and strings. It offers different functionalities and tooling and runs in the browser as well as in the Node.js runtime. It integrates with various frameworks, like Vue and React, so we can use its functionality in our Remix projects. You can read more about it in the official React Intl docs.
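
As a rough illustration (not from the article), a hypothetical React component using react-intl could look like this; the locale and the messages object are assumptions.

import { IntlProvider, FormattedMessage, FormattedNumber } from "react-intl";

const messages = { intro: "Hola a todos!" };

export default function App() {
  return (
    <IntlProvider locale="es" defaultLocale="en" messages={messages}>
      <h1>
        <FormattedMessage id="intro" />
      </h1>
      {/* Numbers and dates are formatted according to the active locale */}
      <FormattedNumber value={1234.56} style="currency" currency="EUR" />
    </IntlProvider>
  );
}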

i18next

Another alternative that we can evaluate for our project is i18next. This JavaScript library goes beyond the standard i18n features, providing a whole suite to manage i18n in our projects. We can detect users’ language, cache translations, and even install plugins and extensions. As it was built in JavaScript, we can use this tool for websites as well as for mobile and desktop applications.
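
As a small, hypothetical example of plain i18next usage (in a real project, translations usually come from files or a backend plugin rather than being inlined like this):

import i18next from "i18next";

i18next
  .init({
    lng: "es",
    fallbackLng: "en",
    resources: {
      en: { translation: { intro: "Hello everyone!" } },
      es: { translation: { intro: "Hola a todos!" } },
    },
  })
  .then((t) => {
    console.log(t("intro")); // "Hola a todos!"
  });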

What About Remix?

When creating a website using Remix, we have different options to consider. As it is a React-based framework, we can use any of the previously mentioned libraries. However, we will go through two approaches that can fit better in your Remix projects. First, we will see how to localize content using remix-i18next, a Remix-specific library for i18n. Second, we will use a headless content management system as the source of our content’s different languages/locales.

remix-i18next

Based on i18next, Sergio Xalambrí, one of the main Remix contributors, created remix-i18next. This library offers features and modules similar to the underlying JavaScript library, but focused on Remix concepts and approaches. It’s easy to set up and use, production-ready, and has no extra requirements or dependencies. Let’s have a closer look at how to implement i18n into our Remix projects using remix-i18next.

First of all, we need to install some npm packages:

npm install remix-i18next i18next react-i18next i18next-browser-languagedetector i18next-http-backend i18next-fs-backend

All of them will help us to manage i18n on both the server and the client side of our website. We will also use them to set up our backend and define the logic that will detect the language from the user.

Now, we should add some configuration that will be used across the website from both the client and the server side. Let’s create a couple of JSON files with the translations of the different strings that we’ll use on our website, for example, public/locales/en/common.json and public/locales/es/common.json:

{
  "intro": "Hello everyone!"
}
{
  "intro": "Hola a todos!"
}

By naming the files “common.json”, we’re defining the namespace for the strings that we’ll list in them.

Now, let’s create a file called i18n.js. This file contains different configuration settings that we’ll use at the moment of initializing our i18n server.

export default {
  supportedLngs: ["en", "es"],
  fallbackLng: "en",
  defaultNS: "common",
  // Disabling suspense is recommended
  react: { useSuspense: false },
};

You can see more configuration options in the official i18next docs.

Now, create the file i18next.server.js, which contains logic that will be used in the entry.server.jsx file of our Remix project.

import Backend from "i18next-fs-backend";
import { resolve } from "node:path";
import { RemixI18Next } from "remix-i18next";
import i18n from "~/i18n"; // The configuration file we created

let i18next = new RemixI18Next({
  detection: {
    supportedLanguages: i18n.supportedLngs,
    fallbackLanguage: i18n.fallbackLng,
  },
  i18next: {
    ...i18n,
    backend: {
      loadPath: resolve("./public/locales/{{lng}}/{{ns}}.json"),
    },
  },
  backend: Backend,
});

export default i18next;

We’re basically initializing a new i18n server that will run with our Remix backend. We’re specifying the location of the JSON files containing the translations to be used.

Let’s add these features to our main Remix config files. First, we add some logic to be able to translate content client side. To do that, let’s edit our entry.client.jsx file:

import { RemixBrowser } from "@remix-run/react";
import { hydrateRoot } from "react-dom/client";
import i18next from "i18next";
import LanguageDetector from "i18next-browser-languagedetector";
import Backend from "i18next-http-backend";
import { I18nextProvider, initReactI18next } from "react-i18next";
import { getInitialNamespaces } from "remix-i18next";
import i18n from "./i18n"; // The configuration file we created

i18next
  .use(initReactI18next)
  .use(LanguageDetector)
  .use(Backend)
  .init({
    ...i18n, // The same config we created for the server
    ns: getInitialNamespaces(),
    backend: {
      loadPath: "/locales/{{lng}}/{{ns}}.json",
    },
    detection: {
      order: ["htmlTag"],
      caches: [],
    },
  })
  .then(() => {
    // After i18next init, hydrate the app
    hydrateRoot(
      document,
      // Wrap RemixBrowser in I18nextProvider
      <I18nextProvider i18n={i18next}>
        <RemixBrowser />
      </I18nextProvider>
    );
  });

We need to wait until the translations are loaded before hydrating in order to keep our web app interactive.

Let’s add the logic to the entry.server.jsx file now:

import { createInstance } from "i18next";
import Backend from "i18next-fs-backend";
import { resolve } from "node:path";
import { I18nextProvider, initReactI18next } from "react-i18next";
import i18next from "./i18next.server"; // The backend file we created
import i18n from "./i18n"; // The configuration file we created

...

export default async function handleRequest(
...
) {
  // We create a new instance of i18next
  let instance = createInstance();

  // We can detect the specific locale from each request
  let lng = await i18next.getLocale(request);
  // The namespaces the routes about to render wants to use
  let ns = i18next.getRouteNamespaces(remixContext);

  await instance
    .use(initReactI18next)
    .use(Backend)
    .init({
      ...i18n,// The config we created
      lng, // The locale we detected from the request
      ns,
      backend: {
        loadPath: resolve("./public/locales/{{lng}}/{{ns}}.json"),
      },
    });

  return new Promise((resolve, reject) => {
    ...

    let { pipe, abort } = renderToPipeableStream(
      // Wrap the server-rendered markup in I18nextProvider with our instance
      <I18nextProvider i18n={instance}>
        <RemixServer context={remixContext} url={request.url} />
      </I18nextProvider>,
      ...
    );
    ...
  });
}

Identifying users’ preferred language will allow us, among other things, to redirect them to certain routes.

Now we can start using the functionalities provided by remix-i18next to detect the user’s locale and deliver translated content based on that. Let’s edit the root.jsx file:

...

import { json } from "@remix-run/node";
import { useChangeLanguage } from "remix-i18next";
import { useTranslation } from "react-i18next";
import i18next from "~/i18next.server";

...

export let loader = async ({ request }) => {
  let locale = await i18next.getLocale(request);
  return json({ locale });
};

export let handle = {
  i18n: "common",
};

export default function App() {
  // Get the locale from the loader
  let { locale } = useLoaderData();
  let { i18n } = useTranslation();

  useChangeLanguage(locale);

  return (
    <html lang={locale} dir={i18n.dir()}>
      ...
    </html>
  );
}

The useChangeLanguage hook will change the language of the instance to the locale detected by the loader. Whenever we do something to change the language, this locale will change, and i18next will load the correct translations.

Now, we are able to translate content in any route:

import { useTranslation } from "react-i18next";

export default function MyPage() {
  let { t } = useTranslation();
  return <h1>{t("intro")}</h1>;
}

We use the t() function to show translated strings based on the list of messages that we defined in our JSON files.

In this example, we use one default namespace, but we can set up multiple namespaces if we want. You can read more about the t() function in the official i18next docs.

In case we want to translate content server side, we can use the getFixedT method inside our loaders and actions:

import i18next from "~/i18next.server";

...

export let loader = async ({ request }) => {
  let t = await i18next.getFixedT(request);
  let title = t("intro");
  return json({ title });
};
A Combination Of Remix And CMS

Together, we explored the available options to implement i18n with Remix. At the beginning of this article, we learned that i18n could result in hugely improved UX and SEO. As part of UX, it’s also important to include better DX.

The approach above creates translation files at the source code level, and we don’t yet have the logic to implement identifiers in URLs. To achieve this, let’s look at the approach of integrating a CMS. In this article, we’ll use Storyblok, which offers three different approaches to localizing content and handles determining languages and regions.

Note: If you want to create the connection between your Remix app and Storyblok, there’s a 5-minute tutorial that explains just how to do that.

After that, you can quickly clone a starter space that has all the necessary components and field types. This example space covers an approach called folder-level translation, which we’ll explain in the next section. To clone it, use this magic link: https://app.storyblok.com/#!/build/181387

Choose Between Three Approaches

Storyblok has three ways to create the layout to store localized content and determine languages and regions.

  1. Folder-level translation: localized content is divided at the folder level.
  2. Field-level translation: content is translated at the field level.
  3. Space-level translation: dedicated spaces (environments or repositories) hold specific localized content.

To cover identifiers in URLs, folder-level translation works perfectly, as each folder will only contain relevant localized content.

Also, identifiers can be modified from the folder settings via the slug.

By modifying the slug from the folder settings screen, the localized identifier appears in the URL of every story inside the Japanese folder. For example, the about page inside the Japanese folder already has a localized identifier in its URL.

To programmatically generate content pages, Remix has a feature called splat routes, which catch all slugs regardless of how deeply they are nested. Naming a file $.jsx enables this catch-all behavior.

app
├── root.jsx
└── routes
    ├── files
    │   └── $.jsx
    └── files.jsx

The difference from Remix’s dynamic segments is that a dynamic segment stops matching at the next /, while a splat captures everything that follows, slashes included. If the URL path is hello.com/ja/about/something, the splat route has a special param to capture the trailing segments of the URL.

export async function loader({ params }) {
  params["*"]; // "ja/about/something"
}

Using the splat route’s special param, let’s edit the $.jsx file.

import { json } from "@remix-run/node";
import { useLoaderData } from "@remix-run/react";
import {
  getStoryblokApi,
  useStoryblokState,
  StoryblokComponent,
} from "@storyblok/react";

export default function Page() {
  // useLoaderData returns the JSON-parsed data from the loader function
  let story = useLoaderData();
  story = useStoryblokState(story, {
    resolveRelations: ["featured-posts.posts", "selected-posts.posts"],
  });
  return <StoryblokComponent blok={story.content} />;
}

// The loader is our backend API, wired up through useLoaderData
export const loader = async ({ params, preview = false }) => {
  let slug = params["*"] ?? "home";
  slug = slug.endsWith("/") ? slug.slice(0, -1) : slug;
  let sbParams = {
    version: "draft",
    resolve_relations: ["featured-posts.posts", "selected-posts.posts"],
  };
  // …
  let { data } = await getStoryblokApi().get(`cdn/stories/${slug}`, sbParams);
  return json(data?.story, preview);
};

HINT: In the “Choose Between Three Approaches” section, we didn’t cover all three approaches in depth, but if you’d like to know more, all of them are covered in the Storyblok documentation linked in the resources below.

Summary

Together, we went through the facts and statistics that show the impact and importance of i18n and saw several options for implementing advanced i18n with Remix. Interestingly, a better i18n setup also provides better SEO and UX. Hopefully, this article provided you with new knowledge and insightful learnings.

  • Video version of this article: https://portal.gitnation.org/contents/lets-remix-to-localize-content-1072
  • Remix docs: https://remix.run/docs/en/v1
  • remix-i18next: https://github.com/sergiodxa/remix-i18next
  • Storyblok docs: https://www.storyblok.com/docs/guide/introduction

Understanding Authentication In Websites: A Banking Analogy

There is a strange ritual that web developers around the world have been perpetuating from the dawn of computers to modern days. Without this ritual, the web application cannot really exist. It is not “ready to go live,” a web developer would say. This ritual is the implementation of authentication.

You may have been through this ritual multiple times. But do you really understand what is happening?

This article is my attempt at making this ritual less obscure. It shouldn’t have to take a wizard to implement a good authentication system. Just a good banker!

Your Password Matched. Now What?

Authentication is the process of attributing a given request to a unique entity, usually a person. It then enables authorization, the process of allowing (or not) said person to access a resource, be it a web page or an API endpoint.

I will focus on the part I’ve always found the most difficult to understand: what happens after the user just submitted the login form and their credentials were confirmed as valid by the backend.

  app.post("/login", async (request, response) => {
    const { body } = request;
    const user = await getUserByCredentials(body.username, body.password);
    if (user) {
      // NOW WHAT???
    }

Your typical login endpoint. This article is about what happens afterward.

We are going to talk about tokens, CORS, cookies, and headers. Those concepts are sometimes quite abstract. To make them more real, I’ll take a shot at a banking analogy.

Authentication Relies On Tokens Like Banks Rely On Money

Let’s consider two common patterns:

  • JSON Web Tokens aka JWT.
    It’s an open standard. You’ll find useful resources at the bottom of this article if you want to go further.
  • Session-based authentication.
    It’s a pattern more than a standard. So the implementation may vary, but I’ll try to give an overview of how it works.

As a front-end developer, you may have no control over this design decision, so it’s important that you understand how both work.

As a full-stack or backend developer, you have control over this choice, so it’s even more important that you understand what you are doing. I mean, it’s kinda your job :)

A JWT Is Like A Banknote

With JSON Web Token-based authentication (JWT), during the login step, the server will pass the client a token that contains encoded, signed information, a set of “claims.” When the client makes a request, they pass this token alongside the request.

The server verifies that this token is indeed a valid token that it itself generated earlier. Then the server reads the token payload to tell who you are, based on a user ID, username, email, or whatever the payload is. Finally, it can authorize or refuse access to the resource.

Think of tokens as banknotes. A banker can print new banknotes like a server can create tokens. Money lets you buy things, or if I rephrase, it lets you access paid goods and services. The JWT lets you access secure services the same way.

If you want to use your money, either at the bank to credit your account or in a store, people will double-check that your banknotes are real. A real banknote is one that has been created by the bank itself. When the banknote is confirmed to be legit, they can read the value on it to tell how many things you can buy.

JWTs work the same way: the server can verify that it indeed created this token earlier, so it’s a real one, and then it can read the payload to confirm who you are and what your permissions are.
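
As a rough sketch (not from the article), here is how signing and verifying could look with the widely used jsonwebtoken npm package; the secret and the claim contents are assumptions.

import jwt from "jsonwebtoken";

const SECRET = process.env.JWT_SECRET; // assumed environment variable

// "Printing the banknote": sign a short-lived token carrying a small claim.
const token = jwt.sign({ userId: 42 }, SECRET, { expiresIn: "5m" });

// "Checking the banknote is real": verify() throws if the signature is wrong
// or the token has expired; otherwise it returns the decoded payload.
const payload = jwt.verify(token, SECRET);
console.log(payload.userId); // 42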

Revoking a token can be hard. The server has to maintain a list of blacklisted tokens that are no longer valid, like the bank would need to list robbed banknotes’ unique numbers.

Session-Based Authentication Is Like A Credit Card

From the front-end developer standpoint, the main difference between JWT and session-based authentication lies in what the server will return in case of a successful login.

In session-based authentication, instead of having a claim full of user information, you just get a “session id.”

From the backend developer’s standpoint, session-based authentication involves an additional database collection or table to store the sessions. A session is just an object very similar to the JWT, which can contain, for instance, a user id and an expiration date.

This means that the server keeps a lot of control over the authentication. It can log a user out by removing the session from the database, whereas it cannot destroy JWTs, since it doesn’t store them.

Session-based authentication is comparable to credit cards. You go to the bank to open your account. The banker will check your id and issue a credit card with an expiration date. This is the authentication process.

The credit card itself has barely any value: it’s just plastic and a microchip whose purpose is to uniquely identify your account. It can easily be revoked by the bank, usually after a certain period of time, but also when you declare your card stolen.

When you want to pay for something, the store will communicate with your bank to check that your credit card is valid and has a positive balance.

When you access an authenticated resource, the server will check if the session id matches an open session and then if this session matches a user that is authorized to access this resource.
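
Here is a minimal sketch (not from the article) of this pattern using the express-session npm package. getUserByCredentials is the same hypothetical helper as in the login endpoint shown earlier, and the option values are assumptions.

import express from "express";
import session from "express-session";

const app = express();
app.use(express.json());
app.use(
  session({
    secret: process.env.SESSION_SECRET, // assumed environment variable
    resave: false,
    saveUninitialized: false,
    cookie: { httpOnly: true, secure: true, sameSite: "lax" },
  })
);

app.post("/login", async (request, response) => {
  const user = await getUserByCredentials(request.body.username, request.body.password);
  if (!user) {
    return response.sendStatus(401);
  }
  // "Issuing the credit card": only the session id travels in the cookie;
  // the session data itself stays on the server.
  request.session.userId = user.id;
  response.sendStatus(204);
});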

Tokens Are As Precious As Banknotes And Credit Cards. Don’t Get Them Stolen!

It’s easy for companies and banks to store money. They have those big reinforced-steel safes with alarms and sometimes even dragons. Likewise, it’s easy for servers to talk securely to one another.

But websites are like private persons. They can’t have dragons, so they have to imagine specific approaches to store their precious tokens.

Where Do You Store Your Money?

Whatever the pattern you pick to authenticate users, JSON web tokens and session ids are precious things. In the remainder of this article, I will treat them similarly, apart from a few technical subtleties. I’ll call either of them the “authentication token,” the thing you show the server to prove that you are authenticated, like money for paying stuff.

Since you have to pass the token alongside every request you make to the server — you need to store it somewhere on the client side.

Here, I’d like to focus more specifically on the case of websites, where the web browser is in charge of sending the requests. You’ll find many articles and documentation in the wild that do not clarify whether they apply to server-server communication or browser-server. The latter is a trickier, uncontrolled context that deserves its own specific patterns.

So, the banker just handed you either a new credit card or a bunch of banknotes. Where do you stash them? In a compartment concealed in the heel of your shoe, in your mailbox, beneath your mattress, or maybe, in your fancy fanny pack?

Not All Requests Are Born Equal: Loading A Web Page vs. Calling An API

We can divide authenticated content into two categories: web pages and API endpoints. From a technical standpoint, both are server endpoints. The former returns HTML content or files, the latter — any kind of content, often JSON data. They differ in the way they are consumed.

Web pages are accessed via a web browser that triggers GET requests automatically, depending on the current URL.

APIs are called using JavaScript code. This reliance on APIs and JavaScript is at the core of the Jamstack, REST, and Single-Page-Application philosophies.

Thus, let’s differentiate two use cases:

  1. You want to secure access to the web page itself. Meaning the user can’t even see the structure of your page or any HTML.
  2. You want to secure calls from your web page to your API. Anyone can see your web page structure, but they can’t get any data.

If you combine both, unauthenticated users can’t see the page structure and can’t get any data from your API.

Use Case 1: Securing A Web Page Access With Cookies

I’ll cut it short: to secure web page access, you have no other choice but to use cookies, ideally HTTP-only, secure cookies.

From a technical standpoint, a cookie is a piece of data that should be sent alongside every request to a specific domain. They are part of the HTTP protocol.

That’s a pretty broad scope. When a website cookie popup differentiates “essential cookies” from other cookies, those essential cookies are often dedicated to authentication. The others are just a way to keep track of the product pages you’ve visited to later shove them in your face to the point of literally optimizing the billboard displays in your neighborhood as if you really, really wanted to buy two freezers within the same month.

Setting a cookie is possible in both directions. Client-side JavaScript can set cookies to be sent to the server. That’s how advertisement tracking works (though GDPR is forcing advertisers to use server-side logic more and more these days).

But the server can also tell the browser to store a cookie using the Set-Cookie header in the response.

This convenient feature lets the backend set what we call “HTTP-only” cookies: cookies that cannot be read by JavaScript. And “secure” cookies are only set through HTTPS for better security.

Important note: Using HTTP-only cookies prevents XSS attacks (script injection to steal the token) but not CSRF attacks (forging requests on behalf of your authenticated users, basically impersonating them). You need to combine multiple strategies to parry all possible attacks. I invite you to read more on these topics if you manage a non-trivial website.

During the authentication process, the Set-Cookie header should be used in response to the “login” request so the client becomes authenticated via a token.
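
As a rough illustration (the cookie name and lifetime are assumptions, not from the article), the login response could then carry a header like this:

Set-Cookie: auth_token=abc123; HttpOnly; Secure; SameSite=Lax; Path=/; Max-Age=3600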

Cookies are like a wallet. You can store fidelity cards in it and a picture of your cat, but also duller things, like your money. It’s in your pocket or your handbag, so it’s hard to access for other people (HTTP-only, no JavaScript access) if you don’t take it out in unsafe places (secure). Yet, you always have your token with you if needed.

Browsers rely a lot on cookies, and they send them automatically alongside every request they trigger as long as the domain and path match, so you don’t have to worry about them. If you carry your wallet with you anytime you go out, you’re always ready to pay.

Use Case 2: Securing API Calls. The Cookie Scenario

Now, let’s authenticate calls to our API, which feeds the website with data. The easiest option would be to use a cookie, as we did in the previous section, to secure web pages. But contrary to the web page, there is also another option — using web storage. Which one is the most suited is debatable. Let me start with cookies.

The cookie that contains the auth token is sent automatically alongside requests triggered by the browser when you navigate a website or when you submit a form.

But requests triggered by JavaScript code do not fall into this category. When you trigger a request programmatically, you need a bit more configuration to send cookies.

Remember that the cookie should be HTTP-only, so you can’t read the cookie content with JavaScript nor inject your token in the Authorization header. You could do that with a cookie that is not HTTP-only, but then using a cookie doesn’t make sense in the first place.

All existing libraries capable of sending requests from a web page are ultimately based on the browser’s built-in JavaScript APIs, which means either XMLHttpRequest or fetch. You have to rely on their configuration to enable sending HTTP-only cookies alongside the request.

In the case of fetch, this option is named credentials. Fortunately for you, fellow front-end developers, it’s very easy to use.

fetch("http://localhost:8000/account/signup", {
  method: "POST",
  body: JSON.stringify({ username: "foo", password: "barbar" }),
  // This will include credentials (here, cookies) in the request
  credentials: "include",
});

With XHR, the option is named withCredentials and behaves similarly.

Note: Your server must also confirm that it indeed accepts credentials by setting the Access-Control-Allow-Credentials header to true in the responses. This option is symmetrical: it controls both sending cookies to the server and accepting the Set-Cookie header from it. So don’t forget to set it to true for the call to the login endpoint.

If the credentials option is not set and the server answers with a Set-Cookie header… the browser will silently ignore this attempt to set the cookie without even a warning. When this happens, start by double-checking your credentials option and then CORS if applicable (see next section).

Oops, I’ve almost forgotten your banking analogy! It’s a tough one but let’s say that you probably have your credit card or a few banknotes always there in your wallet. But some services expect a prepaid card, and you may need to explicitly remember to take them with you. That’s the best analogy I could achieve. Comment if you can do better!

Return Of Use Case 2: Securing API Calls. The Web Storage Scenario

Remember, I said earlier that for API calls, you have the choice to either store the token in an HTTP-only cookie, like you would do to secure a web page, or to store the token in the web storage. By web storage, we mean either sessionStorage or localStorage.

I’ve seen and used both approaches, cookies and web storage, but I sincerely don’t have the expertise to assert that one is better than the other. I’ll stick to describing how it works so you can make an enlightened decision.

Codewise, they are much easier to use than HTTP-only cookies because web storage is available to JavaScript code. You can get the token from the login response payload, store it in localStorage, and insert it into the Authorization header in later requests. You have full control of the process without relying on some implicit browser behavior.

headers: {
  "Authorization": `Bearer ${window.localStorage.getItem("auth_token")}`,
}

Web storage is immune to certain attacks, namely Cross-Site Request Forgery. That’s because, contrary to cookies, they are not automatically tied to requests, so an attacker will have trouble forcing you to give your token. Think of a piggy bank. You can’t just get money out of it. It’s, however, very sensitive to script injection, where an attacker manages to run JavaScript and reads the token.

Therefore, if you go the web storage way, you’ll want to explore strategies that prevent XSS attacks from happening in the first place.

JWT Specifics: What Information They Should Contain And How To Handle The Refresh Token

You should avoid storing critical information in a JSON web token, even when using HTTP-only cookies. If someone still manages to steal and decrypt the token, they will be able to impersonate the user, but at least they won’t be able to get much information from the token itself.

You can afford to lose a five-dollar banknote; it’s more problematic to lose a $5,000 banknote. By the way, if such a thing as a $5,000 banknote exists, please send me a specimen to prove it. Thank you.

JWTs are meant to be short-lived, like banknotes, which would wear out quickly when used a lot. Since they are hard to revoke, giving them a 5-minute lifetime guarantees that even a stolen JWT cannot be a big problem.

Since you don’t want the user to log in again every 5 minutes, the token can be combined with a longer-lived refresh token. This second token can be used to get a new JWT.

A best practice is to store the refresh token in a cookie on a very specific path like /refresh (so it’s not sent with all requests, contrary to the JWT) and to make it a one-time-only token.
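
For illustration only (the cookie name and lifetime are assumptions), such a refresh token cookie could be set like this, scoped to the /refresh path:

Set-Cookie: refresh_token=xyz789; HttpOnly; Secure; SameSite=Strict; Path=/refresh; Max-Age=2592000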

A refresh token is like bringing your ID card to get more money from your bank account. You don’t do that often, and you probably want to store your ID even more safely than your banknotes.

CORS Is How Shops Communicate With Their Bank

Formally, CORS (Cross-Origin Resource Sharing) is a protocol used by servers in order to instruct browsers to selectively relax some of the Same-Origin Policy (SOP) restrictions on network access for some trusted clients.

Less formally, the server can tell your browser that the request you send is legit based on the request’s origin. If the server says it’s not legit, the browser won’t send the request.

For instance, https://www.foobar.com, https://qarzed.fake, and http://localhost:8080 are possible origins. The port and protocol matter.

An API hosted on https://api.foobar.com will most often allow requests only from https://www.foobar.com and https://mobile.foobar.com.

I will focus here on API calls triggered with fetch in JavaScript, and only if the API lives in another Web origin. CORS also affects more advanced use cases like embedding images from another website in your own website, iframe, navigating from one site to another, and so on, but that’s beyond the scope of authentication per se.

Websites are like stores. People can enter them and try to buy things. But the store also has its own complicated internal back office logic. Namely, it relies on a bank to process payments.

Any shop owner can try to connect to any bank. Any website can try to connect to any API. But the payment won’t be processed by the bank if they don’t have a company account there.

Likewise, if you host the website on www.foobar.com and the API on www.foobar.com/api, there is only one origin: www.foobar.com. No need for CORS because there is no “cross-origin” call involved. You’ll need to worry about CORS if, instead, you host the API on api.foobar.com. We then have two distinct origins, api.foobar.com and www.foobar.com.

This also applies to using different ports on localhost during development. Even 127.0.0.1 is a different origin from localhost, as origins are compared directly as strings and not based on their actual IP addresses.

Some big companies, like car sellers, have their own internal bank. It’s easier for them to communicate with this bank if it’s literally located in the same office. They are like full-stack monolithic applications.

But if this internal bank is located in a different country (API separate from the website), this doesn’t work anymore. It could as well be a traditional bank with multiple clients (a third-party API). CORS is needed either way.

CORS is a concept specific to browser-server communication. When a server talks to another server, the Same-Origin Policy doesn’t apply. The API may either block those requests with no origin completely or, on the contrary, accept them, or filter them based on IP and so on.

Banks have specific patterns of communicating with each other that are different from how they communicate with their client companies.

A Polite Cross-origin Discussion Between A Server And A Client, Using Preflight Requests

Here’s a more concrete vision of what happens during a cross-origin request and how the server response headers should be set.

From the browser standpoint, whenever a website sends a “non-simple” request to an API, the browser will first send a preflight request. It’s a request to the same URL but using the OPTIONS method, which is dedicated to checking whether you can actually consume this API endpoint.

Fetching JSON data falls into this “non-simple” category in standard browsers, as well as calling your login endpoint if the auth API doesn’t live on the same domain as your website, so this step is extremely important.

The server receives this request, which includes an Origin header. It checks its own list of accepted origins. If the origin is allowed, it will respond with the right Access-Control-Allow-Origin header.

This server header should match the website’s origin. It can also be a wildcard * if the API is totally open, but that works only for requests without credentials, so forget about this if your API requires authentication.

If the response header doesn’t match, the browser will trigger an error at this point because it already knows that it can’t call this API. Otherwise, it will keep going and send the actual request.
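
Here is roughly what that exchange could look like for our example domains (simplified; the exact headers will vary):

OPTIONS /data HTTP/1.1
Host: api.foobar.com
Origin: https://www.foobar.com
Access-Control-Request-Method: GET
Access-Control-Request-Headers: Content-Type

HTTP/1.1 204 No Content
Access-Control-Allow-Origin: https://www.foobar.com
Access-Control-Allow-Credentials: true
Access-Control-Allow-Methods: GET, POST
Access-Control-Allow-Headers: Content-Type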

A store (the API) can decide to accept only certain types of credit cards. Some stores accept only their own card, and they don’t even use CORS (same-origin scenario). Some stores accept only Visa or Mastercards (cross-origin scenario). The cashier (the browser) won’t let you pay if they are told your card is not accepted by the store.

There are a few other Access-Control headers that provide more information, like accepted methods or headers. Allow-Origin is the most basic one, as well as Allow-Credentials, so cookies and Set-Cookie headers are properly enabled when using fetch.

Access-Control-Allow-Origin: https://www.foobar.com
Access-Control-Allow-Credentials: true

Expected response headers for an authenticated call (or the “login” form submission) if the website lives on https://www.foobar.com and the API is on another domain or port.

Note: The server specifies the CORS policy, and the browser enforces it. If suddenly everyone used an unsafe browser run by pirates, they could swarm your server with forged requests. Fortunately, people usually don’t use a shady browser they found on a USB key itself found lying on the ground of a dark alley at 3 am.

The header things and the browser behavior are just politeness. The browser is an excessively polite software. It will never spam the server with requests that will be rejected anyway. And it won’t talk to rude servers that don’t set the proper headers. That’s why the Access-Control-Allow-Origin response header must be correctly set for cross-origin requests to work.

Before going as far as aiming a shotgun at a customer, the banker will probably tell you, “Sir, you entered the wrong bank office,” and the customer will probably answer, “Oh, sorry, my mistake. I will leave immediately!”

Cookies Or Not Cookies: The SameSite Option

Setting the credentials option in your fetch request is not always enough for the cookie to be sent, and it only applies to programmatic requests, not to page loads. The SameSite attribute of cookies lets you configure when a cookie should be sent, depending on the current site.

The concept of “Site” is slightly different from the concept of domain:

  • https://api.foobar.com and https://www.foobar.com are 2 different domains.
  • They are, however, the same site. Nowadays, the scheme matters, too (HTTP vs. HTTPS).
  • There is an exception to this rule, e.g., in multi-tenant architectures: foo.github.io and bar.github.io will be different sites despite having the same domain, and the server owner can configure that to their will.

If the Access-control response headers are correctly set, and the preflight request succeeds, the browser will proceed with the actual request.

But if SameSite is not set with the right value in the cookie containing the authentication token, this cookie won’t be sent to the server. When loading a page, you will be considered unauthenticated by the server just because it literally doesn’t have your cookie.

The SameSite=Lax value is usually the most appropriate. It sends cookies only within the same site, plus when the user comes from another site and is redirected to your page (top-level navigations only). This is the current default in most browsers.

With SameSite=Lax, the website www.foobar.com will automatically include the authentication cookie when it sends requests to api.foobar.com, which is part of the same site. Authentication will work as expected.

Sec-Fetch Headers Help Detect Cross-Origin Requests But Are Not Supported Everywhere Yet

Related to CORS, you may also hear about the Sec-Fetch request headers. They provide information about the request context to the server. For instance, if Sec-Fetch-Site is same-origin, there is no need to check the Origin header. It’s slightly more intuitive than manually checking the request origin against predefined rules. If it’s same-site, you still need to add the required CORS headers in the response because the site and the origin are different concepts.

However, those headers are not sent by all browsers. Namely, Safari doesn’t support them at the time of writing. Therefore, they can only act as a secondary solution to identify a request’s origin.

A bank can identify you via your passport, but not everybody has a passport, so you will always need to resort to a good old ID card. Passports are still a cool way to identify people when available. That’s what the Sec-Fetch headers are.

Keep in mind that attackers may trick users into sending requests they don’t want to, but they still have no control over which browser will send the request. Thus, Sec-Fetch headers are still useful despite being supported only by some browsers. When they are present, they are a reliable piece of information.
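
Here is a minimal sketch (not from the article) of using these headers as a secondary signal. It is written against the standard Request object used by fetch-based servers, and the trusted origin is an assumption.

function isTrustedBrowserRequest(request) {
  const secFetchSite = request.headers.get("Sec-Fetch-Site");
  if (secFetchSite) {
    // The "passport" is available: our own pages send same-origin or same-site.
    return secFetchSite === "same-origin" || secFetchSite === "same-site";
  }
  // No Sec-Fetch headers (e.g., Safari): fall back to the good old "ID card"
  // and check the Origin header against an allow-list.
  const origin = request.headers.get("Origin");
  return origin === null || origin === "https://www.foobar.com";
}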

More broadly, cross-origin and cross-site security patterns are based on the fact that users are using legitimate browsers that should never try to voluntarily mislead servers about the origin. Beware of more elaborate attacks, such as subdomain takeover, that can still bypass these security measures.

You may think, what about servers, then? Server requests are able to spoof a request origin, effectively bypassing CORS. Well, that’s right: CORS is a browser-server security mechanism; it simply doesn’t apply to server-server calls. Someone can build an API that calls your own API and serve the result to another website. True.

However, the thief will have to host a server, which isn’t free, so they will pay actual money for this API call, and the server IP may be traced back to them. It’s way less attractive than forcing browsers to send the request directly to your API, which would be free and trace back to the first victim, your user, and not to the attacker. Thanks to CORS, this worst-case scenario is not possible.

No, We Don’t Take Checks: Don’t Confuse A Web Page Access And An API Call

If you’ve stumbled upon this article, you have probably read a lot of documentation before that. You might have encountered various patterns that seem very different from one another and may be very different from what I’ve been describing so far.

I’d like to point out that this article is about authentication for websites. This is, I believe, the root of all confusion. First, the requests are sent by a browser, not by another server. Then, a page of a website can be accessed via the URL bar, and the website can send calls to an API via JavaScript; that’s two very different situations.

We have detailed the technical differences between each scenario earlier in this article. But let’s recap again to better understand the most common sources of confusion.

Using Browser Storage (Session, Local, And Friends) Is Like Using Your Mailbox As A Piggy Bank

Setting your token in localStorage is like storing money in your physical mailbox. Well, technically, it’s safe. It’s even used to temporarily store your mail. Session storage is similar; it’s just like getting your mail every day instead of letting it rot for weeks. Be nice to the postman!

But either way, the storage is accessible to JavaScript. It’s not wide open: like your mailbox, it most probably has a lock. But it’s still sitting outside.

Someone who is really mean could craft a key and go as far as discreetly stealing your mail for days. That’s what would happen to your users if someone manages to run a script injection attack in your application. They can craft a JS script that reads the localStorage, and sends them the tokens of any of your users affected by the breach!

There are ways to prevent cross-site scripting attacks. It’s not inevitable, and some would argue that web storage is still better than using an HTTP-only cookie! But if you go the web storage way, don’t forget to protect your mailbox.

JWTs Passed As Header: It’s A Pattern For API Calls Only

I’d like to go one step further and understand why so many developers are tempted to use the browser storage without even considering the cookie option.

I think there are two main reasons:

  1. The API doesn’t use Set-Cookie during the login step. It often happens in APIs that were not specifically designed to be accessed from a browser, and they use an authentication pattern that works for servers, too (so no cookies). Your back-end developer will claim it was a “design choice.” Don’t trust them.
  2. The client needs to set an Authorization header with the token. That’s just the symmetrical issue; the API chose not to rely on cookies.

You’ll meet a lot of documentation in the wild that shows headers like this:

Authorization: Bearer <token>

Setting the Authorization header implies you cannot use an HTTP-Only cookie. Of course, you can set the headers of a request using JavaScript. However, this means that the token must be available to JavaScript in the first place. Let’s repeat it once again: it’s ok not to use a cookie and prefer web storage, but be very careful with XSS attacks!

Moreover, this pattern works only for programmatic requests with fetch and XHR. You can’t secure access to the web page itself because the browser won’t set this header automatically during navigation. You just really can’t.

If you need to secure web pages, the backend of your website (which may differ from your actual API that serves data) should use HTTP-only cookies, or you should set a non-HTTP-only cookie manually. Either way, be careful that by using cookies, you are introducing a new vector of attacks.

Not all websites need to be secured this way, though. It’s actually pretty uncommon for public-facing websites. Remember Blitz.js’ motto “secure data, not pages.” But the cookie way is still a pattern you must be aware of.

If you really need to secure a web page, reach out to your backend developer and explain to them that cookies are good. They won’t contradict this claim.

If it’s really not possible to improve the API, you might want to adopt a pattern named “backend-for-frontend” (BFF). Basically, have a tiny server owned by the front-end team that takes care of calling authenticated APIs server-side, among other things. It works pretty well with frameworks like Remix or Next.js that have built-in server-oriented features.

Side Note On Basic Auth: Looks Like A Header, Behaves Like A Cookie

Of course, every rule must have its exceptions. There is only one common scenario I know where the token can be passed as a request header without using JavaScript: using basic authentication.

Basic authentication is an unsafe pattern, as identifiers are sent alongside every request, not just a token but the actual username and password. It’s a bit like asking your local drug dealer to pay your rent and giving them your ID card and $2,000 that they should kindly bring to your landlord. It’s only ok if this is a fake ID. Actually, no, fake IDs and drug dealers are not ok. But you get the point!

It’s still interesting for a few use cases, like quickly securing the demo of a website, as long as you can accept the password being stolen without many consequences, typically when the password has been generated by an admin for a temporary occasion (NOT a user-set password that could be reused for multiple services!).

In basic authentication, the browser sets the Authorization header for you. You still need to set the credentials option properly when using fetch programmatically so that the header is also set in this case. But you don’t have to manually tweak the header. It really behaves like a cookie.

That’s actually one reason why this fetch option is named “credentials” and not just “cookies”: it also applies to the basic authentication header.

Conclusion

Wow, if you’ve read this far, you definitely earned your right to a quick summary!

But before that, I have another gift for you: an open-source demo of authentication with Deno and HTMX for the front end!

This is a very basic setup, but it puts the concepts described in this article into action using the “cookie way.”

Here is your well-deserved TL;DR:

  • JWT is like banknotes, and sessionId is like credit cards. They are two authentication patterns, but either way, you want to store those tokens safely client-side.
  • One safe place to store tokens in a browser is an HTTP-only and secure cookie, set via the Set-Cookie header of the login response. It cannot be stolen by malicious JavaScript code. Web storage is another possibility if you only secure API calls and not web page access. If you pick web storage, then be very careful with XSS attacks.
  • Cookies are automatically sent alongside every request provided you use the right options, namely the path and SameSite attributes of the cookie. Like a wallet, you would carry it on you all the time and take it out of your bag only in safe places when a payment is needed.
  • When navigating via URL, the browser takes care of sending the cookie containing the auth token (if properly set). But you may also need to call authenticated APIs using JavaScript code. Fetch credentials and XHR withCredentials options have to be configured correctly, so cookies are also sent alongside those programmatic requests. Don’t forget to set the Access-Control-Allow-Credentials response header as well.
  • Auth is about politeness: the server must set the proper headers for everything to work. You can’t enter a bank and yell at people to get your money.
  • This is particularly true for cross-origin requests. Companies must have secure and polite discussions with their bank, which can be separate organizations or located in a distant country. That’s what CORS is for websites and APIs. Don’t forget to properly set the Access-Control-Allow-Origin and Access-Control-Allow-Credentials headers in the response.
  • If you store your savings in your mailbox, put a camera on top of it, and if you store your token in the web storage, protect it against XSS attacks.
  • Banks don’t treat people like they treat other banks. Don’t confuse web page patterns (which have to rely on cookies and the browser’s built-in features) with API patterns (which rely on headers and can be implemented using client-side JavaScript).
  • Basic auth is a fun pattern that blurs the line, as it uses an Authorization header like for APIs, but it can be used for webpages too. It’s a specific and very insecure pattern that is only meant for a handful of limited use cases. Basic auth headers are a reason why fetch and XHR options are named “credentials” and not just “cookies.”

A Guide To Command-Line Data Manipulation

Allow me to preface this article by saying that I’m not a terminal person. I don’t use Vim. I find sed, grep, and awk convoluted and counter-intuitive. I prefer seeing my files in a nice UI. Despite all that, I got into the habit of reaching for command-line interfaces (CLIs) when I had small, dedicated tasks to complete. Why? I’ll explain all of that below. In this article, you’ll also learn how to use a CLI tool named Miller to manipulate data from CSV, TSV and/or JSON files.

Why Use The Command Line?

Everything that I’m showing here can be done with regular code. You can load the file, parse the CSV data, and then transform it using regular JavaScript, Python, or any other language (a rough sketch of that route follows the list below). But there are a few reasons why I reach for the command line whenever I need to transform data:

  • Easier to read.
    It is faster (for me) to write a script in JavaScript or Python for my usual data processing, but a script can be confusing to come back to. In my experience, command-line manipulations are harder to write initially but easier to read afterward.
  • Easier to reproduce.
    Thanks to package managers like Homebrew, CLIs are much easier to install than they used to be. No need to figure out the correct version of Node.js or Python; the package manager takes care of that for you.
  • Ages well.
    Compared to modern programming languages, CLIs are old. They change a lot more slowly than languages and frameworks.
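
For comparison, here is a rough sketch of what the “regular code” route might look like in plain JavaScript, before we switch to Miller. It assumes Node.js and a hypothetical data.csv with a header row, a color column, and no quoted fields containing commas.

// Rough sketch of the "regular code" route: filter the rows of a CSV file.
import { readFileSync, writeFileSync } from "node:fs";

const [header, ...rows] = readFileSync("data.csv", "utf8").trim().split("\n");
const colorIndex = header.split(",").indexOf("color");

// Keep only the rows whose color field is not "red".
const kept = rows.filter((row) => row.split(",")[colorIndex] !== "red");

writeFileSync("filtered.csv", [header, ...kept].join("\n") + "\n");
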
What Is Miller?

The main reason I love Miller is that it’s a standalone tool. There are many great tools for data manipulation, but every other tool I found was part of a specific ecosystem. The tools written in Python required knowing how to use pip and virtual environments; for those written in Rust, it was cargo, and so on.

On top of that, it’s fast. The data files are streamed, not held in memory, which means that you can perform operations on large files without freezing your computer.

As a bonus, Miller is actively maintained; John Kerl really keeps on top of PRs and issues. As a developer, I always get a satisfying feeling when I see a neat and maintained open-source project with great documentation.

Installation
  • Linux: apt-get install miller or Homebrew.
  • macOS: brew install miller using Homebrew.
  • Windows: choco install miller using Chocolatey.

That’s it, and you should now have the mlr command available in your terminal.

Run mlr help topics to see if it worked. This will give you instructions to navigate the built-in documentation. You shouldn’t need it, though; that’s what this tutorial is for!

How mlr Works

Miller commands work the following way:

mlr [input/output file formats] [verbs] [file]

Example: mlr --csv filter '$color != "red"' example.csv

Let’s deconstruct:

  • --csv specifies the input file format. It’s a CSV file.
  • filter is what we’re doing on the file, called a “verb” in the documentation. In this case, we’re filtering every row that doesn’t have the field color set to "red". There are many other verbs like sort and cut that we’ll explore later.
  • example.csv is the file that we’re manipulating.

Operations Overview

We can use these verbs to run specific operations on our data. There’s a lot we can do. Let’s explore.

Data

I’ll be using a dataset of IMDb ratings for American TV dramas created by The Economist. You can download it here or find it in the repo for this article.

Note: For the sake of brevity, I’ve renamed the file from IMDb_Economist_tv_ratings.csv to tv_ratings.csv.

Above, I mentioned that every command contains a specific operation or verb. Let’s learn our first one, called head. What it does is show you the beginning of the file (the “head”) rather than print the entire file in the console.

You can run the following command:

mlr --csv head ./tv_ratings.csv

And this is the output you’ll see:

titleId,seasonNumber,title,date,av_rating,share,genres
tt2879552,1,11.22.63,2016-03-10,8.489,0.51,"Drama,Mystery,Sci-Fi"
tt3148266,1,12 Monkeys,2015-02-27,8.3407,0.46,"Adventure,Drama,Mystery"
tt3148266,2,12 Monkeys,2016-05-30,8.8196,0.25,"Adventure,Drama,Mystery"
tt3148266,3,12 Monkeys,2017-05-19,9.0369,0.19,"Adventure,Drama,Mystery"
tt3148266,4,12 Monkeys,2018-06-26,9.1363,0.38,"Adventure,Drama,Mystery"
tt1837492,1,13 Reasons Why,2017-03-31,8.437,2.38,"Drama,Mystery"
tt1837492,2,13 Reasons Why,2018-05-18,7.5089,2.19,"Drama,Mystery"
tt0285331,1,24,2002-02-16,8.5641,6.67,"Action,Crime,Drama"
tt0285331,2,24,2003-02-09,8.7028,7.13,"Action,Crime,Drama"
tt0285331,3,24,2004-02-09,8.7173,5.88,"Action,Crime,Drama"

This is a bit hard to read, so let’s make it easier on the eye by adding --opprint.

mlr --csv --opprint head ./tv_ratings.csv

The resulting output will be the following:

titleId   seasonNumber title            date          av_rating   share   genres
tt2879552      1       11.22.63         2016-03-10    8.489       0.51    Drama,Mystery,Sci-Fi
tt3148266      1       12 Monkeys       2015-02-27    8.3407      0.46    Adventure,Drama,Mystery
tt3148266      2       12 Monkeys       2016-05-30    8.8196      0.25    Adventure,Drama,Mystery
tt3148266      3       12 Monkeys       2017-05-19    9.0369      0.19    Adventure,Drama,Mystery
tt3148266      4       12 Monkeys       2018-06-26    9.1363      0.38    Adventure,Drama,Mystery
tt1837492      1       13 Reasons Why   2017-03-31    8.437       2.38    Drama,Mystery
tt1837492      2       13 Reasons Why   2018-05-18    7.5089      2.19    Drama,Mystery
tt0285331      1       24               2002-02-16    8.5641      6.67    Action,Crime,Drama
tt0285331      2       24               2003-02-09    8.7028      7.13    Action,Crime,Drama
tt0285331      3       24               2004-02-09    8.7173      5.88    Action,Crime,Drama

Much better, isn’t it?

Note: Rather than typing --csv --opprint every time, we can use the --c2p option, which is a shortcut we’ll use in most of the following commands.

Chaining

That’s where the fun begins. Rather than run multiple commands, we can chain the verbs together by using the then keyword.

Remove Columns

You can see that there’s a titleId column that isn’t very useful. Let’s get rid of it using the cut verb.

mlr --c2p cut -x -f titleId then head ./tv_ratings.csv

It gives you the following output:

seasonNumber  title            date         av_rating   share    genres
     1      11.22.63          2016-03-10    8.489       0.51     Drama,Mystery,Sci-Fi
     1      12 Monkeys        2015-02-27    8.3407      0.46     Adventure,Drama,Mystery
     2      12 Monkeys        2016-05-30    8.8196      0.25     Adventure,Drama,Mystery
     3      12 Monkeys        2017-05-19    9.0369      0.19     Adventure,Drama,Mystery
     4      12 Monkeys        2018-06-26    9.1363      0.38     Adventure,Drama,Mystery
     1      13 Reasons Why    2017-03-31    8.437       2.38     Drama,Mystery
     2      13 Reasons Why    2018-05-18    7.5089      2.19     Drama,Mystery
     1      24                2002-02-16    8.5641      6.67     Action,Crime,Drama
     2      24                2003-02-09    8.7028      7.13     Action,Crime,Drama
     3      24                2004-02-09    8.7173      5.88     Action,Crime,Drama

Fun Fact

This is how I first learned about Miller! I was playing with a CSV dataset for https://details.town/ that had a useless column, and I looked up “how to remove a column from CSV command line.” I discovered Miller, loved it, and then pitched an article to Smashing Magazine. Now here we are!

Filter

This is the filter verb I showed at the beginning of the article. We can remove all the rows that don’t match a specific expression, letting us clean our data with only a few characters.

If we only want the ratings of the first season of every series in the dataset, this is how to do it:

mlr --c2p filter '$seasonNumber == 1' then head ./tv_ratings.csv

Sorting

We can sort our data based on a specific column, just as we would in a UI like Excel or macOS Numbers. Here’s how you would sort the data so that the highest-rated seasons come first:

mlr --c2p sort -nr av_rating then head ./tv_ratings.csv

The resulting output will be the following:

titleId   seasonNumber title                         date         av_rating  share   genres
tt0098887      1       Parenthood                    1990-11-13   9.6824     1.68    Comedy,Drama
tt0106028      6       Homicide: Life on the Street  1997-12-05   9.6        0.13    Crime,Drama,Mystery
tt0108968      5       Touched by an Angel           1998-11-15   9.6        0.08    Drama,Family,Fantasy
tt0903747      5       Breaking Bad                  2013-02-20   9.554      18.95   Crime,Drama,Thriller
tt0944947      6       Game of Thrones               2016-05-25   9.4943     15.18   Action,Adventure,Drama
tt3398228      5       BoJack Horseman               2018-09-14   9.4738     0.45    Animation,Comedy,Drama
tt0103352      3       Are You Afraid of the Dark?   1994-02-23   9.4349     2.6     Drama,Family,Fantasy
tt0944947      4       Game of Thrones               2014-05-09   9.4282     11.07   Action,Adventure,Drama
tt0976014      4       Greek                         2011-03-07   9.4        0.01    Comedy,Drama
tt0090466      4       L.A. Law                      1990-04-05   9.4        0.1     Drama

We can see that Parenthood, from 1990, has the highest rating on IMDb — who knew!

Saving Our Operations

By default, Miller only prints your processed data to the console. If we want to save it to another CSV file, we can use the > operator.

If we wanted to save our sorted data to a new CSV file, this is what the command would look like:

mlr --csv sort -nr av_rating ./tv_ratings.csv > sorted.csv

Convert CSV To JSON

Most of the time, you don’t use CSV data directly in your application. You convert it to a format that is easier to read or doesn’t require additional dependencies, like JSON.

Miller gives you the --c2j option to convert your data from CSV to JSON. Here’s how to do this for our sorted data:

mlr --c2j sort -nr av_rating ./tv_ratings.csv > sorted.json

Case Study: Top 5 Athletes With The Highest Number Of Medals In Rio 2016

Let’s apply everything we learned above to a real-world use case. Let’s say that you have a detailed dataset of every athlete who participated in the 2016 Olympic Games in Rio, and you want to know which five athletes won the highest number of medals.

First, download the athlete data as a CSV, then save it in a file named athletes.csv.

Let’s take a look at the beginning of the file:

mlr --c2p head ./athletes.csv

The resulting output will be something like the following:

id        name                nationality sex    date_of_birth height weight sport      gold silver bronze info
736041664 A Jesus Garcia      ESP         male   1969-10-17    1.72    64     athletics    0    0      0      -
532037425 A Lam Shin          KOR         female 1986-09-23    1.68    56     fencing      0    0      0      -
435962603 Aaron Brown         CAN         male   1992-05-27    1.98    79     athletics    0    0      1      -
521041435 Aaron Cook          MDA         male   1991-01-02    1.83    80     taekwondo    0    0      0      -
33922579  Aaron Gate          NZL         male   1990-11-26    1.81    71     cycling      0    0      0      -
173071782 Aaron Royle         AUS         male   1990-01-26    1.80    67     triathlon    0    0      0      -
266237702 Aaron Russell       USA         male   1993-06-04    2.05    98     volleyball   0    0      1      -
382571888 Aaron Younger       AUS         male   1991-09-25    1.93    100    aquatics     0    0      0      -
87689776  Aauri Lorena Bokesa ESP         female 1988-12-14    1.80    62     athletics    0    0      0      -

Optional: Clean Up The File

The CSV file has a few fields we don’t need. Let’s clean it up by removing the info, id, weight, and date_of_birth columns.

mlr --csv -I cut -x -f id,info,weight,date_of_birth athletes.csv

Note that the -I flag modifies the file in place instead of printing the result to the console.

Now we can move to our original problem: we want to find who won the highest number of medals. We have how many of each medal (bronze, silver, and gold) the athletes won, but not the total number of medals per athlete.

Let’s compute a new field called medals, which corresponds to this total number (bronze, silver, and gold added together).

mlr --c2p put '$medals=$bronze+$silver+$gold' then head ./athletes.csv

It gives you the following output:

name                 nationality   sex      height  sport        gold silver bronze medals
A Jesus Garcia       ESP           male     1.72    athletics      0    0      0      0
A Lam Shin           KOR           female   1.68    fencing        0    0      0      0
Aaron Brown          CAN           male     1.98    athletics      0    0      1      1
Aaron Cook           MDA           male     1.83    taekwondo      0    0      0      0
Aaron Gate           NZL           male     1.81    cycling        0    0      0      0
Aaron Royle          AUS           male     1.80    triathlon      0    0      0      0
Aaron Russell        USA           male     2.05    volleyball     0    0      1      1
Aaron Younger        AUS           male     1.93    aquatics       0    0      0      0
Aauri Lorena Bokesa  ESP           female   1.80    athletics      0    0      0      0
Ababel Yeshaneh      ETH           female   1.65    athletics      0    0      0      0

Let’s sort by the highest number of medals by adding a sort verb to the chain.

mlr --c2p put '$medals=$bronze+$silver+$gold' \
    then sort -nr medals \
    then head ./athletes.csv

The resulting output will be the following:

name              nationality  sex     height  sport       gold silver bronze medals
Michael Phelps    USA          male    1.94    aquatics      5    1      0      6
Katie Ledecky     USA          female  1.83    aquatics      4    1      0      5
Simone Biles      USA          female  1.45    gymnastics    4    0      1      5
Emma McKeon       AUS          female  1.80    aquatics      1    2      1      4
Katinka Hosszu    HUN          female  1.75    aquatics      3    1      0      4
Madeline Dirado   USA          female  1.76    aquatics      2    1      1      4
Nathan Adrian     USA          male    1.99    aquatics      2    0      2      4
Penny Oleksiak    CAN          female  1.86    aquatics      1    1      2      4
Simone Manuel     USA          female  1.78    aquatics      2    2      0      4
Alexandra Raisman USA          female  1.58    gymnastics    1    2      0      3

Restrict to the top 5 by adding -n 5 to your head operation.

mlr --c2p put '$medals=$bronze+$silver+$gold' \
    then sort -nr medals \
    then head -n 5 ./athletes.csv

You will end up with the following output:

name             nationality  sex      height  sport        gold silver bronze medals
Michael Phelps   USA          male     1.94    aquatics       5     1      0      6
Katie Ledecky    USA          female   1.83    aquatics       4     1      0      5
Simone Biles     USA          female   1.45    gymnastics     4     0      1      5
Emma McKeon      AUS          female   1.80    aquatics       1     2      1      4
Katinka Hosszu   HUN          female   1.75    aquatics       3     1      0      4

As a final step, let’s convert this into a JSON file with the --c2j option.

Here is our final command:

mlr --c2j put '$medals=$bronze+$silver+$gold' \
    then sort -nr medals \
    then head -n 5 ./athletes.csv > top5.json

With a single command, we've computed new data, sorted the result, truncated it, and converted it to JSON. The resulting top5.json looks like this:

[
  {
    "name": "Michael Phelps",
    "nationality": "USA",
    "sex": "male",
    "height": 1.94,
    "weight": 90,
    "sport": "aquatics",
    "gold": 5,
    "silver": 1,
    "bronze": 0,
    "medals": 6
  }
  // Other entries omitted for brevity.
]

Bonus: If you wanted to show the top 5 women, you could add a filter.

mlr --c2p put '$medals=$bronze+$silver+$gold' then sort -nr medals then filter '$sex == "female"' then head -n 5 ./athletes.csv

You would end up with the following output:

name              nationality   sex       height   sport        gold silver bronze medals
Katie Ledecky     USA           female    1.83     aquatics       4    1      0      5
Simone Biles      USA           female    1.45     gymnastics     4    0      1      5
Emma McKeon       AUS           female    1.80     aquatics       1    2      1      4
Katinka Hosszu    HUN           female    1.75     aquatics       3    1      0      4
Madeline Dirado   USA           female    1.76     aquatics       2    1      1      4

Conclusion

I hope this article showed you how versatile Miller is and gave you a taste of the power of command-line tools. Feel free to scour the internet for the best CLI next time you find yourself writing yet another random script.

Further Reading on Smashing Magazine

JSON Minify Full Guideline: Easy For You

Hi everyone! In this article, I will walk you through a complete guide to JSON minification; if you have any questions about JSON minify, reading the full article should resolve them.

What Is JSON Minify?

JSON minify refers to the process of removing unnecessary whitespace and formatting characters from a JSON data structure without changing the data it represents.
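
For instance, one simple way to minify a JSON string in JavaScript (just an illustrative sketch, not the only approach) is to parse it and re-serialize it without any indentation:

// Parsing and re-serializing drops every formatting space and newline.
const pretty = `{
  "name": "Michael Phelps",
  "sport": "aquatics",
  "medals": 6
}`;

const minified = JSON.stringify(JSON.parse(pretty));
console.log(minified);
// {"name":"Michael Phelps","sport":"aquatics","medals":6}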

Build Your First App with JavaScript, Node.js, and DataStax Astra DB

This is the first of a three-part app development workshop series designed to help developers understand technologies like Node.js, GraphQL, React, Netlify, and JavaScript to kickstart their app development portfolio. In this post, we’ll cover the fundamental concepts of website applications and introduce DataStax Astra DB as your free, fast, always-on database based on Apache Cassandra®.

In the U.S. we spend almost 88% of our mobile internet time buried in apps like Facebook, Instagram, TikTok, and games. With nearly a million new apps released each year and 218 billion app downloads in 2020, learning how to build them is an essential step in any front-end developer’s career.

Blasting Off Into Stargate Using HTTPie

As a DataStax Developer Advocate, my job is to help our amazing teams provide you with the best possible experience with Cassandra and our products.

DataStax Astra is built on Apache Cassandra. In addition to great documentation, Astra offers a robust free tier that can run small production workloads or pet projects, or just let you play, all for free with no credit card required. Cassandra can be tricky for hardcore SQL developers because it uses a slightly different query language (CQL), but when you get Astra, Stargate is there to let you interact with your data through APIs. Our open-source Stargate product provides REST, GraphQL, and schemaless document APIs in addition to native language drivers. If you like these APIs but don’t want to use our products, that’s fine! Stargate is completely open source, and you can run it on your own system.

How Can We Read a JSON File in Java?

Read JSON File in Java

To read a JSON file in Java, we first need to know what a JSON file is.

JSON

JSON stands for “JavaScript Object Notation.” It is a simple format for storing and sending data, commonly used to transport data from a server to a web page, and it is widely used in web development.

Arduino Data on MQTT

MQTT is an OASIS standard messaging protocol for the Internet of Things (IoT) and one of the protocols supported by akenza. 
It is designed as an extremely lightweight publish/subscribe messaging protocol that is ideal for connecting remote devices with a small code footprint and minimal network bandwidth. MQTT is used in various industries. 
To run this project, we used akenza as an IoT platform, as it runs an open-source MQTT broker from Eclipse Mosquitto. By using a combination of MQTT and API functionalities, we have been able to automatically create Digital Twins for our device.

As hardware, we have chosen an Arduino Uno WiFi Rev2.

1. Configure the Arduino Device

1.1 Set up the WiFi Connection

To enable the Arduino Uno WiFi to connect to WiFi, we used the WiFiNINA library, which is available in the Library Manager of the Arduino IDE.

1.1.1 Manage Username and Password

To manage the credentials, we have created an additional header file called arduino_secrets.h:
 
#define SECRET_SSID "<your network SSID>"
#define SECRET_PASS "<your network password>"


1.1.2 WiFi Connection Code

The code to connect the Arduino to WiFi is shown below:
 
#include <WiFiNINA.h>
#include "arduino_secrets.h"

///////please enter your sensitive data in the Secret tab/arduino_secrets.h
char ssid[] = SECRET_SSID;     // your network SSID (name)
char pass[] = SECRET_PASS;    // your network password (use for WPA, or use as key for WEP)

WiFiClient wifiClient;

void setup() {
  //Initialize serial and wait for port to open:
  Serial.begin(9600);
  while (!Serial) {
    ; // wait for serial port to connect. Needed for native USB port only
  }

  // attempt to connect to Wifi network:
  Serial.print("Attempting to connect to WPA SSID: ");
  Serial.println(ssid);
  while (WiFi.begin(ssid, pass) != WL_CONNECTED) {
    // failed, retry
    Serial.print(".");
    delay(5000);
  }

  Serial.println("You're connected to the network");
  Serial.println();
}

void loop()
{}


1.2 Set up the MQTT Connection to akenza

For security reasons, akenza only supports authenticated connections via MQTT. For this, we have chosen the PubSubClient library to manage our MQTT connection, as it lets us use a username and password in our connection settings.
 
#include <PubSubClient.h>

char host[] = "mqtt.akenza.io";
char clientid[] = "Arduino";
char username[] = "<copy from Akenza Device Api configuration>";
char password[] = "<copy from Akenza Device Api configuration>";
char outTopic[] = "<copy from Akenza Device Api configuration>";

// Callback invoked by PubSubClient whenever a message arrives
// on a subscribed topic.
void callback(char* topic, byte* payload, unsigned int length) {
  Serial.print("Message received on ");
  Serial.println(topic);
}

// wifiClient is the WiFiClient defined in the WiFi connection sketch above.
PubSubClient client(host, 1883, callback, wifiClient);

void setup() {
  // Connect using the client ID and the credentials copied from akenza.
  if (client.connect(clientid, username, password)) {
    Serial.print("Connected to ");
    Serial.println(host);
    Serial.println();

    boolean r = client.subscribe(outTopic);
    Serial.print("Subscribed to ");
    Serial.println(outTopic);
    Serial.println();
  }
  else {
    // connection failed
    // client.state() will provide more information
    // on why it failed.
    Serial.print("Connection failed: ");
    Serial.println(client.state());
    Serial.println();
  }
}