How To Fax Over the Internet in 6 Simple Steps

Want to get started with WiFi faxing immediately? RingCentral is the best online fax service for most businesses. Click here to start a 30-day free trial of RingCentral now.

Many believe faxing to be outdated and obsolete. And although the fax machine now has its place in technological nostalgia, the need to send documents securely over the internet remains. The good news is that you can now leverage modern technology to fax without a fax machine.

Faxes are still the go-to method of sending important documents quickly and securely. With the internet, you can now send them directly from your computer.

In this article, we give you six simple steps to get started.


The 12 Best Online Fax Services for Sending Faxes Over the Internet

Every online fax service is slightly different, and the one that’s right for you will depend on your communication needs. To make the choice easier, we’ve tried and reviewed the 12 best online fax services for business.

  • RingCentral Fax — Best overall
  • eFax — Most popular online fax service
  • Ooma — Best online fax and phone service bundle
  • MetroFax — Best mobile app for online faxing
  • Nextiva — Best standalone online fax service
  • iFax — Best enterprise fax solution
  • Fax.Plus — Best for offices that fax occasionally
  • MyFax — Best online fax service for personal use
  • HelloFax — Best for small teams and cloud storage integration
  • FaxZero — Best for faxing a few pages
  • Sfax — Best HIPAA-compliant online fax service
  • Biscom 123 — Best email-to-fax service

Send Faxes Over the Internet in 6 Easy Steps

Without the need for paper, ink, a phone line, or a fax machine, sending faxes has never been easier. Here are six steps to take to start sending faxes via the internet.

  1. Choose the best online faxing service for your needs
  2. Set up the service with your chosen provider
  3. Get up-to-date on online faxing best practices
  4. Compose your documents and prepare them for sending
  5. Start sending and receiving faxes
  6. Manage your sensitive documents and properly store your records

Step 1: Choose the best online faxing service for your needs

If you want to send faxes over the internet, there are several different options for doing so. But they aren’t all created equal.

There are several reasons to use an online faxing service:

  • Ease of Use: You don’t need a fax machine, phone line, printer, or extra hardware. All you need is a subscription to your chosen software provider and a WiFi connection, and you’re ready to start.
  • Security: Online faxing is a secure and reliable way to send documents online. The data transmission is encrypted, which makes it almost impossible for hackers and unauthorized users to intercept your information.
  • Cost Savings: Using an online faxing service saves money on hardware and recurring phone line costs. You also save time, as you don’t have to wait for the document to print out and be physically sent via a fax machine.
  • Going Green: You can send and receive faxes without paper or ink. Since it is much less resource-intensive than traditional methods, online faxing is a great way to reduce your environmental footprint.

But the right faxing service for you will depend on the size of your business and its needs. Take some time to research the different options available and decide which one best suits your requirements.

RingCentral Fax homepage
RingCentral makes it easy to start sending faxes over the internet today.

To choose the right vendor for your needs, consider the following:

  • Assess your faxing needs. Knowing the volume of documents you need to send and the level of security required is vital. If you send sensitive documents (e.g., healthcare records), you’ll need a HIPAA-compliant service like Sfax. You’ll also want to consider the industry you’re in and the specific features that are important for your business.
  • Look at the features offered. Different providers have different features like cloud storage integration, mobile app support, and recurring faxes. If your company already uses VoIP phone services, you may want a fax/phone bundle like Ooma.
  • Compare costs. Prices vary widely between vendors, so shop around to find the best deal for your budget. If you’ll only fax occasionally, a per-fax pricing plan may be enough. If you fax more than a few times a month, you’re probably better off paying for a monthly subscription.
  • Look at multiple vendors. It’s important to shop around and evaluate multiple providers to find the best fit for your business.
  • Read user reviews. Online faxing services can be hit or miss, so read what other customers have to say before making your decision.

Once you’ve chosen the right provider, you can set up your service and send faxes in no time.

Step 2: Set up the service with your chosen provider

Once you’ve decided on a provider, setting up the service is easy. All you need to do is sign up for an account and configure the settings.

Most providers offer both a free trial and a paid plan, so you can test out the service before committing to it.

If you opt for a paid plan, the setup process usually involves verifying your credit card information and signing up for a subscription. Most providers also allow you to test out their services with a limited number of faxes during the trial period.

When you sign up for a service, you will also get a dedicated fax number, which you can use to send and receive faxes.

Once your account is set up, you can access the provider’s dashboard and start sending faxes over the internet.

Step 3: Get up-to-date on online faxing best practices

When it comes to online faxing, there are some best practices that you should be aware of.

Make sure that your computer is protected by up-to-date anti-virus software. As the documents you’re sending and receiving are sensitive, you don’t want them to be exposed to malware or viruses.

When you’re sending documents, make sure to encrypt them. This adds an extra layer of security and makes it much harder for unauthorized users to access your information. One way to do this is to use a file format that supports password protection and encryption, such as PDF.

Delete any sensitive documents that you no longer need. As online faxing does not require a physical paper trail, it’s important to make sure that all sensitive information is securely deleted once you no longer need it.

Be aware of security risks. Online faxing is a convenient way to send documents, but missent information, lost data, and hacking can all be real risks. To protect your data, make sure to keep track of who you’re sending documents to and confirm with the recipient that they’ve received the documents.

Have a backup plan for emergencies. Make sure that you have a backup plan in case your provider experiences technical issues or outages.

Step 4: Compose your documents and prepare them for sending

Creating your faxable document is easy. Most fax services allow you to upload documents from your computer or drag and drop them into the online service.

Depending on the document type, there are several different ways you can do this:

  • Branded PDF Files: Larger companies have branded PDF files that they use as templates for faxes, which you can easily upload to your fax service.
  • Word Documents: Word documents are the simplest type of document to send and receive. Simply save your file as a PDF and upload it to the fax service.
  • Images: If you need to send an image, make sure it is in a high-resolution format like JPG or PNG.
  • Fill-In Forms: Forms that require filling in can be scanned and uploaded as a PDF to the fax service.
  • Private Records: Especially for sensitive documents (e.g., medical records, tax forms), make sure to encrypt the document before sending.

The process for composing your faxable documents will be different depending on what exactly they are. But in general, you should make sure that the documents are small enough to be sent as one fax.

If you’re sending a document from an online source such as Google Docs or Microsoft Word, you should make sure to download it in a PDF format first. This will help ensure the integrity of your files and reduce file size.

Once you’ve downloaded the document, you can then prepare it for sending by adding a cover page. A cover page is like an envelope for your fax and will help to ensure that the recipient knows what documents you’re sending them before they even open the file.

Step 5: Start sending and receiving faxes

After you have prepared your documents, you can start sending and receiving faxes.

Faxing wirelessly and faxing over the internet are fairly similar—both processes involve sending documents through the web. When you fax over the internet, your documents are transmitted via a secure internet protocol rather than through a physical phone line like traditional faxes.

There are two ways to send faxes over the internet:

  • On your mobile phone
  • Via a computer or laptop

If you choose to send a fax from your mobile phone, you can use your provider’s iPhone or Android app to do so. Alternatively, if you’d rather send a fax from your laptop or computer, there are several online services that offer faxing solutions. Usually, you will also have the option of sending your fax via email.

In any case, you will need a secure WiFi connection to send the fax successfully.

When you’re ready to send a fax, all you need to do is enter the recipient’s fax number, upload your documents, and hit send. Depending on your provider, you may even be able to send multiple files in one instance.

If you’re expecting faxes from other people, simply provide them with your online fax number and let them know where they can find the documents they need to send.

Once a fax has been successfully sent, you’ll receive an email notification. This is especially helpful if you’re trying to keep track of important documents and make sure that they have reached the intended recipient.

And once a fax has been received, you can easily view or print it from your online account.

Step 6: Manage your sensitive documents and properly store your records

Data privacy is important to your company and the people you work with. Thus, using a strong encryption system is essential to keep your sensitive documents secure. This will help protect the information from unauthorized access and prevent any data leakage or hacking attempts.

There are several different types of encryption you can use, such as AES, RSA, and PGP. Each of these systems protects your documents with cryptographic keys, so only authorized parties can read them.

Also, make sure to store your faxed records properly. Depending on the nature of the information in them, you may need to store them for a set amount of time. You can easily keep track of all your records using an online document management system. This will help you easily access them should you ever need to refer back to them in the future.

You should also organize them so you can easily find them when you need them.

Final Thoughts About Sending Faxes Over the Internet

Although the fax machine itself is a thing of the past, fax technology remains commonplace in the modern office—especially with the rise of digital documents. This is why learning how to send faxes over the internet can help you stay organized and efficient.

When setting up online faxing for your business, make sure to choose a reliable provider that suits your specific needs and take the necessary steps to protect your sensitive documents. And most importantly, make sure to follow the steps outlined above for a successful experience.

399: Data Munging

There was a small problem in our database. Some JSON data we kept in a column would sometimes have a string instead of an integer. Like {"tabSize": "5"} instead of {"tabSize": 5}, or the like. The investigation into how that happened turned up silly stuff like not calling parseInt on a value as it came off a <select> element in the DOM. This problem never surfaced because our Rails app just papered over it. But we’re moving our code to Go, and when you parse JSON in Go, the struct type you parse into needs to match those types perfectly, or else unmarshaling fails. We found that our Go code was working around this in all sorts of ways that felt sloppy and inconsistent.
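The root-cause fix on the front end is tiny. As a sketch (the actual form code isn’t shown in these notes, so the function name here is hypothetical), coercing the <select> value before it lands in the settings JSON looks something like:

```javascript
// A <select> element's value is always a string, e.g. "5".
// Coerce it to a number before writing it into the settings JSON.
function readTabSize(selectValue) {
  const tabSize = parseInt(selectValue, 10);
  // Fall back to a sane default if the value isn't numeric.
  return Number.isNaN(tabSize) ? 2 : tabSize;
}

const settings = { tabSize: readTabSize("5") };
// settings is now { tabSize: 5 }, not { tabSize: "5" }
```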

One way to fix this? Fix any bad data going into the DB, then write a script to fix all the data in the DB. This is exactly the approach I took at first, and it would have absolutely fixed this problem.

But Alex took a step back, looked at the problem a bit more widely, and we ended up building some tools that helped us solve this problem and future problems like it. For one, we built a more permissive JSON parser that would not fail on something as easy to fix as a string-as-int problem. This worked by way of some Go reflection that could tell what types the data was supposed to be and coerce them if possible. But what should a value fall back to if it’s not salvageable? That was another tool we built: one that sets the default values of Go structs to something other than the zero values of their types. And since this is all in the realm of data validation, we built yet another tool to validate the data in Go structs against constraints, so we can always keep the data they contain valid.
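The tools described above are written in Go, but the coercion idea translates anywhere. Here is a sketch in JavaScript of the same rule—walk the parsed data against an expected shape, convert numeric strings to numbers, and fall back to a default when a value isn’t salvageable. This is our own illustration of the concept, not CodePen’s actual code:

```javascript
// Coerce values in `data` to match the types of `defaults`.
// Numeric strings become numbers; unsalvageable values fall back
// to the default for that key.
function coerce(data, defaults) {
  const result = { ...defaults };
  for (const key of Object.keys(defaults)) {
    const expected = typeof defaults[key];
    const value = data[key];
    if (typeof value === expected) {
      result[key] = value;
    } else if (expected === "number" && typeof value === "string") {
      const n = parseInt(value, 10);
      if (!Number.isNaN(n)) result[key] = n;
    }
  }
  return result;
}

coerce({ tabSize: "5" }, { tabSize: 2 });    // → { tabSize: 5 }
coerce({ tabSize: "oops" }, { tabSize: 2 }); // → { tabSize: 2 }
```

The Go version does the equivalent with reflection, since it has to discover the expected types from the struct definition at runtime.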

Once all these tools were in place, the new script to fix the data was much easier to write. Just call the safe JSON function to fix the data and put it back. And the result is a cleaned up code base and tools we can use for data safety for the long term.

Time Jumps

  • 00:29 How can we fix the problem forever?
  • 03:53 Chris becomes a Go-pher
  • 15:04 Building rules that will work for anything, not just this situation
  • 18:54 Setting up proper testing is huge
  • 20:19 Sponsor: Intelligent Demand
  • 21:06 Testing for pointers
  • 25:18 Using GORM
  • 27:08 Supabase postgresDB

Sponsor: Split

This podcast is powered by Split. The Feature Management & Experimentation Platform that reimagines software delivery. By attaching insightful data to feature flags, Split frees you to quickly deploy, measure, and learn the impact of every feature you release. So you can safely deliver features up to 50 times faster and exhale. What a Release.

Start raising feature flags (and lowering stress). Visit Split.io/CodePen for a free trial.

The post 399: Data Munging appeared first on CodePen Blog.

Asynchronous HTTP Requests With RxJava

Let’s say we develop a service that has to interact with other components. Unfortunately, those components are slow and blocking.

It may be a legacy service that is very slow or some blocking API that we must use. Regardless, we have no control over it. In this post, we will call two APIs. One of them will block for two seconds and another for five seconds.

Implementation of Python Generator Functions: A Complete Guide

Have you ever encountered memory issues while working with a very large data set, or working with an infinite sequence of data? Converting the objects to iterable ones helps here, which can be easily done by the Python generator functions. Since PEP 255, when generators were first introduced, Python has incorporated them heavily. You can declare a function that acts like an iterator using generator functions in an easy and efficient way.

In this article, we will discuss what iterator objects in Python are, how they can be declared using the Python generator functions and expressions, and why and where they are used and preferred.

Reduce Data Breaches by Adding a Data Privacy Vault to Your HealthTech App Architecture

With the rising adoption of healthcare apps and wearable devices that gather medical data, the importance of data privacy for HealthTech companies is greater than ever. Companies that work with PHI must ensure they’re HIPAA-compliant, lest they face fines, lawsuits, or closures. 

If you’re a developer or architect in the HealthTech field, you know that HIPAA is only a starting point if you want to provide truly robust privacy protections for your users.

Architectural Miscalculation and Hibernate Problem “Type UUID but Expression Is of Type Bytea”

Nowadays, it is difficult to find a service that works on its own and does not communicate with other services, especially modern systems that are built on a microservice architecture. In this regard, there are difficulties in obtaining data from one or another service since not all the data necessary for the operation of the service is stored in one database, and you cannot simply make a "join." I want to talk about one of these problems and its solution in this article.

Case Description

A huge number of projects use Spring + Hibernate. This bundle gives an advantage in development speed, reducing the amount of code and blah blah blah. But there are also disadvantages.

Landing Your Dream Job with Lensa in Graphic Design

Landing your dream job in graphic design requires more than just a creative eye and technical skills. You need to have the right skills, an online presence, and a portfolio that demonstrates your abilities. In this article, we will discuss how to make yourself a more attractive candidate for a graphic design position by honing […]

Ultimate Guide to FaceIO

Facial recognition technology called FaceIO is employed for online user identification. For increased security during online transactions and access to sensitive information, it can be linked to websites and apps to verify a person's identity using their distinctive facial features. It can be used in addition to more established authentication techniques, such as passwords, for increased security. It's crucial to consider any potential drawbacks, such as bias, errors, and privacy issues. 

Key Features of FaceIO

FaceIO may include the following features that could make it a viable tool for online authentication: 

How To Remove Yelp Reviews in 6 Simple Steps

WebiMax is the best online reputation management company to remove Yelp reviews, helping you maintain a positive public image. Get your free proposal to get started today.

As a business owner, online reviews can make or break your business, as many customers take them as gospel. Negative reviews are detrimental to your business’s survival, especially when they appear on a popular platform like Yelp.

Unfortunately, you can’t simply log in to your business account and delete negative reviews—a review has to meet certain conditions before Yelp will remove it.

If your business has received negative Yelp reviews you believe are unfair, fake, or violate Yelp’s community guidelines, we’ll walk you through the steps to handle them professionally and maintain a positive online presence.


The 11 Best Online Reputation Management Companies for Removing Yelp Reviews

Removing Yelp reviews can be tedious, so why not hire professionals to take care of it? Here are some of the best online reputation management companies specializing in social reputation management that can help you get that perfect Yelp rating—faster.

Remove Yelp Reviews in 6 Easy Steps

Technically, a Yelp review can be removed—but not by you. Only Yelp can take reviews off its platform.

WebiMax is a notable online reputation management company that can help. After analyzing and identifying reviews violating Yelp’s guidelines, the company will submit your case to Yelp on your behalf, facilitating removal.

Here’s a step-by-step breakdown of how to remove Yelp reviews:

  1. Claim your Yelp business profile
  2. Understand the grounds for removal
  3. Submit a removal request to Yelp
  4. Politely respond to the reviewer
  5. Know when to hire an online reputation management company
  6. Prevent future negative reviews

Step 1: Claim Your Yelp Business Profile

If you haven’t already claimed your business profile on Yelp, follow these steps:

  1. Go to Yelp for Business.
  2. Type in your business name. If it’s listed, click on it. If it’s not listed, click Add to Yelp for free.
Yelp for Business signup page
It’s easy to create a business account on Yelp.
  3. Enter your business information and email address to create your account.
  4. You’ll get an email or call to the number listed on your page with the verification code. Use the code to verify your Yelp account.

If you’ve already claimed your account, move on to Step 2.

Step 2: Understand the Grounds for Removal

The easiest way to remove a Yelp review is by proving it’s “questionable.” 

How do you do that? By showing the Yelp team that the review does one of the following:

  • Violates Yelp’s content guidelines. Yelp doesn’t approve content that’s inappropriate, irrelevant, promotional, or violates someone’s privacy or intellectual property. This includes the following:
A dropdown of reasons for why you want to report a Yelp review
Review the reasons why you might want to report any review for your business to see which fits your situation best.
  • Is fake. Yelp is quick to detect and remove spammy reviews (content posted multiple times from different accounts or several reviews from the same IP address). It also dismisses impersonations (reviews expressing views of someone else other than yourself) and conflicts of interest (reviewing your own business).
  • Is “not recommended.” Yelp has recommendation software that auto-filters reviews based on factors such as unreliability, poor quality, and questionable user activity. If a negative review on your Yelp profile is deemed not recommended, the platform filters it out. Note that these reviews aren’t displayed on your profile and hence don’t affect your rating.

Step 3: Submit a Removal Request to Yelp

If you find the negative review meets Yelp’s grounds for removal, you can submit a request to remove the review.

Here’s how to go about it:

  1. Find the review. Click Reviews on the left-hand side of your business page dashboard. Sort through all past reviews to locate the one you want to be removed.
  2. Report the review. Click on the three dots on the right-hand side of the unwanted review. From the list of displayed options, select Report Review. Note: if you’re reporting the review on a desktop, the Report Review button will look like a flag. Click on it.
  3. Mention grounds of removal. Select the appropriate reason to report the review and provide any additional information if requested. Provide necessary evidence, such as a screenshot or link to the review, to support your claim.

After submitting a removal request, Yelp moderators will evaluate the unwanted review to determine if it indeed violates their guidelines. Be patient as it can take a few days before you see results. 

You may have to follow up with Yelp’s service team. If you don’t receive a response within a reasonable timeframe (a good rule of thumb is after a week), email them again to ensure your request is being considered.

Step 4: Politely Respond to the Reviewer

The Yelp moderator may decline your request. 

In this case, your next step should be to message the reviewer privately to work things out. The idea is to make amends, help resolve the problem, and take charge of the situation so that the reviewer themselves edit, update, or remove their review. The following are some tips to strategically respond to a negative Yelp review:

  • Be prompt. Don’t ignore negative reviews, and try to respond within 24-72 hours of posting. Avoid being defensive or retaliating. You don’t want to appear like an uncaring company that doesn’t value its customers. 
  • Be personable and positive: Address the reviewer by their name and acknowledge their concerns. Even if it’s not your fault, apologize with the intention of preventing escalation. Then respond with useful information that shows you care about their feedback.
  • Be proactive. Ensure your response provides the reviewer with an effective solution that resolves their issue quickly. If there has been a miscommunication or misunderstanding, politely let them know. 

Here’s a great example of a restaurant owner’s response to a dissatisfied customer:

A one-star review from a customer with a response from the business owner apologizing and offering a free meal voucher
Responding to a customer review shows you care about the customer experience and want to improve.

Remember, other Yelp users may potentially see your response. See this as an opportunity to bring in more business, and not to drive customers away with an overly aggressive or defensive retort. And your proactive approach may impress the reviewer enough for them to remove or change their review.

Step 5: Know When to Hire an Online Reputation Management Company

If you don’t agree with Yelp’s final decision and have had no luck reaching out to the reviewer, you can ask for a re-review for further consideration. 

Admittedly, Yelp moderators will likely agree with the previous ruling. But if you make a strong and compelling case, they may reverse their position. Since this will be your only chance to make an argument, we recommend bringing in experts to handle this for you.

WebiMax reputation management webpage featuring snippets on how its service can help stop bad reviews, manage reviews on a dashboard, and get more reviews
WebiMax is a top online reputation management company and can help with far more than just removing Yelp reviews.

WebiMax has been a leader in online reputation management for over a decade. Not only can the company help you bury negative reviews with positive ones, but it can also remove them altogether. 

The company will give you a report showing all the negative content on your Yelp profile, along with the estimated time to remove each one. If this sounds like something you’re interested in, you can get a quote and have WebiMax fix your online reputation.

Step 6: Prevent Future Negative Reviews

While not technically part of removing a Yelp review that already exists, you should have an action plan to prevent future negative reviews. This will help you maintain a positive image in the long run.

Here are some tips to enhance your reputation management strategy:

  • Take a long, hard look at your product or service. Brainstorm ways to make them better and deliver better customer experiences.
  • Actively work to improve your customer service to resolve bad experiences before they’re published online.
  • Encourage existing customers to leave positive reviews on Yelp and other social platforms.
  • If you decide to hire an online reputation management company, have them regularly monitor your online reputation and catch negative reviews early.

Final Thoughts About How to Remove Yelp Reviews

The first step to removing a Yelp review is understanding the grounds for removal. If you find a negative review doesn’t meet Yelp’s community guidelines, submit a removal request or hire an online reputation management company to do it on your behalf. Alternatively, you can respond to the reviewer acknowledging their experience and helping them resolve their problem.

Quick Pattern-Matching Queries in PostgreSQL and YugabyteDB

This tutorial shows how to optimize pattern-matching requests in PostgreSQL and YugabyteDB by applying several indexing strategies. Since YugabyteDB is built on PostgreSQL source code, the strategies work for both single-server PostgreSQL deployments and multi-node YugabyteDB clusters.

Loading Sample Data

Let’s begin by building out an example. Assume you have a table that stores user profiles, and one of the columns is the user’s first name. Let’s find all the users whose first name starts with An. For that, we can use a pattern-matching query like WHERE firstname LIKE 'An%'.

Vector Databases Are Reinventing How Unstructured Data Is Analyzed

Unstructured data is a complex challenge but a huge opportunity in any organization’s pursuit of data excellence. Unfortunately, it often remains untouched due to the complexity of sorting, managing, and organizing it. Interestingly, OpenAI’s ChatGPT has emerged as a winner in manipulating unstructured data into a structured format. However, ChatGPT isn’t the only technology making inroads into streamlining the analysis of unstructured data: enter vector databases. 

Difference between structured and unstructured data.

Measuring Page Speed With Lighthouse

Page speed matters more than you think. According to research by Google, the probability of users staying on your site plummets as loading slows: as load time goes from one second to ten, the probability of a bounce increases by a whopping 123%. In other words, speed equals revenue.

How can we ensure that our pages are loading at top speed? The answer is to measure them regularly with Lighthouse and CI/CD.

Measuring Page Speed With Lighthouse

Lighthouse is a page speed benchmark tool created by Google. It runs a battery of tests against your website and produces a report with detailed advice to improve performance.

Content Security Policy / PayPal

Anyone here had experience with PayPal Content Security Policy?
I tried to make a dummy purchase using Chrome Developer tools and this message is written in the bottom box: Content Security Policy blocks inline execution of scripts and stylesheets
(See attachment)
There must be a line of code to go in the <head> lurking somewhere, PayPal doesn't want to make it simple.
So, does anyone have any knowledge of this CSP? It's certainly preventing any PayPal sales that's for sure.

CSP1.png

Caching Data in SvelteKit

My previous post was a broad overview of SvelteKit where we saw what a great tool it is for web development. This post will fork off what we did there and dive into every developer’s favorite topic: caching. So, be sure to give my last post a read if you haven’t already. The code for this post is available on GitHub, as well as a live demo.

This post is all about data handling. We’ll add some rudimentary search functionality that will modify the page’s query string (using built-in SvelteKit features), and re-trigger the page’s loader. But, rather than just re-query our (imaginary) database, we’ll add some caching so re-searching prior searches (or using the back button) will show previously retrieved data, quickly, from cache. We’ll look at how to control the length of time the cached data stays valid and, more importantly, how to manually invalidate all cached values. As icing on the cake, we’ll look at how we can manually update the data on the current screen, client-side, after a mutation, while still purging the cache.

This will be a longer, more difficult post than most of what I usually write since we’re covering harder topics. This post will essentially show you how to implement common features of popular data utilities like react-query; but instead of pulling in an external library, we’ll only be using the web platform and SvelteKit features.

Unfortunately, the web platform’s features are a bit lower level, so we’ll be doing a bit more work than you might be used to. The upside is we won’t need any external libraries, which will help keep bundle sizes nice and small. Please don’t use the approaches I’m going to show you unless you have a good reason to. Caching is easy to get wrong, and as you’ll see, there’s a bit of complexity that’ll result in your application code. Hopefully your data store is fast, and your UI is fine allowing SvelteKit to just always request the data it needs for any given page. If it is, leave it alone. Enjoy the simplicity. But this post will show you some tricks for when that stops being the case.

Speaking of react-query, it was just released for Svelte! So if you find yourself leaning on manual caching techniques a lot, be sure to check that project out, and see if it might help.

Setting up

Before we start, let’s make a few small changes to the code we had before. This will give us an excuse to see some other SvelteKit features and, more importantly, set us up for success.

First, let’s move our data loading from our loader in +page.server.js to an API route. We’ll create a +server.js file in routes/api/todos, and then add a GET function. This means we’ll now be able to fetch (using the default GET verb) to the /api/todos path. We’ll add the same data loading code as before.

import { json } from "@sveltejs/kit";
import { getTodos } from "$lib/data/todoData";

export async function GET({ url, setHeaders, request }) {
  const search = url.searchParams.get("search") || "";

  const todos = await getTodos(search);

  return json(todos);
}

Next, let’s take the page loader we had, and simply rename the file from +page.server.js to +page.js (or .ts if you’ve scaffolded your project to use TypeScript). This changes our loader to be a “universal” loader rather than a server loader. The SvelteKit docs explain the difference, but a universal loader runs on both the server and also the client. One advantage for us is that the fetch call into our new endpoint will run right from our browser (after the initial load), using the browser’s native fetch function. We’ll add standard HTTP caching in a bit, but for now, all we’ll do is call the endpoint.

export async function load({ fetch, url, setHeaders }) {
  const search = url.searchParams.get("search") || "";

  const resp = await fetch(`/api/todos?search=${encodeURIComponent(search)}`);

  const todos = await resp.json();

  return {
    todos,
  };
}

Now let’s add a simple form to our /list page:

<div class="search-form">
  <form action="/list">
    <label>Search</label>
    <input autofocus name="search" />
  </form>
</div>

Yep, forms can target directly to our normal page loaders. Now we can add a search term in the search box, hit Enter, and a “search” term will be appended to the URL’s query string, which will re-run our loader and search our to-do items.

Search form

Let’s also increase the delay in our todoData.js file in /lib/data. This will make it easy to see when data are and are not cached as we work through this post.

export const wait = async amount => new Promise(res => setTimeout(res, amount ?? 500));

Remember, the full code for this post is all on GitHub, if you need to reference it.

Basic caching

Let’s get started by adding some caching to our /api/todos endpoint. We’ll go back to our +server.js file and add our first cache-control header.

setHeaders({
  "cache-control": "max-age=60",
});

…which will leave the whole function looking like this:

export async function GET({ url, setHeaders, request }) {
  const search = url.searchParams.get("search") || "";

  setHeaders({
    "cache-control": "max-age=60",
  });

  const todos = await getTodos(search);

  return json(todos);
}

We’ll look at manual invalidation shortly, but all this header does is tell the browser to cache these API calls for 60 seconds. Set this to whatever you want, and depending on your use case, stale-while-revalidate might also be worth looking into.
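For instance, a hypothetical variant of the header (the durations here are placeholders, not recommendations) could allow the browser to serve a stale copy while it refetches in the background:

```javascript
// Serve from cache for 60 seconds; after that, serve the stale copy for up
// to five more minutes while the browser revalidates in the background.
const cacheHeaders = {
  "cache-control": "max-age=60, stale-while-revalidate=300",
};
```

You’d pass this object to `setHeaders` exactly like the simpler header above.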

And just like that, our queries are caching.

Cache in DevTools.

Note: make sure you uncheck the checkbox that disables caching in DevTools.

Remember, if your initial navigation on the app is the list page, those search results will be cached internally to SvelteKit, so don’t expect to see anything in DevTools when returning to that search.

What is cached, and where

Our very first, server-rendered load of our app (assuming we start at the /list page) will be fetched on the server. SvelteKit will serialize and send this data down to our client. What’s more, it will observe the Cache-Control header on the response, and will know to use this cached data for that endpoint call within the cache window (which we set to 60 seconds in our example).

After that initial load, when you start searching on the page, you should see network requests from your browser to the /api/todos list. As you search for things you’ve already searched for (within the last 60 seconds), the responses should load immediately since they’re cached.

What’s especially cool with this approach is that, since this is caching via the browser’s native caching, these calls could (depending on how you manage the cache busting we’ll be looking at) continue to cache even if you reload the page (unlike the initial server-side load, which always calls the endpoint fresh, even if it did it within the last 60 seconds).

Obviously data can change anytime, so we need a way to purge this cache manually, which we’ll look at next.

Cache invalidation

Right now, data will be cached for 60 seconds. No matter what, after a minute, fresh data will be pulled from our datastore. You might want a shorter or longer time period, but what happens if you mutate some data and want to clear your cache immediately so your next query will be up to date? We’ll solve this by adding a query-busting value to the URL we send to our new /todos endpoint.
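The idea works because the browser’s HTTP cache keys on the full URL, so a fresh cache-busting value makes the next request a guaranteed cache miss. Here’s the URL construction as a standalone sketch (buildTodosUrl is an illustrative helper, not part of the project):

```javascript
// A new `cacheBust` value produces a URL the browser has never seen,
// forcing it past the cached response for the old URL.
function buildTodosUrl(search, cacheBust) {
  return `/api/todos?search=${encodeURIComponent(search)}&cache=${cacheBust}`;
}
```

So `buildTodosUrl("some post", 123)` yields `/api/todos?search=some%20post&cache=123`.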

Let’s store this cache busting value in a cookie. That value can be set on the server but still read on the client. Let’s look at some sample code.

We can create a +layout.server.js file at the very root of our routes folder. This will run on application startup, and is a perfect place to set an initial cookie value.

export function load({ cookies, isDataRequest }) {
  const initialRequest = !isDataRequest;

  const cacheValue = initialRequest ? +new Date() : cookies.get("todos-cache");

  if (initialRequest) {
    cookies.set("todos-cache", cacheValue, { path: "/", httpOnly: false });
  }

  return {
    todosCacheBust: cacheValue,
  };
}

You may have noticed the isDataRequest value. Remember, layouts will re-run anytime client code calls invalidate(), or anytime we run a server action (assuming we don’t turn off default behavior). isDataRequest indicates those re-runs, and so we only set the cookie if that’s false; otherwise, we send along what’s already there.

The httpOnly: false flag is also significant. This allows our client code to read these cookie values in document.cookie. This would normally be a security concern, but in our case these are meaningless numbers that allow us to cache or cache bust.

Reading cache values

Our universal loader is what calls our /todos endpoint. This runs on the server or the client, and we need to read that cache value we just set up no matter where we are. It’s relatively easy if we’re on the server: we can call await parent() to get the data from parent layouts. But on the client, we’ll need to use some gross code to parse document.cookie:

export function getCookieLookup() {
  if (typeof document !== "object") {
    return {};
  }

  return document.cookie.split("; ").reduce((lookup, v) => {
    const parts = v.split("=");
    lookup[parts[0]] = parts[1];

    return lookup;
  }, {});
}

const getCurrentCookieValue = name => {
  const cookies = getCookieLookup();
  return cookies[name] ?? "";
};

Fortunately, we only need it once.
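To see what that parsing produces, here’s the same reduce run in plain JavaScript against a hard-coded string standing in for document.cookie (the cookie values are made up):

```javascript
// Same logic as getCookieLookup, but taking the cookie string as an
// argument so it can run outside the browser.
function parseCookies(cookieString) {
  return cookieString.split("; ").reduce((lookup, v) => {
    const parts = v.split("=");
    lookup[parts[0]] = parts[1];
    return lookup;
  }, {});
}

const lookup = parseCookies("todos-cache=1700000000000; theme=dark");
// lookup["todos-cache"] === "1700000000000", lookup.theme === "dark"
```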

Sending out the cache value

But now we need to send this value to our /todos endpoint.

import { getCurrentCookieValue } from "$lib/util/cookieUtils";

export async function load({ fetch, parent, url, setHeaders }) {
  const parentData = await parent();

  const cacheBust = getCurrentCookieValue("todos-cache") || parentData.todosCacheBust;
  const search = url.searchParams.get("search") || "";

  const resp = await fetch(`/api/todos?search=${encodeURIComponent(search)}&cache=${cacheBust}`);
  const todos = await resp.json();

  return {
    todos,
  };
}

getCurrentCookieValue('todos-cache') has a check in it to see whether document exists (by checking its type). On the server it doesn’t, so the function returns nothing, and in that case we fall back to the value from our layout.

Busting the cache

But how do we actually update that cache busting value when we need to? Since it’s stored in a cookie, we can call it like this from any server action:

cookies.set("todos-cache", cacheValue, { path: "/", httpOnly: false });

The implementation

It’s all downhill from here; we’ve done the hard work. We’ve covered the various web platform primitives we need, as well as where they go. Now let’s have some fun and write application code to tie it all together.

For reasons that’ll become clear in a bit, let’s start by adding editing functionality to our /list page. We’ll add this second table row for each to-do:

import { enhance } from "$app/forms";
<tr>
  <td colspan="4">
    <form use:enhance method="post" action="?/editTodo">
      <input name="id" value="{t.id}" type="hidden" />
      <input name="title" value="{t.title}" />
      <button>Save</button>
    </form>
  </td>
</tr>

And, of course, we’ll need to add a form action for our /list page. Actions can only go in .server pages, so we’ll add a +page.server.js in our /list folder. (Yes, a +page.server.js file can co-exist next to a +page.js file.)

import { getTodo, updateTodo, wait } from "$lib/data/todoData";

export const actions = {
  async editTodo({ request, cookies }) {
    const formData = await request.formData();

    const id = formData.get("id");
    const newTitle = formData.get("title");

    await wait(250);
    updateTodo(id, newTitle);

    cookies.set("todos-cache", +new Date(), { path: "/", httpOnly: false });
  },
};

We’re grabbing the form data, forcing a delay, updating our todo, and then, most importantly, clearing our cache bust cookie.

Let’s give this a shot. Reload your page, then edit one of the to-do items. You should see the table value update after a moment. If you look in the Network tab in DevTools, you’ll see a fetch to the /todos endpoint, which returns your new data. Simple, and it works by default.

Saving data

Immediate updates

What if we want to avoid that fetch that happens after we update our to-do item, and instead, update the modified item right on the screen?

This isn’t just a matter of performance. If you search for “post” and then remove the word “post” from any of the to-do items in the list, they’ll vanish from the list after the edit since they’re no longer in that page’s search results. You could make the UX better with some tasteful animation for the exiting to-do, but let’s say we wanted to not re-run that page’s load function but still clear the cache and update the modified to-do so the user can see the edit. SvelteKit makes that possible — let’s see how!

First, let’s make one little change to our loader. Instead of returning our to-do items directly, let’s return a writable store (from svelte/store) containing our to-dos.

return {
  todos: writable(todos),
};

Before, we were accessing our to-dos on the data prop, which we do not own and cannot update. But Svelte lets us return our data in their own store (assuming we’re using a universal loader, which we are). We just need to make one more tweak to our /list page.

Instead of this:

{#each todos as t}

…we need to do this, since todos is itself now a store:

{#each $todos as t}

Now our data loads as before. But since todos is a writable store, we can update it.

First, let’s provide a function to our use:enhance attribute:

<form
  use:enhance={executeSave}
  method="post"
  action="?/editTodo"
>

This will run before a submit. Let’s write that next:

function executeSave({ data }) {
  const id = data.get("id");
  const title = data.get("title");

  return async () => {
    todos.update(list =>
      list.map(todo => {
        if (todo.id == id) {
          return Object.assign({}, todo, { title });
        } else {
          return todo;
        }
      })
    );
  };
}

SvelteKit calls this function with a data object containing our form data. We return an async function that will run after our edit is done. The docs explain all of this, but by doing this, we shut off SvelteKit’s default form handling that would have re-run our loader. This is exactly what we want! (We could easily get that default behavior back, as the docs explain.)

We now call update on our todos array since it’s a store. And that’s that. After editing a to-do item, our changes show up immediately and our cache is cleared (as before, since we set a new cookie value in our editTodo form action). So, if we search and then navigate back to this page, we’ll get fresh data from our loader, which will correctly exclude any edited to-do items that no longer match the search.
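The update we pass to the store is an ordinary immutable map over the list; here it is as a standalone function over plain data (the sample to-dos are made up):

```javascript
// Copy the matching to-do with its new title; leave every other item as-is.
// Loose equality (==) matches the original, since form data values are strings.
function updateTitle(list, id, title) {
  return list.map(todo => (todo.id == id ? { ...todo, title } : todo));
}

const result = updateTitle(
  [{ id: 1, title: "Buy milk" }, { id: 2, title: "Write post" }],
  1,
  "Buy oat milk"
);
// result[0].title === "Buy oat milk"; result[1] is unchanged
```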

The code for the immediate updates is available at GitHub.

Digging deeper

We can set cookies in any server load function (or server action), not just the root layout. So, if some data are only used underneath a single layout, or even a single page, you could set that cookie value there. Moreover, if you’re not doing the trick I just showed of manually updating on-screen data, and instead want your loader to re-run after a mutation, then you could always set a new cookie value right in that load function, without any check against isDataRequest. It’ll be set initially, and then anytime you run a server action, that page layout will automatically invalidate and re-call your loader, re-setting the cache bust string before your universal loader is called.

Writing a reload function

Let’s wrap up by building one last feature: a reload button. We’ll give users a button that will clear the cache and then reload the current query.

We’ll add a dirt simple form action:

async reloadTodos({ cookies }) {
  cookies.set('todos-cache', +new Date(), { path: '/', httpOnly: false });
},

In a real project you probably wouldn’t copy/paste the same code to set the same cookie in the same way in multiple places, but for this post we’ll optimize for simplicity and readability.

Now let’s create a form to post to it:

<form method="POST" action="?/reloadTodos" use:enhance>
  <button>Reload todos</button>
</form>

That works!

UI after reload.

We could call this done and move on, but let’s improve this solution a bit. Specifically, let’s provide feedback on the page to tell the user the reload is happening. Also, by default, SvelteKit actions invalidate everything. Every layout, page, etc. in the current page’s hierarchy would reload. There might be some data that’s loaded once in the root layout that we don’t need to invalidate or re-load.

So, let’s focus things a bit, and only reload our to-dos when we call this function.

First, let’s pass a function to enhance:

<form method="POST" action="?/reloadTodos" use:enhance={reloadTodos}>

…and then define reloadTodos in our component’s script:

import { enhance } from "$app/forms";
import { invalidate } from "$app/navigation";

let reloading = false;
const reloadTodos = () => {
  reloading = true;

  return async () => {
    invalidate("reload:todos").then(() => {
      reloading = false;
    });
  };
};

We’re setting a new reloading variable to true at the start of this action. And then, in order to override the default behavior of invalidating everything, we return an async function. This function will run when our server action is finished (which just sets a new cookie).

Without this async function returned, SvelteKit would invalidate everything. Since we’re providing this function, it will invalidate nothing, so it’s up to us to tell it what to reload. We do this with the invalidate function. We call it with a value of reload:todos. This function returns a promise, which resolves when the invalidation is complete, at which point we set reloading back to false.

Lastly, we need to sync our loader up with this new reload:todos invalidation value. We do that in our loader with the depends function:

export async function load({ fetch, url, setHeaders, depends }) {
  depends("reload:todos");

  // rest is the same
}

And that’s that. depends and invalidate are incredibly useful functions. What’s cool is that invalidate doesn’t just take arbitrary values like the one we provided. We can also provide a URL, which SvelteKit will track, invalidating any loaders that depend on that URL. To that end, if you’re wondering whether we could skip the call to depends and invalidate our /api/todos endpoint altogether, you can, but you have to provide the exact URL, including the search term (and our cache value). So, you could either put together the URL for the current search, or match on the path name, like this:

invalidate(url => url.pathname == "/api/todos");

Personally, I find the solution that uses depends more explicit and simple. But see the docs for more info, of course, and decide for yourself.
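The predicate form of invalidate receives full URL objects, so the same check can be exercised directly against sample URLs outside SvelteKit (the origin below is arbitrary):

```javascript
// The predicate matches any request whose path is exactly /api/todos,
// regardless of its search term or cache-busting value.
const matchesTodos = url => url.pathname == "/api/todos";

const hit = matchesTodos(new URL("http://localhost/api/todos?search=post&cache=123"));
const miss = matchesTodos(new URL("http://localhost/api/other"));
// hit === true, miss === false
```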

If you’d like to see the reload button in action, the code for it is in this branch of the repo.

Parting thoughts

This was a long post, but hopefully not overwhelming. We dove into various ways we can cache data when using SvelteKit. Much of this was just a matter of using web platform primitives to set the correct cache and cookie values, knowledge of which will serve you in web development in general, beyond SvelteKit.

Moreover, this is something you absolutely do not need all the time. Arguably, you should only reach for these sort of advanced features when you actually need them. If your datastore is serving up data quickly and efficiently, and you’re not dealing with any kind of scaling problems, there’s no sense in bloating your application code with needless complexity doing the things we talked about here.

As always, write clear, clean, simple code, and optimize when necessary. The purpose of this post was to provide you those optimization tools for when you truly need them. I hope you enjoyed it!


Caching Data in SvelteKit originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

A Guide About Dialogflow CX Webhook Development

Dialogflow CX Webhooks can be developed using Google Cloud Functions or a REST API endpoint. Google Cloud Functions is Google’s implementation of serverless functions, available in GCP. Google recommends using Cloud Functions for Dialogflow CX development.

In this post, we will implement a Dialogflow CX Agent by using Golang, Protobuf, and Google Cloud Functions. This project is basically a "Hello, World!" example.

Introduction to Azure Data Lake Storage Gen2

Built on Azure Blob Storage, Azure Data Lake Storage Gen2 is a suite of features for big data analytics.

Azure Data Lake Storage Gen1 and Azure Blob Storage's capabilities are combined in Data Lake Storage Gen2. For instance, Data Lake Storage Gen2 offers scale, file-level security, and file system semantics. You will also receive low-cost, tiered storage with high availability and disaster recovery capabilities because these capabilities are built on Blob storage.

Agile Transformation With ChatGPT or McBoston?

TL;DR: Agile Transformation With ChatGPT or McBoston?

This article is another excursion into the nascent yet fascinating new technology of generative AI and LLMs, and the future of knowledge work. I was interested in learning more about a typical daily challenge many Agile practitioners face: How shall we successfully pursue an Agile transformation? Shall we outsource the effort to one of the big consultancies, lovingly dubbed McBoston? Or shall we embark on an Agile transformation with ChatGPT providing some guidance?

If technology can pass a Wharton MBA exam, maybe it deserves some attention. We thought that AI might initially come after simple office jobs. I am no longer sure about that. Maybe ChatGPT’s successor will start at the top of the food chain.