How to Set Date Time from Mac Command Line


Working on a web extension that ships to an app store can be difficult because, unlike a website, it isn't immediately modifiable. Since you cannot deploy updates instantly, you sometimes need to bake in hardcoded date-based logic. Testing future dates can be tricky if you don't know how to quickly change the date on your local machine.

To change the current date on your Mac, execute the following from the command line:

# Date format: [[[mm]dd]HH]MM[[cc]yy]
# e.g., June 14, 2024 at 10:25 AM:
sudo date 0614102524

Per the BSD date man page, the argument encodes the month, day, hour, minute, and (optionally) year, so this command sets the time along with the date. Resetting to the correct date and time afterward is easy as well!
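When you are done testing, you can snap back to the correct date and time. Assuming your Mac normally syncs with Apple's network time service, either force an immediate sync or re-enable automatic time:

# Force an immediate sync against Apple's NTP server
sudo sntp -sS time.apple.com

# Or turn automatic network time back on
sudo systemsetup -setusingnetworktime on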


Create a Striking Portfolio with These Free Webflow Templates


Are you ready to take your portfolio game to the next level? Let’s dive into the exciting world of Webflow templates for creating stunning portfolios! As a web designer, I can vouch for the game-changing benefits these templates bring to the table. Imagine having access to professionally designed layouts and functionalities without spending a dime. That’s the power of free Webflow portfolio templates.

Webflow templates offer a treasure trove of features that can make your portfolio shine. These templates are perfect for designers who want to create impressive designs with dynamic animations and responsive layouts. With just a few clicks, you can customize them to reflect your unique style and creativity. They are also a great way to save time and achieve a professional, polished look for your website, making it easy to showcase your work without getting bogged down in coding and design details. So, why settle for mediocrity when you can dazzle with a Webflow portfolio template?

Advantages of using free templates for creating a striking portfolio

Using free Webflow portfolio templates for your portfolio website comes with a plethora of benefits. These templates serve as a solid foundation that jumpstarts your design process, saving you time and effort in crafting a stunning portfolio. They offer a wide range of design options, from sleek and modern layouts to vibrant and creative designs. With these templates, you can easily customize colors, fonts, and layouts to match your style and showcase your work effectively.

One major advantage is that free Webflow templates are designed by professionals with user experience in mind, ensuring that your portfolio is not only visually appealing but also easy to navigate for your viewers. Additionally, these templates are responsive by nature, meaning your portfolio will look fantastic on any device, whether it’s a desktop, tablet, or smartphone. This adaptability ensures that your work shines across various platforms, attracting potential clients and employers to your impressive design skills without missing a beat.

Benefits of using Webflow templates for portfolio creation

When it comes to building a standout portfolio, using Webflow templates is like having a secret weapon in your design arsenal. These templates offer a plethora of benefits for creating a top-notch portfolio website. First off, they are incredibly user-friendly, making the design process smooth and stress-free. With pre-built layouts and elements, you can save valuable time and focus on showcasing your work in the best possible light.

Additionally, Webflow templates are highly customizable, allowing you to tailor every aspect of your portfolio to fit your unique style and vision. This level of flexibility ensures that your portfolio stands out from the rest and leaves a lasting impression on visitors. Moreover, the responsive nature of Webflow templates guarantees that your portfolio looks stunning on any device, from desktops to smartphones.

Overall, using Webflow templates for portfolio creation not only saves time and effort but also elevates the visual appeal and functionality of your website effortlessly. With these templates, you can create a professional and polished portfolio that speaks volumes about your design expertise.

Tips for maximizing the impact of your portfolio with free templates

When it comes to making the most out of your portfolio using free Webflow templates, remember to showcase your best work first. Grab the viewer’s attention with a stunning homepage that highlights your top projects. Keep it simple and clean to ensure your designs shine through without distractions. Additionally, personalize the template to match your style and brand identity. Add a touch of creativity by incorporating unique elements that reflect your personality.

Don’t forget to optimize your images for faster loading times, as nobody likes waiting around for a portfolio to load. Use high-quality visuals that clearly demonstrate your skills and expertise. And lastly, make sure your portfolio is easy to navigate. Organize your projects into categories or sections, making it effortless for visitors to explore and discover your creations with ease. With these tips, you can maximize the impact of your portfolio and leave a lasting impression on potential clients and employers.

In conclusion, using free Webflow portfolio templates is a game-changer for any aspiring web designer. These templates provide a solid foundation to kickstart your portfolio design journey without breaking a sweat. With a wide range of customizable options at your fingertips, crafting a visually stunning and user-friendly portfolio has never been easier. The convenience and flexibility of Webflow templates allow you to showcase your work in the best light possible, impressing potential clients and employers with your creative prowess. So, why make things harder for yourself when you can take advantage of these free resources to elevate your portfolio to new heights? Embrace the power of Webflow templates and watch your design dreams come to life with style and ease. Your portfolio deserves to stand out, and with Webflow, that’s exactly what it will do.

yrFolio – Portfolio Website Template
Source | Live Preview

Matteo Fabbiani – Personal Portfolio Web Template
Source | Live Preview

Personal Portfolio Webflow Website Template
Source | Live Preview

Relume Portfolio Webflow Template
Source | Live Preview

Solveig Portfolio Template
Source | Live Preview

Portfolio Template for Architecture
Source | Live Preview

Student Portfolio
Source | Live Preview

Portfolio Website Template
Source | Live Preview

SkillSet – Minimal Portfolio Template
Source | Live Preview

Dante Portfolio Webflow Template
Source | Live Preview

Free Personal Portfolio Web Template
Source | Live Preview

Product Design Portfolio Template
Source | Live Preview

Photographer’s Portfolio Template
Source | Live Preview

Free Minimalist Portfolio Webflow Template
Source | Live Preview

Uncommon Portfolio Web Template
Source | Live Preview

Indi Harris
Source | Live Preview

Douglas Pinho Portfolio
Source | Live Preview

Hire Flemming
Source | Live Preview

QuickSnap Photographer Template
Source | Live Preview

Overflow
Source | Live Preview

Darren Harroff
Source | Live Preview


Formatting web pages for various displays


First I want to say that my only interest in the technical details of web development is from an end-user perspective. I usually go to this site to get my weather forecasts. Their current format presents the forecasts horizontally, but it will likely be changing soon to vertical. My two most visited pages are the hourly and seven-day forecasts. It's always annoyed me that on my display (a 16:9 laptop) most of the space is wasted. On the seven-day page it doesn't matter, but the hourly page could easily display twelve hours or more of data.

But when the switch is made to the vertical format the hourly page will show only five hours of data with pointless "content continues below" page breaks. The seven day forecast shows only one day per screen. It seems to me that if you are planning, for example, a multi-day activity (perhaps a trip to the beach/cottage) it makes for easier planning to see multiple days (the more the better) on one page. For single-day events, perhaps a trip to the park, seeing more hours at once is much better than fewer.

My question (and I do have one) is: how difficult is it for the website to customize the HTML for the target device? I realize that many more people access this site on a smartphone than on a desktop/laptop computer, but shouldn't major sites like this try to please both smartphone users and dinosaurs?

Tabular Data Classification with Hugging Face Meta Tree Transformer


As a data scientist, I have extensively used the Hugging Face library for processing unstructured data such as images, text, and audio. My previous blogs have covered various transformer models for these types of data. Lately, however, I discovered that Hugging Face also provides transformer models for tabular data. One such transformer is the Meta Tree Transformer.

This article will explore using the Meta Tree Transformer model to classify tabular data, detailing each process step and providing insights based on the Bank Note Authentication dataset.

Installing and Importing Required Libraries

You must install and import the following libraries to run the code in this article.


!pip install metatreelib
!pip install --upgrade scikit-learn
!pip install imodels

from metatree.model_metatree import LlamaForMetaTree as MetaTree
from metatree.decision_tree_class import DecisionTree, DecisionTreeForest
from metatree.run_train import preprocess_dimension_patch
from transformers import AutoConfig
from sklearn.metrics import accuracy_score
import imodels # pip install imodels
import sklearn
from sklearn.model_selection import train_test_split
import numpy as np
import pandas as pd
import torch
from torch.utils.data import Dataset, DataLoader
import random

Loading and Preprocessing the Dataset

The dataset used in this tutorial is the Bank Note Authentication dataset, which you can download from Kaggle. The dataset contains features extracted from images of banknotes and is used to classify whether a banknote is authentic or not.

The dataset consists of the following columns:

  • variance
  • skewness
  • curtosis
  • entropy
  • class

The class column is the target variable, indicating whether the banknote is authentic (1) or not (0).

First, we need to read the dataset and preprocess it. We will use the pandas library to read the dataset from a CSV file.

The following code loads the dataset into a pandas DataFrame and displays the first few rows to get an overview of the data.


# Load the dataset
file_path = '/content/BankNote_Authentication.csv'  # Path to the dataset
df = pd.read_csv(file_path)

# Display the first few rows of the dataset
df.head()

Output:

[Table output: the first five rows of the Bank Note Authentication dataset]

Next, we split the dataset into training and testing sets using sklearn.model_selection.train_test_split. Here, 20% of the data is reserved for testing, ensuring we have sufficient data to evaluate the model's performance.


# Split the dataset into features and target variable
X = df.drop(columns=['class'])
y = df['class']

# Split the data into training and testing sets
train_X, test_X, train_y, test_y = train_test_split(X, y, test_size=0.2)

DataLoader for Batching

To handle the entire dataset in batches of 256, we will create a custom dataset class and use PyTorch's DataLoader to batch and shuffle the data. The batch size is set to 256 since the Meta Tree transformer expects the data to be in batches of 256 records.


class TabularDataset(Dataset):
    def __init__(self, features, labels):
        self.features = features
        self.labels = labels

    def __len__(self):
        return len(self.features)

    def __getitem__(self, idx):
        feature = self.features[idx]
        label = self.labels[idx]
        return torch.tensor(feature, dtype=torch.float32), torch.tensor(label, dtype=torch.float32)

# Convert data to tensors
train_features = train_X.values
train_labels = torch.nn.functional.one_hot(torch.tensor(train_y.values), num_classes=2).float().numpy()

# Create Dataset
train_dataset = TabularDataset(train_features, train_labels)

# Parameters
batch_size = 256

# Create DataLoader
train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)

Setting Up the Meta Tree Transformer Model

To begin with, we need to initialize the Meta Tree Transformer model and adjust its configuration to match our dataset, particularly the number of features and classes.

The model is configured to handle a different number of features by default. We set config.n_feature to the number of features in our dataset (train_X.shape[1]).

Similarly, the model is configured for a different number of output classes. We set config.n_class to the number of classes in our dataset (2).



# Initialize Model
model_name_or_path = "yzhuang/MetaTree"

config = AutoConfig.from_pretrained(model_name_or_path)
# Override config parameters to match your dataset
config.n_feature = train_X.shape[1]
config.n_class = 2

model = MetaTree.from_pretrained(
    model_name_or_path,
    config=config,
    ignore_mismatched_sizes=True
)

decision_tree_forest = DecisionTreeForest()

# Set the depth of the model
model.depth = 2

Training the Model with Batches

Next, we train the model using the batches provided by the DataLoader.


# Training loop
for batch_features, batch_labels in train_loader:
    # Prepare the batch for the model
    batch = {"input_x": batch_features, "input_y": batch_labels, "input_y_clean": batch_labels}
    batch = preprocess_dimension_patch(batch, n_feature=train_X.shape[1], n_class=2)

    # Generate decision tree
    outputs = model.generate_decision_tree(batch['input_x'], batch['input_y'], depth=model.depth)
    decision_tree_forest.add_tree(DecisionTree(auto_dims=outputs.metatree_dimensions, auto_thresholds=outputs.tentative_splits, input_x=batch['input_x'], input_y=batch['input_y'], depth=model.depth))

    print("Decision Tree Features: ", [x.argmax(dim=-1) for x in outputs.metatree_dimensions])
    print("Decision Tree Thresholds: ", outputs.tentative_splits)


Evaluating the Model

Finally, we evaluate the model's performance on the test set.


# Predict using the decision tree forest
test_X_tensor = torch.tensor(test_X.values, dtype=torch.float32)
tree_pred = decision_tree_forest.predict(test_X_tensor)

tree_pred = tree_pred.argmax(dim=-1).squeeze().numpy()

# Calculate accuracy
accuracy = accuracy_score(test_y, tree_pred)
print("MetaTree Test Accuracy: ", accuracy)

Output:


MetaTree Test Accuracy:  0.8727272727272727

Conclusion

The Meta Tree Transformer offers a powerful method for classifying tabular data by combining the interpretability of decision trees with the robust performance of transformer models. In this tutorial, we walked through the process of setting up the model, preprocessing the data, training on multiple batches, and evaluating the results.

In my experience, the performance of the Meta Tree Transformer was on par with simpler algorithms like Random Forest and AdaBoost. Experimenting with different parameters and datasets can further enhance its performance, making it a valuable addition to any data scientist's toolkit.
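If you want to reproduce that comparison, a minimal Random Forest baseline on the same train/test split might look like the sketch below; the hyperparameters are illustrative defaults, not values from this article:

from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Train a Random Forest baseline on the same split used above
rf = RandomForestClassifier(n_estimators=100, random_state=42)
rf.fit(train_X, train_y)

# Evaluate on the held-out test set
rf_pred = rf.predict(test_X)
print("Random Forest Test Accuracy: ", accuracy_score(test_y, rf_pred))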

Feel free to leave your feedback and the results you obtained using Transformer models on tabular data.

What Are CSS Container Style Queries Good For?


We’ve relied on media queries for a long time in the responsive world of CSS, but they have their share of limitations and have shifted focus more toward accessibility than responsiveness alone. This is where CSS Container Queries come in. They completely change how we approach responsiveness, shifting the paradigm away from a viewport-based mentality to one that is more considerate of a component’s context, such as its size or its styles.

Querying elements by their dimensions is one of the two things that CSS Container Queries can do, and, in fact, we call these container size queries to help distinguish them from their ability to query against a component’s current styles. We call these container style queries.

Existing container query coverage has been largely focused on container size queries, which enjoy 90% global browser support at the time of this writing. Style queries, on the other hand, are only available behind a feature flag in Chrome 111+ and Safari Technology Preview.

The first question that comes to mind is “What are these style query things?” followed immediately by “How do they work?” There are some nice primers on them that others have written, and they are worth checking out.

But the more interesting question about CSS Container Style Queries might actually be “Why should we use them?” The answer, as always, is nuanced and could simply be “it depends.” But I want to poke at style queries a little more deeply, not at the syntax level, but at what exactly they solve and what sort of use cases we might find ourselves reaching for them in our work if and when they gain browser support.

Why Container Queries

Talking purely about responsive design, media queries have simply fallen short in some aspects, but I think the main one is that they are context-agnostic: they only consider the viewport size when applying styles, without accounting for the size of an element’s parent or the content it contains.

This usually isn’t a problem since we only have a main element that doesn’t share space with others along the x-axis, so we can style our content depending on the viewport’s dimensions. However, if we stuff an element into a smaller parent and maintain the same viewport, the media query doesn’t kick in when the content becomes cramped. This forces us to write and manage an entire set of media queries that target super-specific content breakpoints.

Container queries break this limitation and allow us to query much more than the viewport’s dimensions.

How Container Queries Generally Work

Container size queries work similarly to media queries but allow us to apply styles depending on the container’s properties and computed values. In short, they allow us to make style changes based on an element’s computed width or height regardless of the viewport. This sort of thing was once only possible with JavaScript or the ol’ jQuery, as this example shows.
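For a sense of what that JavaScript approach involved, here is a minimal sketch that uses ResizeObserver to toggle a class at an arbitrary 700px breakpoint; the selector and class name are made up for illustration:

// Watch the element's rendered width and flag it when it gets narrow
const container = document.querySelector(".cards-container");

const observer = new ResizeObserver((entries) => {
  for (const entry of entries) {
    // contentRect.width is the element's current content-box width
    container.classList.toggle("narrow", entry.contentRect.width < 700);
  }
});

observer.observe(container);

CSS could then style .cards-container.narrow much the way a container query rule would, but with the tracking cost paid in script.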

As noted earlier, though, container queries can query an element’s styles in addition to its dimensions. In other words, container style queries can look at and track an element’s properties and apply styles to other elements when those properties meet certain conditions, such as when the element’s background-color is set to hsl(0 50% 50%).

That’s what we mean when talking about CSS Container Style Queries. It’s a proposed feature defined in the same CSS Containment Module Level 3 specification as CSS Container Size Queries — and one that’s currently unsupported by any major browser — so the difference between style and size queries can get a bit confusing as we’re technically talking about two related features under the same umbrella.

We’d do ourselves a favor to backtrack and first understand what a “container” is in the first place.

Containers

An element’s container is any ancestor with a containment context; it could be the element’s direct parent or perhaps a grandparent or great-grandparent.

A containment context means that a certain element can be used as a container for querying. Unofficially, you can say there are two types of containment context: size containment and style containment.

Size containment means we can query and track an element’s dimensions (i.e., aspect-ratio, block-size, height, inline-size, orientation, and width) with container size queries as long as it’s registered as a container. Tracking an element’s dimensions requires a little processing in the client. One or two elements are a breeze, but if we had to constantly track the dimensions of all elements — including resizing, scrolling, animations, and so on — it would be a huge performance hit. That’s why no element has size containment by default, and we have to manually register a size query with the CSS container-type property when we need it.

On the other hand, style containment lets us query and track the computed values of a container’s specific properties through container style queries. As it currently stands, we can only check for custom properties, e.g. --theme: dark, but soon we could check for an element’s computed background-color and display property values. Unlike size containment, we are checking for raw style properties before they are processed by the browser, alleviating performance and allowing all elements to have style containment by default.

Did you catch that? While size containment is something we manually register on an element, style containment is the default behavior of all elements. There’s no need to register a style container because all elements are style containers by default.

And how do we register a containment context? The easiest way is to use the container-type property. The container-type property will give an element a containment context and its three accepted values — normal, size, and inline-size — define which properties we can query from the container.

/* Size containment in the inline direction */
.parent {
  container-type: inline-size;
}

This example formally establishes a size containment. If we had done nothing at all, the .parent element is already a container with a style containment.

Size Containment

That last example illustrates size containment based on the element’s inline-size, which is a fancy way of saying its width. When we talk about normal document flow on the web, we’re talking about elements that flow in an inline direction and a block direction that corresponds to width and height, respectively, in a horizontal writing mode. If we were to rotate the writing mode so that it is vertical, then “inline” would refer to the height instead and “block” to the width.

Consider the following HTML:

<div class="cards-container">
  <ul class="cards">
    <li class="card"></li>
  </ul>
</div>

We could give the .cards-container element a containment context in the inline direction, allowing us to make changes to its descendants when its width becomes too small to properly display everything in the current layout. We keep the same syntax as in a normal media query but swap @media for @container:

.cards-container {
  container-type: inline-size;
}

@container (width < 700px) {
  .cards {
    background-color: red;
  }
}

Container syntax works almost the same as media queries, so we can use the and, or, and not operators to chain different queries together to match multiple conditions.

@container (width < 700px) or (width > 1200px) {
  .cards {
    background-color: red;
  }
}

Elements in a size query look for the closest ancestor with size containment so we can apply changes to elements deeper in the DOM, like the .card element in our earlier example. If there is no size containment context, then the @container at-rule won’t have any effect.

/* 👎 
 * Apply styles based on the closest container, .cards-container
 */
@container (width < 700px) {
  .card {
    background-color: black;
  }
}

Just looking for the closest container is messy, so it’s good practice to name containers using the container-name property and then specifying which container we’re tracking in the container query just after the @container at-rule.

.cards-container {
  container-name: cardsContainer;
  container-type: inline-size;
}

@container cardsContainer (width < 700px) {
  .card {
    background-color: #000;
  }
}

We can use the shorthand container property to set the container name and type in a single declaration:

.cards-container {
  container: cardsContainer / inline-size;

  /* Equivalent to: */
  container-name: cardsContainer;
  container-type: inline-size;
}

The other container-type we can set is size, which works exactly like inline-size — only the containment context is both the inline and block directions. That means we can also query the container’s height sizing in addition to its width sizing.

/* When container is less than 700px wide */
@container (width < 700px) {
  .card {
    background-color: black;
  }
}

/* When container is less than 900px tall */
@container (height < 900px) {
  .card {
    background-color: white;
  }
}

And it’s worth noting here that if two separate (not chained) container rules match, the most specific selector wins, true to how the CSS Cascade works.
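As a quick illustration (my own sketch, not one of the article’s demos): if both of the rules below match a 600px-wide container, an element with class="card featured" ends up navy, because the compound selector carries higher specificity:

@container (width < 700px) {
  .card {
    background-color: black;
  }
}

@container (width < 800px) {
  .card.featured {
    /* Wins on .card.featured elements: (0,2,0) beats (0,1,0) */
    background-color: navy;
  }
}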

So far, we’ve touched on the concept of CSS Container Queries at its most basic. We define the type of containment we want on an element (we looked specifically at size containment) and then query that container accordingly.

Container Style Queries

The third value that is accepted by the container-type property is normal, and it sets style containment on an element. Both inline-size and size are stable across all major browsers, but normal is newer and only has modest support at the moment.

I consider normal a bit of an oddball because we don’t have to explicitly declare it on an element since all elements are style containers with style containment right out of the box. It’s possible you’ll never write it out yourself or see it in the wild.

.parent {
  /* Unnecessary */
  container-type: normal;
}

If you do write it or see it, it’s likely to undo size containment declared somewhere else. But even then, it’s possible to reset containment with the global initial or revert keywords.

.parent {
  /* All of these (re)set style containment */
  container-type: normal;
  container-type: initial;
  container-type: revert;
}

Let’s look at a simple and somewhat contrived example to get the point across. We can define a custom property in a container, say a --theme.

.cards-container {
  --theme: dark;
}

From here, we can check if the container has that desired property and, if it does, apply styles to its descendant elements. We can’t directly style the container since it could unleash an infinite loop of changing the styles and querying the styles.

.cards-container {
  --theme: dark;
}

@container style(--theme: dark) {
  .cards {
    background-color: black;
  }
}

See that style() function? In the future, we may want to check if an element has a max-width: 400px through a style query instead of checking if the element’s computed value is bigger than 400px in a size query. That’s why we use the style() wrapper to differentiate style queries from size queries.

/* Size query */
@container (width > 60ch) {
  .cards {
    flex-direction: column;
  }
}

/* Style query */
@container style(--theme: dark) {
  .cards {
    background-color: black;
  }
}

Both types of container queries look for the closest ancestor with a corresponding containment-type. In a style() query, it will always be the parent since all elements have style containment by default. In this case, the direct parent of the .cards element in our ongoing example is the .cards-container element. If we want to query non-direct parents, we will need the container-name property to differentiate between containers when making a query.

.cards-container {
  container-name: cardsContainer;
  --theme: dark;
}

@container cardsContainer style(--theme: dark) {
  .card {
    color: white;
  }
}

Weird and Confusing Things About Container Style Queries

Style queries are completely new and bring something never seen in CSS, so they are bound to have some confusing qualities as we wrap our heads around them — some that are completely intentional and well thought-out and some that are perhaps unintentional and may be updated in future versions of the specification.

Style and Size Containment Aren’t Mutually Exclusive

One intentional perk, for example, is that a container can have both size and style containment. No one would fault you for expecting that size and style containment are mutually exclusive concerns, so setting an element to something like container-type: inline-size would make all style queries useless.

However, another funny thing about container queries is that elements have style containment by default, and there isn’t really a way to remove it. Check out this next example:

.cards-container {
  container-type: inline-size;
  --theme: dark;
}

@container style(--theme: dark) {
  .card {
    background-color: black;
  }
}

@container (width < 700px) {
  .card {
    background-color: red;
  }
}

See that? We can still query the elements by style even when we explicitly set the container-type to inline-size. This seems contradictory at first, but it makes sense, considering that style and size queries are computed independently. It’s better this way since the two kinds of queries don’t necessarily conflict with each other; a style query could change the colors in an element depending on a custom property, while a size query changes an element’s flex-direction when it gets too small for its contents.

But We Can Achieve the Same Thing With CSS Classes and IDs

Most container query guides and tutorials I’ve seen use similar examples to demonstrate the general concept, but I can’t stop thinking that, no matter how cool style queries are, we can achieve the same result using classes or IDs and with less boilerplate. Instead of passing the state as an inline style, we could simply add it as a class.

<ol>
  <li class="item first">
    <img src="..." alt="Roi's avatar" />
    <h2>Roi</h2>
  </li>
  <li class="item second"><!-- etc. --></li>
  <li class="item third"><!-- etc. --></li>
  <li class="item"><!-- etc. --></li>
  <li class="item"><!-- etc. --></li>
</ol>

Alternatively, we could add the position number directly inside an id so we don’t have to convert the number into a string:

<ol>
  <li class="item" id="item-1">
    <img src="..." alt="Roi's avatar" />
    <h2>Roi</h2>
  </li>
  <li class="item" id="item-2"><!-- etc. --></li>
  <li class="item" id="item-3"><!-- etc. --></li>
  <li class="item" id="item-4"><!-- etc. --></li>
  <li class="item" id="item-5"><!-- etc. --></li>
</ol>

Both of these approaches leave us with cleaner HTML than the container queries approach. With style queries, we have to wrap our elements inside a container — even if we don’t semantically need it — because of the fact that containers (rightly) are unable to style themselves.

We also have less boilerplate-y code on the CSS side:

#item-1 {
  background: linear-gradient(45deg, yellow, orange); 
}

#item-2 {
  background: linear-gradient(45deg, grey, white);
}

#item-3 {
  background: linear-gradient(45deg, brown, peru);
}

See the Pen Style Queries Use Case Replaced with Classes [forked] by Monknow.

As an aside, I know that using IDs as styling hooks is often viewed as a no-no, but that’s only because IDs must be unique in the sense that no two instances of the same ID are on the page at the same time. In this instance, there will never be more than one first-place, second-place, or third-place player on the page, making IDs a safe and appropriate choice in this situation. But, yes, we could also use some other type of selector, say a data-* attribute.

There is something that could add a lot of value to style queries: a range syntax for querying styles. This is an open feature that Miriam Suzanne proposed in 2023, the idea being that it queries numerical values using range comparisons just like size queries.

Imagine if we wanted to apply a light purple background color to the rest of the top ten players in the leaderboard example. Instead of adding a query for each position from four to ten, we could add a query that checks a range of values. The syntax is obviously not in the spec at this time, but let’s say it looks something like this just to push the point across:

/* Do not try this at home! */
@container leaderboard style(4 <= --position <= 10) {
  .item {
    background: linear-gradient(45deg, purple, fuchsia);
  }
}

In this fictional and hypothetical example, we’re:

  • Tracking a container called leaderboard,
  • Making a style() query against the container,
  • Evaluating the --position custom property,
  • Looking for a condition where the custom property is set to a value equal to a number that is greater than or equal to 4 and less than or equal to 10.
  • If the custom property is a value within that range, we set a player’s background color to a linear-gradient() that goes from purple to fuchsia.

This is very cool, but if this kind of behavior is likely to be done using components in modern frameworks, like React or Vue, we could also set up a range in JavaScript and toggle on a .top-ten class when the condition is met.

See the Pen Style Ranged Queries Use Case Replaced with Classes [forked] by Monknow.

Sure, it’s great to see that we can do this sort of thing directly in CSS, but it’s also something with an existing well-established solution.

Separating Style Logic From Logic Logic

So far, style queries don’t seem to be the most convenient solution for the leaderboard use case we looked at, but I wouldn’t deem them useless solely because we can achieve the same thing with JavaScript. I am a big advocate of reaching for JavaScript only when necessary and only in sprinkles, but style queries, the ones where we can only check for custom properties, are most likely to be useful when paired with a UI framework where we can easily reach for JavaScript within a component. I have been using Astro an awful lot lately, and in that context, I don’t see why I would choose a style query over programmatically changing a class or ID.

However, a case can be made that implementing style logic inside a component is messy. Maybe we should keep the logic regarding styles in the CSS away from the rest of the logic logic, i.e., the stateful changes inside a component like conditional rendering or functions like useState and useEffect in React. The style logic would be the conditional checks we do to add or remove class names or IDs in order to change styles.

If we backtrack to our leaderboard example, checking a player’s position to apply different styles would be style logic. We could indeed check that a player’s leaderboard position is between four and ten using JavaScript to programmatically add a .top-ten class, but it would mean leaking our style logic into our component. In React (for familiarity, but it would be similar to other frameworks), the component may look like this:

const LeaderboardItem = ({ position }) => {
  return (
    <li className={`item ${position >= 4 && position <= 10 ? "top-ten" : ""}`} id={`item-${position}`}>
      <img src="..." alt="Roi's avatar" />
      <h2>Roi</h2>
    </li>
  );
};

Besides this being ugly-looking code, adding the style logic in JSX can get messy. Meanwhile, style queries can pass the --position value to the styles and handle the logic directly in the CSS where it is being used.

const LeaderboardItem = ({ position }) => {
  return (
    <li className="item" style={{ "--position": position }}>
      <img src="..." alt="Roi's avatar" />
      <h2>Roi</h2>
    </li>
  );
};

Much cleaner, and I think this is closer to the value proposition of style queries. But at the same time, this example makes a large leap of assumption that we will get a range syntax for style queries at some point, which is not a done deal.

Conclusion

There are lots of teams working on making modern CSS better, and not all features have to be groundbreaking miraculous additions.

Size queries are definitely an upgrade from media queries for responsive design, but style queries appear to be more of a solution looking for a problem.

They simply don’t solve any specific issue, nor are they enough of an improvement over other approaches to replace them, at least as far as I am aware.

Even if, in the future, style queries will be able to check for any property, that introduces a whole new can of worms where styles are capable of reacting to other styles. This seems exciting at first, but I can’t shake the feeling it would be unnecessary and even chaotic: styles reacting to styles, reacting to styles, and so on with an unnecessary side of boilerplate. I’d argue that a more prudent approach is to write all your styles declaratively together in one place.

Maybe it would be useful for web extensions (like Dark Reader) so they can better check styles in third-party websites? I can’t clearly see it. If you have any suggestions on how CSS Container Style Queries can be used to write better CSS that I may have overlooked, please let me know in the comments! I’d love to know how you’re thinking about them and the sorts of ways you imagine yourself using them in your work.

RDLC reporting with parameters


I have a parameter in an RDLC report named "allocated_jobs"

There are no Available Values assigned.
It has 1 Specify Values: RTrim(Allocated = "True")

i.e., only rows whose Allocated field contains "True" are to be in the report.

Me.JobsDataReportViewer.LocalReport.ReportEmbeddedResource = "Data_Reporting.JobsList.rdlc"
*Data_Reporting is the stored report path in my.settings*

Dim param_allocatedjobs As New ReportParameter("allocated_jobs")
Dim reportparameters() As ReportParameter = {param_allocatedjobs}

Me.JobsAppTableAdapter.Fill(Me.ReportJobsDbDs.JobsApp)

Me.JobsDataReportViewer.LocalReport.SetParameters(reportparameters)

Me.JobsDataReportViewer.RefreshReport()   

The report runs fine, no errors, but the parameter doesn't filter the results as desired.

All help welcome, thank you.

Shane.

2-Page Login Pattern, And How To Fix It


Why do we see login forms split into multiple screens everywhere? Instead of typing email and password, we have to type email, move to the next page, and then type password there. This seems to be inefficient, to say the least.

Let’s see why login forms are split across screens, what problem they solve, and how to design a better experience for better authentication UX (video).

This article is part of our ongoing series on design patterns. It’s also an upcoming part of the 10h-video library on Smart Interface Design Patterns 🍣 and the upcoming live UX training as well. Use code BIRDIE to save 15% off.

The Problem With Login Forms

If there is one thing we’ve learned over the years in UX, it’s that designing for people is hard. This applies to login forms as well. People are remarkably forgetful. They often forget what email they signed up with or what service they signed in with last time (Google, Twitter, Apple, and so on).

One idea is to remind customers what they signed in with last time and perhaps make it a default option. However, it directly reveals what the user’s account is, which might be a privacy or security issue.

What if instead of showing all options to all customers all the time, we ask for email first, and then look up what service they used last time, and redirect customers to the right place automatically? Well, that’s exactly the idea behind 2-page logins.

Meet 2-Page-Logins

You might have seen them already. A few years ago, most login forms asked for email and password on one page; these days, it’s more common to ask only for the email first. When the user chooses to continue, the form asks for a password in a separate step. Brad explores some problems of this pattern.

A common reason for splitting the login form across pages is Single Sign-On (SSO) authentication. Large companies typically use SSO for corporate sign-ins of their employees. With it, employees log in only once every day and use only one set of credentials, which improves enterprise security.

The UX Intricacies of Single Sign-On (SSO)

SSO also helps with regulatory compliance, and it’s much easier to provision users with appropriate permissions and revoke them later at once. So, if an employee leaves, all their accounts and data can be deleted at once.

To support both business customers and private customers, companies use 2-step-login. Users need to type in their email first, then the validator checks what provider the email is associated with and redirects users there.

Users rarely love this experience. Sometimes, they have multiple accounts (private and business) with one service. Also, 2-step-logins often break autofill and password managers. And for most users, login/pass is way faster than 2-step-login.

Of course, typically, there are dedicated corporate login pages for employees to sign in, but they often head directly to Gmail, Figma, and so on instead and try to sign in there. However, they won’t be able to log in as they must sign in through SSO.

Bottom line: the pattern works well for SSO users, but for non-SSO users, it results in a frustrating UX.

Alternative Solution: Conditional Reveal of SSO

There is a way to work around these challenges. We could use a single-page look-up with email and password input fields as a default. Once a user has typed in their email, we detect whether SSO authentication is enabled for it.

If Single Sign-On (SSO) is enabled for that email, we show a Single Sign-On option and default to it. We could also make the password field optional or disabled.

If SSO isn’t enabled for that email, we proceed with the regular email/password login. This is not much hassle, but it saves trouble for both private and business accounts.
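A rough sketch of that conditional reveal could look like the snippet below. The /auth/sso-lookup endpoint and the element IDs are hypothetical stand-ins for whatever your backend and markup actually provide:

// Hypothetical markup: <input id="email">, <input id="password">, <button id="sso-login" hidden>
const emailInput = document.querySelector("#email");
const passwordInput = document.querySelector("#password");
const ssoButton = document.querySelector("#sso-login");

emailInput.addEventListener("change", async () => {
  // Ask the backend whether this email is associated with an SSO provider
  const response = await fetch(`/auth/sso-lookup?email=${encodeURIComponent(emailInput.value)}`);
  const { ssoEnabled } = await response.json();

  // Reveal the SSO option and disable (don't hide) the password field
  ssoButton.hidden = !ssoEnabled;
  passwordInput.disabled = ssoEnabled;
});

In practice, you would debounce this or run the lookup on blur so the endpoint isn't hit on every keystroke.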

Key Takeaways

🤔 People often forget what email they signed up with.
🤔 They also forget the auth service they signed in with.
🤔 Companies use Single Sign-On (SSO) for corporate sign-in.
🤔 Individual accounts still need email and password for login.
✅ 2-step login: ask for email, then redirect to the right service.

✅ 2-step-login replaces “social” sign-in for repeat users.
✅ It directs users rather than giving them roadblocks.
🤔 Users still keep forgetting the email they signed in with.
🤔 Sometimes, users have multiple accounts with one service.
🚫 2-step logins often break autofill and password managers.
🚫 For most users, login/pass is way faster than 2-step-login.

✅ Better: start with one single page with login and password.
✅ As users type their email, detect if SSO is enabled for them.
✅ If it is, reveal an SSO-login option and set a default to it.
✅ Otherwise, proceed with the regular password login.
✅ If users must use SSO, disable the password field — don’t hide it.

Wrapping Up

Personally, I haven’t tested the approach, but it might be a good alternative to 2-page logins — both for SSO and non-SSO users. Keep in mind, though, that SSO authentication might or might not require a password, as sometimes login happens via Yubikey or Touch-ID or third parties (e.g., OAuth).

Also, eventually, users will be locked out; it’s just a matter of time. So, do use magic links for password recovery or access recovery, but don’t mandate it as a regular login option. Switching between applications is slow and causes mistakes. Instead, nudge users to enable 2FA: it’s both usable and secure.

And most importantly, test your login flow with the tools that your customers rely on. You might be surprised how broken their experience is if they rely on password managers or security tools to log in. Good luck, everyone!


Meet Smart Interface Design Patterns

If you are interested in similar insights around UX, take a look at Smart Interface Design Patterns, our 10h-video course with 100s of practical examples from real-life projects — with a live UX training later this year. Everything from mega-dropdowns to complex enterprise tables — with 5 new segments added every year. Jump to a free preview.


Answer to my 100% tools thread


Thanks everybody for the discussion. So here's the thing.
I was invited to an interview with a company to do their SEO. They gave me access to their SemRush account and said: find us a keyword that neither we nor our competitors have created a blog post about, and it should be informational, not commercial.
So I used SemRush and found a keyword. Here's the catch: I relied completely on SemRush, which showed the keyword as informational, but when the interviewer typed that keyword into Google, all the SERPs for it were commercial.
So my point is that you cannot completely rely on tools.

The Scent Of UX: The Unrealized Potential Of Olfactory Design


Imagine that you could smell this page. The introduction would emit a subtle scent of sage and lavender to set the mood. Each paragraph would fill your room with the coconut oil aroma, helping you concentrate and immerse in reading. The fragrance of the comments section, resembling a busy farmer’s market, would nudge you to share your thoughts and debate with strangers.

How would the presence of smells change your experience reading this text or influence your takeaways?

Scents are everywhere. They fill our spaces, bind our senses to objects and people, alert us to dangers, and arouse us. Smells have so much influence over our mood and behavior that hundreds of companies are busy designing fragrances: for retail, enticing visitors to purchase more; for hotels, making customers feel at home; and for amusement parks, evoking a warm sense of nostalgia.

At the same time, the digital world, where we spend our lives working, studying, shopping, and resting, remains entirely odorless. Our smart devices are not designed to emit or recognize scents, and every corner of the Internet, including this page, smells exactly the same.

We watch movies, play games, study, and order dinner, but our sense of smell is left unengaged. The lack of odors rarely bothers us, but occasionally, we choose analog things like books merely because their digital counterparts fail to connect with us at the same level.

Could the presence of smells improve our digital experiences? What would it take to build the “smelly” Internet, and why hasn't it been done before? Last but not least, what power do scents hold over our senses, memory, and health, and how could we harness it for the digital world?

Let’s dive deep into a fascinating and underexplored realm of odors.

Olfactory Design For The Real World

Why Do We Remember Smells?

In his novel In Search of Lost Time, French writer Marcel Proust describes a sense of déjà vu he experienced after tasting a piece of cake dipped in tea:

“Immediately the old gray house upon the street rose up like a stage set… the house, the town, the square where I was sent before lunch, the streets along which I used to run errands, the country roads we took… the whole of Combray and of its surroundings… sprang into being, town and gardens alike, all from my cup of tea.”

— Marcel Proust

The Proust Effect, the phenomenon of an ‘involuntary memory’ evoked by scents, is a common occurrence. It explains how the presence of a familiar smell activates areas in our brain responsible for odor recognition, causing us to experience a strong, warm, positive sense of nostalgia.

Smells have a potent and almost magical impact on our ability to remember and recognize objects and events. “The nose makes the eyes remember”, as a renowned Finnish architect Juhani Pallasmaa puts it: a single droplet of a familiar fragrance is often enough to bring up a wild cocktail of emotions and recollections, even those that have long been forgotten.

A memory of a place, a person, or an experience is often a memory of their smell that lingers long after the odor is gone. J. Douglas Porteous, Professor of Geography at the University of Victoria, coined the term Smellscape to describe how a collective of smells in each particular area form our perception, define our attitude, and craft our recollection of it.

To put it simply, we choose to avoid beautiful places and forget delicious meals when their odors are not to our liking. Pleasant aromas, on the other hand, alter our memory, make us overlook flaws and defects, or even fall in love.

With such an immense power that scents hold over our perception of reality, it comes as no surprise they have long become a tool in the hands of brand and service designers.

Scented Advertising

What do a luxury car brand, a cosmetics store, and a carnival ride have in common? The answer is that they all have their own distinct scents.

Carefully crafted fragrances are widely used to create brand identities, make powerful impressions, and differentiate brands “emotionally and memorably”.

Some choose to complement visual identities with subtle, tailored aromas. 12.29, a creative “olfactive branding company,” developed the “scent identity” for Cadillac, a “symbol of self-expression representing the irrepressible pursuit of life.”

The branded Cadillac scent is diffused in dealerships and auto shows around the world, evoking a sense of luxury and class. Customers are expected to remember Cadillac better for its “signature nutty coffee, dark leather, and resinous amber notes”, forging a strong emotional connection with the brand.

Next time they think of Cadillac, their brain will recall its signature fragrance and the way it made them feel. Cadillac is ready to bet they will not even consider other brands afterwards.

Others may be less subtle and employ more aggressive, fragrant marketing tactics. LUSH, a British cosmetics retailer, is known for its distinct smells. Although even the company co-founder admits that odors can be overwhelming for some, LUSH’s scents play an important role in crafting the brand’s identity.

Indeed, the aroma of their stores is so recognizable that it lures customers in from afar with ease, and few walk away without forever remembering the brand’s distinct smell.

However, retail is not the only area that employs discernible smells.

Disney takes a holistic approach to service design, carefully considering every aspect that influences customer satisfaction. Smells have long been a part of the signature “Disney experience”: the main street smells like pastry and popcorn, Spaceship Earth is filled with the burning wood aroma, and Soarin’ is accompanied by notes of orange and pine.

Dozens of scent-emitting devices, Smellitzers, are responsible for adding scents to each experience. Deployed around each park and perfectly synced with every other sensory stimulus, they “shoot scents toward passersby” and “trigger memories of childhood nostalgia.”

As shown in the patent, Smellitzer is a rather simple odor delivery system designed to “enhance the sense of flight created in the minds of the passengers.” Scents are carefully curated and manufactured to evoke precise emotions without disrupting the ride experience.

Disney’s attractions, lanes, and theaters are packed with smell-emitting gadgets that distribute sweet and savoury notes. The visitors barely notice the presence of added scents, but later inevitably experience a sudden but persistent urge to return to the park.

Could it be something in the air, perhaps?

Well-curated, timely delivered, recognizable scents can be a powerful ally in the hands of a designer.

They can soothe a passenger during a long flight with the subtle notes of chamomile and mint or seduce a hungry shopper with the familiar aroma of freshly baked cinnamon buns. Scents can create and evoke great memories, amplify positive emotions, or turn casual buyers into eager and loyal consumers.

Unfortunately, smells can also ruin otherwise decent experiences.

Scented Entertainment

Why Fragrant Cinema Failed

In 1912, Aldous Huxley, author of the dystopian novel Brave New World, published an essay “Silence is Golden”, reflecting on his first experience watching a sound film. Huxley despised cinema, calling it the “most frightful creation-saving device for the production of standardized amusement”, and the addition of sound made the writer concerned for the future of entertainment. Films engaged multiple senses but demanded no intellectual involvement, becoming more accessible, more immersive, and, as Huxley feared, more influential.

“Brave New World,” published in 1932, features the cinema of the future: a multisensory entertainment complex designed to distract society from seeking a deeper sense of purpose in life. Attendees enjoy a “scent organ” playing “a delightfully refreshing Herbal Capriccio — rippling arpeggios of thyme and lavender, of rosemary, basil, myrtle, tarragon,” and get to experience every physical stimulation imaginable.

Huxley’s critical take on the state of the entertainment industry was spot-on. Obsessed with the idea of multisensory entertainment, studios did not take long to begin investing in immersive experiences. The 1950s were the age of experiments designed to attract more viewers: colored cinema, 3D films, and, of course, scented movies.

In 1960, two films hit the American theaters: Scent of Mystery, accompanied by the odor-delivery technology called “Smell–O–Vision”, and Behind the Great Wall, employing the process named AromaRama. Smell–O–Vision was designed to transport scents through tubes to each seat, much like Disney’s Smellitzers, whereas AromaRama distributed smells through the theater’s ventilation.

Both scented movies were panned by critics and viewers alike. In his review for the New York Times, Bosley Crowther wrote that “...synthetic smells [...] occasionally befit what one is viewing, but more often they confuse the atmosphere”. Audiences complained about smells being either too subtle or too overpowering and the machines disrupting the viewing experience.

The groundbreaking technologies were soon forgotten, and all plans to release more scented films were scrapped.

Why did odors, so efficient at manufacturing nostalgic memories of an amusement park, fail to entertain the audience at the movies? On the one hand, it may be attributed to the technological limitations of the time. For instance, AromaRama diffused the smells into the ventilation, which significantly delayed the delivery and required scents to be removed between scenes. Suffice it to say the viewers did not enjoy the experience.

However, there could be other possible explanations.

First of all, digital entertainment is traditionally odorless. Viewers do not anticipate movies to be accompanied by smells, and their brains are conditioned to ignore them. Researchers call it “inattentional anosmia”: people connect their enjoyment with what they see on the screen, not what they smell or taste.

Moreover, background odors tend to fade and become less pronounced with time. A short exposure to a pleasant odor may be complementary. For instance, viewers could smell orange as the character in “Behind the Great Wall” cut and squeezed the fruit: an “impressive” moment, as admitted by critics. However, left to linger, even the most pleasant scents can leave the viewer uninvolved or irritated.

Finally, cinema does not require active sensory involvement. Viewers sit still in silence, rarely even moving their heads, while their sight and hearing are busy consuming and interpreting the information. Immersion requires suspension of disbelief: well-crafted films force the viewer to forget the reality around them, but the addition of scents may disrupt this state, especially if scents are not relevant or well-crafted.

For the scented movie to engage the audience, smells must be integrated into the film’s events and play an important role in the viewing experience. Their delivery must be impeccable: discreet, smooth, and perfectly timed. In time, perhaps, we may see the revival of scented cinema. Until then, rare auteur experiments and 4D–cinema booths at carnivals will remain the only places where fragrant films will live on.

Fortunately, the lessons from the early experiments helped others pave the way for the future of fragrant entertainment.

Immersive Gaming

Unlike movies, video games require active participation. Players are involved in crafting the narrative of the game and, as such, may expect (and appreciate) a higher degree of realism. Virtual Reality is a good example of technology designed for full sensory stimulation.

Modern headsets are impressive, but several companies are already working hard on the next-gen tech for immersive gaming. Meta and Manus are developing gloves that make virtual elements tangible. Teslasuit built a full-body suit that captures motion and biometrics, provides haptic feedback, and emulates sensations for objects in virtual reality. We may be just a few steps away from virtual multi-sensory entertainment being as widespread as mobile phones.

Scents are coming to VR, too, albeit at a slower pace, with a few companies already selling devices for fragrant entertainment. For instance, GameScent has developed a cube that can distribute up to 8 smells, from “gunfire” and “explosion” to “forest” and “storm”, using AI to sync the odors with the events in the game.

The vast majority of experiments, however, occur in the labs, where researchers attempt to understand how smells impact gamers and test various concepts. Some assign smells to locations in a VR game and distribute them to players; others have the participants use a hand-held device to “smell” objects in the game.

The majority of studies demonstrate promising results. The addition of fragrances creates a deeper sense of immersion and enhances realism in virtual reality and in a traditional gaming setting.

A notable example of the latter is “Tainted”, an immersive game based on South-East Asian folklore, developed by researchers in 2017. The objective of the game is to discover and burn banana trees, where the main antagonist of the story — a mythical vengeful spirit named Pontianak — is traditionally believed to hide.

The way “Tainted” incorporates smells into the gameplay is quite unique. A scent-emitting module, placed in front of the player, diffuses fragrances to complement the narrative. For instance, the smell of banana signals the ghost’s presence, whereas pineapple aroma means that a flammable object required to complete the quest is nearby. Odors inform the player of dangers, give directions, and become an integral part of the gaming experience, like visuals and sound.

Gaming is not the only field experimenting with scent-driven immersion; education is, too. Some of the most creative examples of scented learning come from places that combine education and entertainment, most notably, museums.

Jorvik Viking Centre is famous for its use of “smells of Viking-age York” to capture the unique atmosphere of the past. Its scented halls, holograms, and entertainment programs turn a former archaeological site into a carnival ride that teleports visitors into the 10th century to immerse them in the daily life of the Vikings.

Authentic smells are the center’s distinct feature, an integral part of its branding and marketing, and an important addition to its collection. Smells are responsible for making Jorvik exhibitions so memorable, and hopefully, for visitors walking away with a few Viking trivia facts firmly stuck in their heads.

At the same time, learning is becoming increasingly digital, from mobile apps for foreign languages to student portals and online universities. Smart devices strive to replace the classroom, with its analog textbooks, papers, gel pens, and teachers. Virtual Reality is a step towards the future of immersive digital education, and odors may play a more significant role in making it even more efficient.

Education will undoubtedly continue leveraging the achievements of the digital revolution to complement its existing tools. Tablets and Kindles are on their way to replace textbooks and pens. Phones are no longer deemed a harmful distraction that causes brain cancer.

Odors, in turn, are becoming “learning supplements”. Teachers and parents have access to personalized diffusers that distribute the smell of peppermint to enhance students’ attention. Large scent-emitting devices for educational facilities are available on the market, too.

At the same time, inspired by the dream of uploading knowledge straight into our brains, we’ve discovered a way to learn things in our sleep using smells. Several studies have shown that exposure to scents during sleep significantly improves cognitive abilities and memory. More than that, smells can activate our memory while we sleep and solidify what we have learnt while awake.

Odors may not replace textbooks and lectures, but their addition will make remembering and recalling things significantly easier. In fact, researchers from MIT built and tested a wearable scent-emitting device that can be used for targeted memory reactivation.

In time, we will undoubtedly see more smart devices that make use of scents for memory enhancement, training, and entertainment. Integrated into the ecosystems of gadgets, olfactory wearables and smart home appliances will improve our well-being, increase productivity, and even detect early symptoms of illnesses.

There is, however, a caveat.

The Challenging UX Of Scents

We know very little about smells.

Until 2004, when Richard Axel and Linda Buck received a Nobel Prize for identifying the genes that control odor receptors, we didn’t even know how our bodies processed smells or that different areas in our brains were activated by different odors.

We know that our experience with smells is deep and intimate, from the memories they create to the emotions they evoke. We are aware that unpleasant scents linger longer and have a stronger impact on our mental state and memory. Finally, we understand that intensity, context, and delivery matter as much as the scent itself and that a decent aroma diffused out of place ruins the experience.

Thus, if we wish to build devices that make the best use of scents, we need to follow a few simple principles.

Design Principle #1: Tailor The Scents To Each User

In his article about Smellscapes, J. Douglas Porteous writes:

“The smell of a certain institutional soap may carry a person back to the purgatory of boarding school. A particular floral fragrance reminds one of a lost love. A gust of odour from an ethnic spice emporium may waft one back, in memory, to Calcutta.”

— J. Douglas Porteous

Smells revive hidden memories and evoke strong emotions, but their connection to our minds is deeply personal. A rich, spicy aroma of freshly roasted coffee beans will not have the same impact on different people, and in order to use scents in learning, we need to tailor the experience to each user.

In order to maximize the potential of odors in immersion and learning, we need to understand which smells have the most impact on the user. By filtering out the smells that the user finds unpleasant or associates with sad events in their past, we can reduce any potential negative effect on their wellness or memory.

Design Principle #2: Stick To The Simpler Smells

Humans are notoriously bad at describing odors.

Very few languages in the world feature specific terms for smells. For instance, the speakers of Jahai, a language in Malaysia, enjoy the privilege of having specific names for scents like “bloody smell that attracts tigers” and “wild mango, wild ginger roots, bat caves, and petrol”.

English, on the other hand, often uses adjectives associated with flavor (“smoky vanilla”) or comparison (“smells like orange”) to describe scents. For centuries, we have been trying to work out a system that could help cluster odors.

Aristotle classified all odors into six groups: sweet, acid, severe, fatty, sour, and fetid (unpleasant). Carl Linnaeus expanded it to seven types: aromatic, fragrant, alliaceous (garlic), ambrosial (musky), hircinous (goaty), repulsive, and nauseous. Hans Henning arranged all scent groups in a prism. None of the existing classifications, however, help accurately describe complex smells, which inevitably makes it harder to recreate them.

Academics have developed several comprehensive lists, for instance, the Odor Character Profiling that contains 146 unique descriptors. Pleasant smells from the list are easier to reproduce than unique and sophisticated odors.

Although an aroma of the “warm touch of an early summer sun” may work better for a particular user than the smell of an apple pie, the high price of getting a complex scent wrong makes sticking to simpler smells a reasonable trade-off.

Design Principle #3: Ensure Stable And Convenient Delivery

Nothing can ruin a good olfactory experience more than an imperfect delivery system.

Disney’s Smellitzers and Jorvik’s scented exhibition set the standard for discreet, contextual, and consistent inclusion of smells to complement the experience. Their diffusers are well-concealed, and odors do not come off as overwhelming or out of place.

On the other hand, the failure of scented movies from the 1950s can at least partially be attributed to poorly designed aroma delivery systems. Critics remembered that even the purifying treatment that was used to clear the theater air between scenes left a “sticky, sweet” and “upsetting” smell.

Good delivery systems are often simple and focus on augmenting the experience without disrupting it. For instance, eScent, a scent-enhanced FFP3 mask, is engineered to reduce stress and improve the well-being of frontline workers. The mask features a slot for applicators infused with essential oil; users can choose fragrances and swap the applicator whenever they want. Besides that, eScent is no different from its “analog” predecessor: it does not require special equipment or preparation, and the addition of smells does not alter the experience of wearing a mask.

In The Not Too Distant Future

We may know little about smells, but we are steadily getting closer to harnessing their power.

In 2022, Alex Wiltschko, a former Google staff research scientist, founded Osmo, a company dedicated to “giving computers a sense of smell.” In the long run, Osmo aspires to use its knowledge to manufacture scents on demand from sustainable synthetic materials.

Today, the company operates as a research lab, using a trained AI to predict the smell of a substance by analyzing its molecular structure. Osmo’s first tests demonstrated some promising results, with the machine accurately describing the scents in 53% of cases.

Should Osmo succeed at building a machine capable of recognizing and predicting smells, it will change the digital world forever. How will we interact with our smart devices? How will we use their newly discovered sense of smell to exchange information, share precious memories with each other, or relive moments from the past? Is now the right time for us to come up with ideas, products, and services for the future?

Odors are a booming industry that offers designers and engineers a unique opportunity to explore new and brave concepts. With the help of smells, we can transform entire industries, from education to healthcare, crafting immersive multi-sensory experiences for learning and leisure.

Smells are a powerful tool that requires precision and perfection to reach the desired effect. Our past shortcomings may have tainted the reputation of scented experiences, but recent progress demonstrates that we have learnt our lessons well. Modern technologies make it even easier to continue the explorations and develop new ways to use smells in entertainment, learning, and wellness — in the real world and beyond.

Our digital spaces may be devoid of scents, but they will not remain odorless for long.

Top Free Tools for Creating Accessible Email Designs

Creating accessible emails is crucial for web designers and students alike. Ensuring your emails are accessible widens your audience and provides a better user experience for everyone. Let’s explore some free tools to help you design accessible emails efficiently.

Why Accessibility in Email Design Is Important

Before we dive into the tools, it’s essential to understand why accessibility matters in email design. Picture an email that you can’t read because the text is too small or the color contrast is poor. For many, this isn’t just a minor inconvenience—it can render the content completely inaccessible. Ensuring your emails are accessible is not only a good practice but a necessity for inclusive design.

Accessibility Insights for Web

Accessibility Insights for Web is a browser extension that helps you find and fix accessibility issues in your email design. It provides automated checks and guided manual assessments, making it easy to ensure your emails are accessible to all users.

Usage Example:

Install the browser extension.

Run the automated checks on your email’s HTML.

Follow the guided assessments to fix any identified issues.

WAVE Web Accessibility Evaluation Tool

WAVE by WebAIM is a comprehensive tool for analyzing the accessibility of your email designs. It highlights issues like missing alt text, low contrast, and more, providing you with a detailed report.

Usage Example:

Copy and paste your email HTML into the WAVE tool.

Review the highlighted issues and follow the suggestions to improve accessibility.

Email On Acid

Email on Acid offers a free accessibility checker as part of its suite of email testing tools. This tool examines your email for various accessibility concerns, including screen reader compatibility and color contrast.

Usage Example:

Upload your email design to Email on Acid.

Run the accessibility checker to get a detailed report.

Address the issues flagged by the tool.

Litmus

Litmus provides a free accessibility checker within its suite of email testing tools. It helps you ensure that your emails are accessible by checking for issues such as color contrast and screen reader compatibility.

Usage Example:

Upload your email design to Litmus.

Use the accessibility checker to identify and fix issues.

Ensure your email meets accessibility standards before sending.

accessiBe

accessiBe offers a range of accessibility tools, including a free audit tool that helps you identify accessibility issues in your email designs. It provides actionable insights to help you make your emails more accessible.

Usage Example:

Use the accessiBe audit tool to scan your email HTML.

Review the report and implement the suggested changes to improve accessibility.

Accessible Email

Accessible-Email.org provides guidelines and resources for creating accessible emails. Their online tool checks your email design against a comprehensive list of accessibility criteria.

Usage Example:

Paste your email HTML into the tool.

Receive a detailed analysis with actionable recommendations.

Microsoft Outlook Accessibility Checker

Microsoft Outlook includes an accessibility checker that helps ensure your emails are accessible to all recipients. This tool is particularly useful for those who use Outlook for email design and distribution.

Usage Example:

Compose your email in Microsoft Outlook.

Use the built-in accessibility checker to identify and fix issues.

Ensure your email meets accessibility standards before sending.

AChecker

AChecker is a web accessibility evaluation tool that helps you identify accessibility issues in your email designs. It provides a detailed report with recommendations for improvements.

Usage Example:

Copy and paste your email HTML into AChecker.

Review the report and implement the suggested changes to enhance accessibility.

Gmail Accessibility Features

Gmail offers several built-in accessibility features that help you create and manage accessible emails. These features include screen reader support, keyboard shortcuts, and more.

Usage Example:

Compose your email in Gmail.

Utilize the accessibility features to ensure your email is accessible.

Test your email with screen readers to verify compatibility.

PutsMail

PutsMail allows you to test your email designs in various email clients and devices. While not specifically an accessibility tool, it helps you ensure that your email renders correctly across different platforms, which is a key aspect of accessibility.

Usage Example:

Upload your email design to PutsMail.

Test your email across various clients and devices.

Make adjustments to ensure consistent and accessible rendering.

How to Integrate These Tools into Your Workflow

To maintain a speedy workflow while ensuring accessibility, integrate these tools into your design process. Start by checking color contrast early on, then validate your HTML with tools like WAVE and AChecker before finalizing your email. Finally, test with screen readers to catch any issues that automated tools might miss.
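
To make those checks concrete, here is a minimal sketch of an email skeleton that passes the most common automated checks these tools run: a language declaration, meaningful alt text, presentation roles on layout tables, and readable font sizes. All content and URLs here are illustrative:

```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>Monthly Newsletter</title> <!-- announced by screen readers -->
  </head>
  <body>
    <!-- Layout tables need role="presentation" so screen readers
         don't announce them as data tables -->
    <table role="presentation" width="100%" cellpadding="0" cellspacing="0">
      <tr>
        <td style="font-family: Arial, sans-serif; font-size: 16px; color: #1a1a1a; background-color: #ffffff;">
          <h1 style="font-size: 24px; margin: 0 0 16px;">May Update</h1>
          <!-- Every informative image needs meaningful alt text -->
          <img src="https://example.com/hero.png" alt="Team photo from the spring offsite" width="600">
          <!-- Link text should make sense out of context -->
          <p><a href="https://example.com/post">Read the full May update</a></p>
        </td>
      </tr>
    </table>
  </body>
</html>
```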

Engagement and Practical Tips

To keep your workflow efficient, consider creating a checklist based on the tools and their usage. Here’s a quick example:

Initial Design:
  • Use a color contrast checker to select accessible colors.

HTML Development:
  • Validate your HTML with WAVE and Accessible-Email.org.

Final Review:
  • Test with Litmus for comprehensive accessibility checks.
  • Use VoiceOver and NVDA to ensure screen reader compatibility.

By following this process, you can streamline your email design workflow while ensuring accessibility.

Creating accessible emails doesn’t have to be complicated. With the right tools and a well-integrated workflow, you can design emails that everyone can enjoy. Remember, accessibility is not just a best practice—it’s a commitment to inclusivity and a broader reach. Whether you’re a seasoned web designer or a student just starting, these tools will help you make your emails accessible and effective.

Empower your designs with accessibility, and you’ll not only reach more people but also demonstrate your commitment to creating inclusive digital experiences.

The post Top Free Tools for Creating Accessible Email Designs appeared first on CSS Author.

How To Hack Your Google Lighthouse Scores In 2024

This article is sponsored by Sentry.io

Google Lighthouse has been one of the most effective ways to gamify and promote web page performance among developers. Using Lighthouse, we can assess web pages based on overall performance, accessibility, SEO, and what Google considers “best practices”, all with the click of a button.

We might use these tests to evaluate out-of-the-box performance for front-end frameworks or to celebrate performance improvements gained by some diligent refactoring. And you know you love sharing screenshots of your perfect Lighthouse scores on social media. It’s a well-deserved badge of honor worthy of a confetti celebration.

Just the fact that Lighthouse gets developers like us talking about performance is a win. But, whilst I don’t want to be a party pooper, the truth is that web performance is far more nuanced than this. In this article, we’ll examine how Google Lighthouse calculates its performance scores, and, using this information, we will attempt to “hack” those scores in our favor, all in the name of fun and science — because in the end, Lighthouse is simply a good, but rough guide for debugging performance. We’ll have some fun with it and see to what extent we can “trick” Lighthouse into handing out better scores than we may deserve.

But first, let’s talk about data.

Field Data Is Important

Local performance testing is a great way to understand if your website performance is trending in the right direction, but it won’t paint a full picture of reality. The World Wide Web is the Wild West, and collectively, we’ve almost certainly lost track of the variety of device types, internet connection speeds, screen sizes, browsers, and browser versions that people are using to access websites — all of which can have an impact on page performance and user experience.

Field data — and lots of it — collected by an application performance monitoring tool like Sentry from real people using your website on their devices will give you a far more accurate report of your website performance than your lab data collected from a small sample size using a high-spec super-powered dev machine under a set of controlled conditions. Philip Walton reported in 2021 that “almost half of all pages that scored 100 on Lighthouse didn’t meet the recommended Core Web Vitals thresholds” based on data from the HTTP Archive.

Web performance is more than a single core web vital metric or Lighthouse performance score. What we’re talking about goes way beyond the type of raw data we’re working with.

Web Performance Is More Than Numbers

Speed is often the first thing that comes up when talking about web performance — just how long does a page take to load? This isn’t the worst thing to measure, but we must bear in mind that speed is probably influenced heavily by business KPIs and sales targets. Google released a report in 2018 suggesting that the probability of a bounce increases by 32% as page load time goes from one second to three seconds, and by 123% as it goes from one second to ten seconds. So, we must conclude that converting more sales requires reducing bounce rates. And to reduce bounce rates, we must make our pages load faster.

But what does “load faster” even mean? At some point, we’re physically incapable of making a web page load any faster. Humans — and the servers that connect them — are spread around the globe, and modern internet infrastructure can only deliver so many bytes at a time.

The bottom line is that page load is not a single moment in time. In an article titled “What is speed?” Google explains that a page load event is:

[…] “an experience that no single metric can fully capture. There are multiple moments during the load experience that can affect whether a user perceives it as ‘fast’, and if you just focus solely on one, you might miss bad experiences that happen during the rest of the time.”

The key word here is experience. Real web performance is less about numbers and speed than it is about how we experience page load and page usability as users. And this segues nicely into a discussion of how Google Lighthouse calculates performance scores. (It’s much less about pure speed than you might think.)

How Google Lighthouse Performance Scores Are Calculated

The Google Lighthouse performance score is calculated using a weighted combination of scores based on Core Web Vitals metrics (i.e., Largest Contentful Paint (LCP) and Cumulative Layout Shift (CLS)) and other speed-related metrics (i.e., First Contentful Paint (FCP), Speed Index (SI), and Total Blocking Time (TBT)) that are observable throughout the page load timeline.

This is how the metrics are weighted in the overall score:

Metric                      Weighting (%)
Total Blocking Time         30
Cumulative Layout Shift     25
Largest Contentful Paint    25
First Contentful Paint      10
Speed Index                 10

The weighting assigned to each score gives us insight into how Google prioritizes the different building blocks of a good user experience:

1. A Web Page Should Respond to User Input

The highest weighted metric is Total Blocking Time (TBT), a metric that looks at the total time after the First Contentful Paint (FCP) to help indicate where the main thread may be blocked long enough to prevent speedy responses to user input. The main thread is considered “blocked” any time there’s a JavaScript task running on the main thread for more than 50ms. Minimizing TBT ensures that a web page responds to physical user input (e.g., key presses, mouse clicks, and so on).
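
If you want to see which tasks count against TBT on your own pages, the Long Tasks API (supported in Chromium-based browsers) reports every main-thread task over 50ms. A minimal sketch:

```js
// Log each main-thread task longer than 50ms, the same tasks
// that feed into Lighthouse's Total Blocking Time metric.
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // Only the portion of the task beyond 50ms counts as "blocking"
    const blockingTime = entry.duration - 50;
    console.log(`Long task: ${entry.duration.toFixed(0)}ms ` +
                `(${blockingTime.toFixed(0)}ms of blocking time)`);
  }
});
observer.observe({ entryTypes: ['longtask'] });
```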

2. A Web Page Should Load Useful Content With No Unexpected Visual Shifts

The next most weighted Lighthouse metrics are Largest Contentful Paint (LCP) and Cumulative Layout Shift (CLS). LCP marks the point in the page load timeline when the page’s main content has likely loaded and is therefore useful.

At the point where the main content has likely loaded, you also want to maintain visual stability to ensure that users can use the page and are not affected by unexpected visual shifts (CLS). A good LCP score is anything less than 2.5 seconds (which is a lot higher than we might have thought, given we are often trying to make our websites as fast as possible).

3. A Web Page Should Load Something

The First Contentful Paint (FCP) metric marks the first point in the page load timeline where the user can see something on the screen, and the Speed Index (SI) measures how quickly content is visually displayed during page load over time until the page is “complete”.

Your page is scored based on the speed indices of real websites using performance data from the HTTP Archive. A good FCP score is less than 1.8 seconds and a good SI score is less than 3.4 seconds. Both of these thresholds are higher than you might expect when thinking about speed.

Usability Is Favored Over Raw Speed

Google Lighthouse’s performance scoring is, without a doubt, less about speed and more about usability. Your SI and FCP could be super quick, but if your LCP takes too long to paint, and if CLS is caused by large images or external content taking some time to load and shifting things visually, then your overall performance score will be lower than if your page was a little slower to render the FCP but didn’t cause any CLS. Ultimately, if the page is unresponsive due to JavaScript blocking the main thread for more than 50ms, your performance score will suffer more than if the page was a little slow to paint the FCP.

To understand more about how the weightings of each metric contribute to the final performance score, you can play about with the sliders on the Lighthouse Scoring Calculator, and here’s a rudimentary table demonstrating the effect of skewed individual metric values on the overall performance score, proving that page usability and responsiveness is favored over raw speed.

Description                                 FCP (ms)  SI (ms)  LCP (ms)  TBT (ms)  CLS   Overall Score
Slow to show something on screen            6000      0        0         0         0     90
Slow to load content over time              0         5000     0         0         0     90
Slow to load the largest part of the page   0         0        6000      0         0     76
Visual shifts occurring during page load    0         0        0         0         0.82  76
Page is unresponsive to user input          0         0        0         2000      0     70

The overall Google Lighthouse performance score is calculated by converting each raw metric value into a score from 0 to 100 according to where it falls on its Lighthouse scoring distribution, which is a log-normal distribution derived from the performance metrics of real website performance data from the HTTP Archive. There are two main takeaways from this mathematically overloaded information:

  1. Your Lighthouse performance score is plotted against real website performance data, not in isolation.
  2. Given that the scoring uses log-normal distribution, the relationship between the individual metric values and the overall score is non-linear, meaning you can make substantial improvements to low-performance scores quite easily, but it becomes more difficult to improve an already high score.

Read more about how metric scores are determined, including a visualization of the log-normal distribution curve on developer.chrome.com.
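
To make the weighting step concrete, here is a simplified sketch that skips the log-normal conversion entirely and assumes each metric has already been mapped to a 0–100 score:

```js
// Simplified Lighthouse-style weighting (weights from the table above).
// Real Lighthouse first converts raw metric values to 0-100 scores
// via log-normal curves; that step is omitted here.
const weights = { tbt: 0.30, cls: 0.25, lcp: 0.25, fcp: 0.10, si: 0.10 };

function overallScore(scores) {
  return Math.round(
    Object.entries(weights)
      .reduce((total, [metric, weight]) => total + scores[metric] * weight, 0)
  );
}

// Perfect everywhere except a completely failed TBT: 70 overall,
// matching the last row of the table above (assuming 2,000ms of TBT
// maps to a metric score of zero).
console.log(overallScore({ tbt: 0, cls: 100, lcp: 100, fcp: 100, si: 100 }));
```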

Can We “Trick” Google Lighthouse?

I appreciate Google’s focus on usability over pure speed in the web performance conversation. It urges developers to think less about aiming for raw numbers and more about the real experiences we build. That being said, I’ve wondered whether today in 2024, it’s possible to fool Google Lighthouse into believing that a bad page in terms of usability and usefulness is actually a great one.

I put on my lab coat and science goggles to investigate. All tests were conducted:

  • Using the Chromium Lighthouse plugin,
  • In an incognito window in the Arc browser,
  • Using the “navigation” and “mobile” settings (apart from where described differently),
  • By me, in a lab (i.e., no field data).

That all being said, I fully acknowledge that my controlled test environment contradicts my advice at the top of this post, but the experiment is an interesting ride nonetheless. What I hope you’ll take away from this is that Lighthouse scores are only one piece — and a tiny one at that — of a very large and complex web performance puzzle. And, without field data, I’m not sure any of this matters anyway.

How to Hack FCP and LCP Scores

TL;DR: Show the smallest amount of LCP-qualifying content on load to boost the FCP and LCP scores until the Lighthouse test has likely finished.

FCP marks the first point in the page load timeline where the user can see anything at all on the screen, while LCP marks the point in the page load timeline when the main page content (i.e., the largest text or image element) has likely loaded. A fast LCP helps reassure the user that the page is useful. “Likely” and “useful” are the important words to bear in mind here.

What Counts as an LCP Element

The types of elements on a web page considered by Lighthouse for LCP are:

  • <img> elements,
  • <image> elements inside an <svg> element,
  • <video> elements,
  • An element with a background image loaded using the url() function (and not a CSS gradient), and
  • Block-level elements containing text nodes or other inline-level text elements.

The following elements are excluded from LCP consideration due to the likelihood they do not contain useful content:

  • Elements with zero opacity (invisible to the user),
  • Elements that cover the full viewport (likely to be background elements), and
  • Placeholder images or other images with low entropy (i.e., low informational content, such as a solid-colored image).

However, the notion of an image or text element being useful is completely subjective in this case and generally out of the realm of what machine code can reliably determine. For example, I built a page containing nothing but a <h1> element where, after 10 seconds, JavaScript inserts more descriptive text into the DOM and hides the <h1> element.

Lighthouse considers the heading element to be the LCP element in this experiment. At this point, the page load timeline has finished, but the page’s main content has not loaded, even though Lighthouse thinks it is likely to have loaded within those 10 seconds. Lighthouse still awards us with a perfect score of 100 even if the heading is replaced by a single punctuation mark, such as a full stop, which is even less useful.
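
A minimal page in the spirit of that demo might look like the following. This is a sketch of the idea rather than the actual demo code, and the 10-second delay is simply long enough to outlast a typical Lighthouse trace:

```html
<!-- Sketch of the FCP/LCP hack described above (not the demo's source) -->
<!DOCTYPE html>
<html lang="en">
  <body>
    <!-- The only qualifying element at load, so Lighthouse scores it as the LCP -->
    <h1 id="splash">.</h1>
    <div id="content" hidden></div>
    <script>
      // Swap in the real content long after the Lighthouse trace has ended
      setTimeout(() => {
        document.getElementById('splash').hidden = true;
        const content = document.getElementById('content');
        content.textContent = 'The actual main content, arriving fashionably late.';
        content.hidden = false;
      }, 10000);
    </script>
  </body>
</html>
```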

This test suggests that if you need to load page content via client-side JavaScript, you’ll want to avoid displaying a skeleton loader screen since that requires loading more elements on the page. And since we know the process will take some time — and that we can offload the network request from the main thread to a web worker so it won’t affect the TBT — we can use some arbitrary “splash screen” that contains a minimal viable LCP element (for better FCP scoring). This way, we’re giving Lighthouse the impression that the page is useful to users quicker than it actually is.

All we need to do is include a valid LCP element that contains something that counts as the FCP. While I would never recommend loading your main page content via client-side JavaScript in 2024 (serve static HTML from a CDN instead or build as much of the page as you can on a server), I would definitely not recommend this “hack” for a good user experience, regardless of what the Lighthouse performance score tells you. This approach also won’t earn you any favors with search engines indexing your site, as the robots are unable to discover the main content while it is absent from the DOM.

I also tried this experiment with a variety of random images representing the LCP to make the page even less useful. But given that I used small file sizes — made smaller and converted into “next-gen” image formats using a third-party image API to help with page load speed — it seemed that Lighthouse interpreted the elements as “placeholder images” or images with “low entropy”. As a result, those images were disqualified as LCP elements, which is a good thing and makes the LCP slightly less hackable.

View the demo page and use Chromium DevTools in an incognito window to see the results yourself.

This hack, however, probably won’t hold up in many other use cases. Discord, for example, uses the “splash screen” approach when you hard-refresh the app in the browser, and it receives a sad 29 performance score.

Compared to my DOM-injected demo, the LCP element was calculated as some content behind the splash screen rather than elements contained within the splash screen content itself, given there were one or more large images in the focussed text channel I tested on. One could argue that Lighthouse scores are less important for apps that are behind authentication anyway: they don’t need to be indexed by search engines.

There are likely many other situations where apps serve user-generated content and you might be unable to control the LCP element entirely, particularly regarding images.

For example, if you can control the sizes of all the images on your web pages, you might be able to take advantage of an interesting hack or “optimization” (in very large quotes) to arbitrarily game the system, as was the case of RentPath. In 2021, developers at RentPath managed to improve their Lighthouse performance score by 17 points when increasing the size of image thumbnails on a web page. They convinced Lighthouse to calculate the LCP element as one of the larger thumbnails instead of a Google Map tile on the page, which takes considerably longer to load via JavaScript.

The bottom line is that you can gain higher Lighthouse performance scores if you are aware of your LCP element and in control of it, whether that’s through a hack like RentPath’s or mine or a real-deal improvement. That being said, whilst I’ve described the splash screen approach as a hack in this post, that doesn’t mean this type of experience couldn’t offer a purposeful and joyful experience. Performance and user experience are about understanding what’s happening during page load, and it’s also about intent.

How to Hack CLS Scores

TL;DR: Defer loading content that causes layout shifts until the Lighthouse test has likely finished to make the test think it has enough data. CSS transforms do not negatively impact CLS, except if used in conjunction with new elements added to the DOM.

CLS is measured on a decimal scale; a good score is less than 0.1, and a poor score is greater than 0.25. Lighthouse calculates CLS from the largest burst of unexpected layout shifts that occur during a user’s time on the page based on a combination of the viewport size and the movement of unstable elements in the viewport between two rendered frames. Smaller one-off instances of layout shift may be inconsequential, but a bunch of layout shifts happening one after the other will negatively impact your score.

If you know your page contains annoying layout shifts on load, you can defer them until after the page load event has been completed, thus fooling Lighthouse into thinking there is no CLS. This demo page I created, for example, earns a CLS score of 0.143 even though JavaScript immediately starts adding new text elements to the page, shifting the original content up. By pausing the JavaScript that adds new nodes to the DOM by an arbitrary five seconds with a setTimeout(), Lighthouse doesn’t capture the CLS that takes place.

This other demo page earns a performance score of 100, even though it is arguably less useful and useable than the last page given that the added elements pop in seemingly at random without any user interaction.

Whilst it is possible to defer layout shift events for a page load test, this hack definitely won’t work for field data and user experience over time (which is a more important focal point, as we discussed earlier). If we perform a “time span” test in Lighthouse on the page with deferred layout shifts, Lighthouse will correctly report a non-green CLS score of around 0.186.

If you do want to intentionally create a chaotic experience similar to the demo, you can use CSS animations and transforms to more purposefully pop the content into view on the page. In Google’s guide to CLS, they state that “content that moves gradually and naturally from one position to another can often help the user better understand what’s going on and guide them between state changes” — again, highlighting the importance of user experience in context.

On this next demo page, I’m using CSS transform to scale() the text elements from 0 to 1 and move them around the page. The transforms fail to trigger CLS because the text nodes are already in the DOM when the page loads. That said, I did observe in my testing that if the text nodes are added to the DOM programmatically after the page loads via JavaScript and then animated, Lighthouse will indeed detect CLS and score things accordingly.
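
For reference, here is roughly what that pattern looks like. Because the element already exists in the DOM at load and only its transform changes, no layout shift is recorded; the class name and timings below are my own, not the demo’s:

```html
<style>
  /* Transforms are compositor-only changes: they don't count toward CLS */
  .pop {
    transform: scale(0);
    animation: pop-in 0.5s ease-out 2s forwards;
  }
  @keyframes pop-in {
    to { transform: scale(1); }
  }
</style>
<!-- In the DOM from the start, so animating it registers no layout shift -->
<p class="pop">Surprise! I scaled in without touching the CLS score.</p>
```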

You Can’t Hack a Speed Index Score

The Speed Index score is based on the visual progress of the page as it loads. The quicker your content loads nearer the beginning of the page load timeline, the better.

It is possible to hack the Speed Index into thinking a page load timeline is slower than it actually is. Conversely, there’s no real way to “fake” loading content faster than it does. The only way to make your Speed Index score better is to optimize your web page for loading as much of the page as possible, as soon as possible. Whilst not entirely realistic in the web landscape of 2024 (mainly because it would put designers out of a job), you could go all-in to lower your Speed Index as much as possible by:

  • Delivering static HTML web pages only (no server-side rendering) straight from a CDN,
  • Avoiding images on the page,
  • Minimizing or eliminating CSS, and
  • Preventing JavaScript or any external dependencies from loading.

You Also Can’t (Really) Hack A TBT Score

TBT measures the total time after the FCP where the main thread was blocked by JavaScript tasks for long enough to prevent responses to user input. A good TBT score is anything lower than 200ms.

JavaScript-heavy web applications (such as single-page applications) that perform complex state calculations and DOM manipulation on the client on page load (rather than on the server before sending rendered HTML) are prone to suffering poor TBT scores. In this case, you could probably hack your TBT score by deferring all JavaScript until after the Lighthouse test has finished. That said, you’d need to provide some kind of placeholder content or loading screen to satisfy the FCP and LCP and to inform users that something will happen at some point. Plus, you’d have to go to extra lengths to hack around the front-end framework you’re using. (You don’t want to load a placeholder page that, at some point in the page load timeline, loads a separate React app after an arbitrary amount of time!)
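
A crude sketch of that deferral follows. The placeholder satisfies FCP and LCP while the main thread stays idle, and the bundle path is hypothetical:

```html
<body>
  <!-- Placeholder content that qualifies for FCP/LCP -->
  <h1>Loading your experience…</h1>
  <script>
    // Keep the main thread empty during the test window, then pull in
    // the real app. '/app.bundle.js' is a made-up path for illustration.
    window.addEventListener('load', () => {
      setTimeout(() => {
        const script = document.createElement('script');
        script.src = '/app.bundle.js';
        document.body.appendChild(script);
      }, 6000);
    });
  </script>
</body>
```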

What’s interesting is that while we’re still doing all sorts of fancy things with JavaScript in the client, advances in the modern web ecosystem are helping us all reduce the probability of a less-than-stellar TBT score. Many front-end frameworks, in partnership with modern hosting providers, are capable of rendering pages and processing complex logic on demand without any client-side JavaScript. While eliminating JavaScript on the client is not the goal, we certainly have a lot of options to use a lot less of it, thus minimizing the risk of doing too much computation on the main thread on page load.

Bottom Line: Lighthouse Is Still Just A Rough Guide

Google Lighthouse can’t detect everything that’s wrong with a particular website. Whilst Lighthouse performance scores prioritize page usability in terms of responding to user input, it still can’t detect every terrible usability or accessibility issue in 2024.

In 2019, Manuel Matuzović published an experiment where he intentionally created a terrible page that Lighthouse thought was pretty great. I hypothesized that five years later, Lighthouse might do better; but it doesn’t.

On this final demo page I put together, input events are disabled by CSS and JavaScript, making the page technically unresponsive to user input. After five seconds, JavaScript flips a switch and allows you to click the button. The page still scores 100 for both performance and accessibility.
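
One way to reproduce that behavior, sketched here rather than taken from the demo’s source, is to disable pointer events globally and lift the restriction later; no current Lighthouse audit catches it:

```html
<style>
  /* The entire page ignores clicks, taps, and hovers... */
  body { pointer-events: none; }
</style>
<button onclick="alert('Finally!')">Click me (eventually)</button>
<script>
  // ...until JavaScript flips the switch five seconds later
  setTimeout(() => {
    document.body.style.pointerEvents = 'auto';
  }, 5000);
</script>
```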

You really can’t rely on Lighthouse as a substitute for usability testing and common sense.

Some More Silly Hacks

As with everything in life, there’s always a way to game the system. Here are some more tried and tested guaranteed hacks to make sure your Lighthouse performance score artificially knocks everyone else’s out of the park:

  • Only run Lighthouse tests using the fastest and highest-spec hardware.
  • Make sure your internet connection is the fastest it can be; relocate if you need to.
  • Never use field data, only lab data, collected using the aforementioned fastest and highest-spec hardware and super-speed internet connection.
  • Rerun the tests in the lab using different conditions and all the special code hacks I described in this post until you get the result(s) you want to impress your friends, colleagues, and random people on the internet.

Note: The best way to learn about web performance and how to optimize your websites is to do the complete opposite of everything we’ve covered in this article all of the time. And finally, to seriously level up your performance skills, use an application monitoring tool like Sentry. Think of Lighthouse as the canary and Sentry as the real-deal production-data-capturing, lean, mean, web vitals machine.

And finally-finally, here’s the link to the full demo site for educational purposes.

Where to Download Blender Character Models for Free

When delving into the world of Blender character models, it’s essential to understand the vast array of options available for download. From detailed humanoid figures to fantastical creatures, the diversity of free models can cater to a wide range of projects and preferences. Blender character modeling offers an exciting avenue for 3D designers to explore their creativity and enhance their projects with ready-to-use assets.

As a seasoned 3D designer with over 8 years of experience, I understand the significance of Blender character modeling. Blender serves as a versatile tool for creating diverse and intricate character models with the latest version being 4.1.1.

Benefits of Using Free Character Models for Blender

Using free character models for Blender can save time and effort in creating unique designs. They provide a starting point for projects and can inspire creativity in 3D modeling workflows. Free models also help students learn and practice their skills without the need to create everything from scratch.

Using Free Character Models Effectively

Here are some ways to make the most out of the free character models you download:

  • Customization: Use Blender’s tools to modify the models, adjusting textures, poses, and other features to fit your project’s needs.
  • Learning Tool: Study how these models are constructed to improve your own modeling skills. This is especially beneficial for students and beginners.
  • Time-Saving: Free models can save you a lot of time, allowing you to focus on other aspects of your project like animation or scene composition.

Accessing free Blender character models from platforms like BlenderKit, TurboSquid, CGTrader, Free3D, and Blend Swap can significantly enhance your 3D modeling projects. These resources offer a variety of high-quality models that can save you time and help you learn new techniques. Dive into these sites and discover how they can benefit your next project!

  • TurboSquid
  • Blend Swap
  • Sketchfab
  • CGTrader
  • Free3D
  • Clara.io
  • BlenderKit
  • RenderHub
  • 3DExport
  • 3D Warehouse
  • Open3DModel
  • Cadnav
  • Blender Market
  • Blender Studio
  • Mixamo
  • OpenGameArt
  • NASA’s 3D Resources
  • Daz 3D

The post Where to Download Blender Character Models for Free appeared first on CSS Author.

Chris’ Corner: Let’s Look at Type!

Dan Mall has my favorite post on picking a typeface. I’m no master typographer, but I know enough that I don’t want to be talked to like an absolute beginner where you teach me what a serif is. Dan gets into more realistic decision making steps, like intentionally not picking something ultra popular, admitting that you have to be around a lot of type to make good type decisions, and that ultimately choosing is akin to improvising in jazz: it’s just gotta feel right.

If you are a beginner, or really just like type, you’d do well carving out half an hour to watch the 6 parts of Practicing Typography Basics from Tim Brown, who sounds like he’s at absolute zen at all times. Each of these videos only has a few thousand views, which feels like a damn shame to me as they are super good and hit all the most important stuff about typography.

Now let’s have more fun and just look at some actual typefaces I’ve bookmarked lately.

MD IO

I just love this so much it’s one of those typefaces that make me want to find a project just to use it on.

Jgs

Jgs Font glyphs can be combined from one character to another, from one line to another. Thus from single characters it is possible to draw continuous lines, frames and patterns.

Nudica

The pricing atipo foundry does for their fonts (“pay what you want”) is awfully generous.

mononoki

a font for programming and code review

I’ve got this on my list of potential fonts to add to CodePen when I get to doing another round of that.

F.C. Variable

An exploration by Rob en Robin about using the axes of variable fonts to control illustrations. Wild!


Oh and kinda just for myself, I want to remember two fonts Dan mentioned. He said he doesn’t pick these as they are almost too popular, but I don’t know them well and that popularity kinda intrigues me honestly.

Two of the most popular typefaces on Typewolf are Grilli Type’s GT America and Lineto’s Circular. You can’t go wrong with those. They look great and they won’t offend anyone.

Enhancing Chatbot Effectiveness with RAG Models and Redis Cache: A Strategic Approach for Contextual Conversation Management

Organizations globally are leveraging the capabilities of Large Language Models (LLMs) to enhance their chatbot functionalities. These advanced chatbots are envisioned not just as tools for basic interaction but as sophisticated systems capable of intelligently accessing and processing a diverse array of internal organizational assets. These assets include detailed knowledge bases, frequently asked questions (FAQs), Confluence pages, and a myriad of other organizational documents and communications. 

This strategy is aimed at tapping into the rich vein of internal knowledge, ensuring more accurate, relevant, and secure interactions. However, this ambitious integration faces significant hurdles, notably in the realms of data security, privacy, and the avoidance of erroneous or "hallucinated" information, which are common challenges in AI-driven systems. Moreover, the practical difficulties of retraining expansive LLMs, considering the associated high costs and computational requirements, further complicate the situation. This article delves into a strategic solution to these challenges: the implementation of Retrieval-Augmented Generation (RAG) models in conjunction with LLMs, complemented by the innovative use of session-based context management through Redis cache.
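
To ground the session-management half of that idea, here is a minimal Node.js sketch using the node-redis client. The retrieval step is a stub, since the actual RAG pipeline depends on your embedding model and vector store; every name below is illustrative:

```js
import { createClient } from 'redis';

const redis = createClient({ url: 'redis://localhost:6379' });
await redis.connect();

// Stub for the RAG retrieval step: a real system would embed the query
// and search a vector store of internal documents (KBs, FAQs, Confluence).
async function retrieveRelevantDocs(query) {
  return [`[stub] internal knowledge-base snippet relevant to: ${query}`];
}

async function buildPrompt(sessionId, userMessage) {
  const key = `chat:${sessionId}`;

  // Append the new user turn; expire idle sessions after one hour
  await redis.rPush(key, JSON.stringify({ role: 'user', content: userMessage }));
  await redis.expire(key, 3600);

  // Keep only the last 10 turns so the prompt stays within context limits
  const history = (await redis.lRange(key, -10, -1)).map((turn) => JSON.parse(turn));
  const docs = await retrieveRelevantDocs(userMessage);

  // The LLM sees retrieved knowledge plus recent conversational context,
  // which grounds answers without retraining the model
  return [
    { role: 'system', content: `Answer using these sources:\n${docs.join('\n')}` },
    ...history,
  ];
}

console.log(await buildPrompt('session-42', 'How do I reset my VPN token?'));
```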

Unleashing the Power of Generative AI: A Game-Changer for Next-Generation Recommender Systems

Recommender systems have become indispensable tools for users seeking relevant and personalized content in today's information-saturated landscape. Generative AI, a rapidly advancing subfield of artificial intelligence, holds the potential to revolutionize recommender systems by overcoming their limitations and enhancing their capabilities. This article delves into the various ways generative AI can contribute to more efficient, versatile, and accurate recommender systems.

1. Background: Generative AI and Recommender Systems

Generative AI models, including Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), excel at generating novel, high-quality data by learning from existing samples. Their ability to create new data can significantly benefit recommender systems, which rely on data to understand user preferences and make accurate suggestions. 

How To Use CDN in Your Website

A CDN's mission involves virtually shortening the physical distance to improve site rendering speed and performance. 

Physical Distance? Yes, you read it right.
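
As a preview of what using a CDN looks like in practice, the snippet below pulls a library from a public CDN rather than your own origin, so the browser fetches it from an edge node physically close to the visitor. The jsDelivr URL follows its standard npm pattern; verify the exact path and version for your own project:

```html
<!-- Served from jsDelivr's edge network instead of your origin server -->
<script src="https://cdn.jsdelivr.net/npm/lodash@4.17.21/lodash.min.js"></script>
<script>
  // The library is now available as the global `_`
  console.log(_.kebabCase('Physical Distance Shortened'));
</script>
```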