How To Harness Mouse Interaction Data For Practical Machine Learning Solutions

Mouse data is a subcategory of interaction data, a broad family of data about users generated as the immediate result of human interaction with computers. Its siblings from the same data family include logs of key presses or page visits. Businesses commonly rely on interaction data, including mouse data, to gather insights about their target audience. Unlike data that you could obtain more explicitly, say via a survey, the advantage of interaction data is that it describes the actual behavior of real people.

Collecting interaction data is completely unobtrusive since it can be obtained even as users go about their daily lives as usual, meaning it is a quantitative data source that scales very well. Once you start collecting it continuously as part of regular operation, fresh, up-to-date data about users is always at your fingertips, potentially from your entire user base, without them even needing to know about it. Having data on specific users means that you can cater to their needs more accurately.

Of course, mouse data has its limitations. It simply cannot be obtained from people using touchscreens or those who rely on assistive tech. But if anything, that should not discourage us from using mouse data. It just illustrates that we should look for alternative methods that cater to the different ways that people interact with software. Among these, the mouse just happens to be very common.

When using the mouse, the mouse pointer is the de facto conduit for the user’s intent in a visual user interface. The mouse pointer is basically an extension of your arm that lets you interact with things in a virtual space that you cannot directly touch. Because of this, mouse interactions tend to be data-intensive. Even the simple mouse action of moving the pointer to an area and clicking it can yield a significant amount of data.

Mouse data is granular, even when compared with other sources of interaction data, such as the history of visited pages. However, with machine learning, it is possible to investigate jumbles of complicated data and uncover a variety of complex behavioral patterns. Mouse data can reveal more about the user holding the mouse without the user needing to explicitly provide any additional information.

For starters, let us venture into what kind of information can be obtained by processing mouse interaction data.

What Are Mouse Dynamics?

Mouse dynamics refer to the features that can be extracted from raw mouse data to describe the user’s operation of a mouse. Mouse data by itself corresponds with the simple mechanics of mouse controls. It consists of mouse events: the X and Y coordinates of the cursor on the screen, mouse button presses, and scrolling, each with a timestamp. Despite the innate simplicity of the mouse events themselves, the mouse dynamics that use them as building blocks can capture the user’s behavior from a diverse and emergently complex variety of perspectives.
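
To make that concrete, here is a minimal sketch of what a raw mouse event log might look like (in Python; the field names are illustrative assumptions, and the exact schema depends on how you capture events):

```python
# Illustrative raw mouse event records: cursor coordinates, button presses,
# and scrolling, each with a timestamp (field names are hypothetical).
mouse_events = [
    {"t": 0.000, "type": "move",   "x": 412, "y": 310},
    {"t": 0.016, "type": "move",   "x": 418, "y": 306},
    {"t": 0.250, "type": "down",   "x": 430, "y": 298, "button": "left"},
    {"t": 0.345, "type": "up",     "x": 430, "y": 298, "button": "left"},
    {"t": 1.100, "type": "scroll", "x": 430, "y": 298, "delta_y": -120},
]
```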

If you are concerned about user privacy, as well you should be, mouse dynamics are also your friend. For the calculation of mouse dynamics to work, raw mouse data does not need to inherently contain any details about the actual meaning of the interaction. Without the context of what the user saw as they moved their pointer around and clicked, the data is quite safe and harmless.

Some examples of mouse dynamics include measuring the velocity and the acceleration at which the mouse cursor is moving or describing how direct or jittery the mouse trajectories are. Another example is whether the user presses and releases the primary mouse button quickly or whether there is a longer pause before they release their press. Over twenty base measures can be identified across four categories: temporal, spatial, spatial-temporal, and performance. Features do not need to be just metrics either; other approaches use a time series of mouse events.

Temporal mouse dynamics:

  • Movement duration: The time between two clicks;
  • Response time: The time it takes to click something in response to a stimulus (e.g., from the moment when a page is displayed);
  • Initiation time: The time it takes from an initial stimulus for the cursor to start moving;
  • Pause time: The time measuring the cursor’s period of idleness.

Spatial mouse dynamics:

  • Distance: Length of the path traversed on the screen;
  • Straightness: The ratio between the traversed path and the optimal direct path;
  • Path deviation: Perpendicular distance of the traversed path from the optimal path;
  • Path crossing: Counted instances of the traversed and optimal path intersecting;
  • Jitter: The ratio of the traversed path length to its smoothed version;
  • Angle: The direction of movement;
  • Flips: Counted instances of change in direction;
  • Curvature: Change in angle over distance;
  • Inflection points: Counted instances of change in curvature.

Spatial-temporal mouse dynamics:

  • Velocity: Change of distance over time;
  • Acceleration: Change of velocity over time;
  • Jerk: Change of acceleration over time;
  • Snap: Change in jerk over time;
  • Angular velocity: Change in angle over time.

Performance mouse dynamics:

  • Clicks: The number of mouse button events pressing down or up;
  • Hold time: Time between mouse down and up events;
  • Click error: The distance between the clicked point and the correct user task solution;
  • Time to click: Time between the hover event on the clicked point and the click event;
  • Scroll: Distance scrolled on the screen.

Note: For detailed coverage of varied mouse dynamics and their extraction, see the paper “Is mouse dynamics information credible for user behavior research? An empirical investigation.”

The spatial angular measures cited above are a good example of how the calculation of specific mouse dynamics can work. The direction angle of the movements between points A and B is the angle between the vector AB and the horizontal X axis. Then, the curvature angle in a sequence of points ABC is the angle between vectors AB and BC. Curvature distance can be defined as the ratio of the distance between points A and C and the perpendicular distance between point B and line AC. (Definitions sourced from the paper “An efficient user verification system via mouse movements.”)
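
Translated into code, those three definitions might look like this (a minimal sketch following the definitions above; the function names are mine, and points are (x, y) tuples):

```python
import math

def direction_angle(a, b):
    """Angle between vector AB and the horizontal X axis (radians)."""
    return math.atan2(b[1] - a[1], b[0] - a[0])

def curvature_angle(a, b, c):
    """Angle between vectors AB and BC (radians)."""
    abx, aby = b[0] - a[0], b[1] - a[1]
    bcx, bcy = c[0] - b[0], c[1] - b[1]
    cos = (abx * bcx + aby * bcy) / (math.hypot(abx, aby) * math.hypot(bcx, bcy))
    return math.acos(max(-1.0, min(1.0, cos)))  # clamp against rounding error

def curvature_distance(a, b, c):
    """Ratio of the distance AC to the perpendicular distance of B from line AC."""
    ac = math.hypot(c[0] - a[0], c[1] - a[1])
    # Perpendicular distance of B from line AC: twice the triangle area over the base AC.
    area2 = abs((c[0] - a[0]) * (b[1] - a[1]) - (c[1] - a[1]) * (b[0] - a[0]))
    return ac / (area2 / ac)  # undefined for collinear points (division by zero)

print(math.degrees(curvature_angle((0, 0), (5, 0), (5, 5))))  # 90.0
```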

Even individual features (e.g., mouse velocity by itself) can be delved into deeper. For example, on pages with a lot of scrolling, horizontal mouse velocity along the X-axis may be more indicative of something capturing the user’s attention than velocity calculated from direct point-to-point (Euclidean) distance in the screen's 2D space. The maximum velocity may be a good indicator of anomalies, such as user frustration, while the mean or median may tell you more about the user as a person.
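
As a sketch of that difference, here is one way to compute both velocity variants, assuming a chronological list of (t, x, y) samples:

```python
import math
import statistics

def velocities(samples):
    """Per-segment Euclidean and horizontal (X-axis only) velocities.

    `samples` is assumed to be a chronological list of (t, x, y) tuples.
    """
    euclidean, horizontal = [], []
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        dt = t1 - t0
        if dt > 0:  # skip duplicate timestamps
            euclidean.append(math.hypot(x1 - x0, y1 - y0) / dt)
            horizontal.append(abs(x1 - x0) / dt)
    return euclidean, horizontal

euclidean, horizontal = velocities([(0.00, 10, 10), (0.02, 18, 12), (0.04, 30, 13)])
print(max(euclidean), statistics.mean(euclidean), statistics.median(horizontal))
```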

From Data To Tangible Value

The introduction of mouse dynamics above, of course, is an oversimplification for illustrative purposes. Just by looking at the physical and geometrical measurements of users’ mouse trajectories, you cannot yet tell much about the user. That is the job of the machine learning algorithm. Even features that may seem intuitively useful to you as a human (see examples cited at the end of the previous section) can prove to be of low or zero value for a machine-learning algorithm.

Meanwhile, a deceptively generic or simplistic feature may turn out to be unexpectedly useful. This is why it is important to couple broad feature generation with a good feature selection method, narrowing the dimensionality of the model down to the mouse dynamics that help you achieve good accuracy without overfitting. Some feature selection techniques are embedded directly into machine learning methods (e.g., LASSO, decision trees), while others can be used as a preliminary filter (e.g., ranking features by significance assessed via a statistical test).
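
Here is a sketch of both flavors with scikit-learn; the data is a synthetic stand-in for a real mouse-dynamics feature matrix:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 24))    # stand-in: 24 mouse-dynamics features
y = rng.integers(0, 2, size=500)  # stand-in: a binary label

# Preliminary filter: rank features by an ANOVA F-test and keep the top 10.
X_top10 = SelectKBest(f_classif, k=10).fit_transform(X, y)

# Embedded selection: LASSO shrinks the coefficients of unhelpful features to zero.
lasso = LassoCV(cv=5).fit(StandardScaler().fit_transform(X), y)
surviving = np.flatnonzero(lasso.coef_)  # indices of features that survived

print(X_top10.shape, surviving)
```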

As we can see, there is a sequential process to transforming mouse data into value: raw mouse data is turned into mouse dynamics, mouse dynamics feed a well-tuned machine learning model, and the model’s predictions power an applicable solution that generates value for you and your organization. This can be visualized as a pipeline: mouse data → mouse dynamics → machine learning model → predictions → applicable solution.

Machine Learning Applications Of Mouse Dynamics

To set the stage, we must realize that companies aren’t really known for letting go of their competitive advantage by divulging the ins and outs of what they do with the data available to them. This is especially true when it comes to tech giants with access to potentially some of the most interesting datasets on the planet (including mouse interaction data), such as Google, Amazon, Apple, Meta, or Microsoft. Still, recording mouse data is known to be a common practice.

With a bit of grit, you can find some striking examples of the use of mouse dynamics, not to mention a surprising versatility in techniques. For instance, have you ever visited an e-commerce site just to see it recommend something specific to you, such as a gendered line of cosmetics — all the while, you never submitted any information about your sex or gender anywhere explicitly?

Mouse data transcends its obvious applications, such as replaying the user’s session and highlighting which visual elements people interact with. A surprising number of internal and external factors that shape our behavior are reflected in the data as subtle indicators and can thus be predicted.

Let’s take a look at some further applications, starting with some simple categorization of users.

Example 1: Biological Sex Prediction

For businesses, knowing users well allows them to provide accurate recommendations and personalization in all sorts of ways, opening the gates for higher customer satisfaction, retention, and average order value. By itself, the prediction of user characteristics, such as gender, isn’t anything new. The reason for basing it on mouse dynamics, however, is that mouse data is generated virtually by the truckload. With that, you will have enough data to start making accurate predictions very early.

If you waited for higher-level interactions, such as which products the user visited or what they typed into the search bar, by the time you’d have enough data, the user may have already placed an order or, even worse, left unsatisfied.

The choice of machine learning algorithm matters for a problem. In one published scientific paper, six different models were compared for the prediction of biological sex using mouse dynamics. The dataset for the development and evaluation of the models provides mouse dynamics from participants moving the cursor in a broad range of trajectory lengths and directions. Among the evaluated models — Logistic regression, Support vector machine, Random forest, XGBoost, CatBoost, and LightGBM — CatBoost achieved the best F1 score.
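
The comparison itself is straightforward to set up. Below is a sketch of that kind of benchmark (not a reproduction of the paper’s experiment); all six libraries expose scikit-learn-compatible classifiers, and the dataset here is a synthetic stand-in:

```python
from catboost import CatBoostClassifier
from lightgbm import LGBMClassifier
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from xgboost import XGBClassifier

# Stand-in for a real mouse-dynamics feature matrix with binary labels.
X, y = make_classification(n_samples=1000, n_features=24, random_state=0)

models = {
    "Logistic regression": LogisticRegression(max_iter=1000),
    "Support vector machine": SVC(),
    "Random forest": RandomForestClassifier(),
    "XGBoost": XGBClassifier(),
    "CatBoost": CatBoostClassifier(verbose=0),
    "LightGBM": LGBMClassifier(),
}
for name, model in models.items():
    f1 = cross_val_score(model, X, y, scoring="f1", cv=5).mean()
    print(f"{name}: mean F1 = {f1:.3f}")
```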

Putting people into boxes is far from everything that can be done with mouse dynamics, though. Let’s take a look at a potentially more exciting use case — trying to predict the future.

Example 2: Purchase Prediction

Another e-commerce application predicts whether the user has the intent to make a purchase or even whether they are likely to become a repeat customer. Utilizing such predictions, businesses can adapt personalized sales and marketing tactics to be more effective and efficient, for example, by catering more to likely purchasers to increase their value — or the opposite, which is investigating unlikely purchasers to find ways to turn them into likely ones.

Interestingly, a paper dedicated to the prediction of repeat customership reports that when a gradient boosting model is validated on data obtained from a completely different online store than where it was trained and tuned, it still achieves respectable performance in the prediction of repeat purchases with a combination of mouse dynamics and other interaction and non-interaction features.
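
The evaluation setup behind such a claim is easy to sketch: train and tune on one store’s sessions, then score the untouched model on another store’s sessions. The snippet below illustrates the idea with synthetic stand-ins (the paper’s own data, features, and tuning are not reproduced here):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import f1_score

# Stand-ins for session features (mouse dynamics plus other interaction and
# non-interaction features) from two different online stores.
X_store_a, y_store_a = make_classification(n_samples=2000, n_features=30, random_state=1)
X_store_b, y_store_b = make_classification(n_samples=500, n_features=30, random_state=2)

# Train on store A only, then test how well the model transfers to unseen store B.
model = GradientBoostingClassifier().fit(X_store_a, y_store_a)
print("Cross-store F1:", f1_score(y_store_b, model.predict(X_store_b)))
```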

It is plausible that, though machine-learning applications tend to be highly domain-specific, some models could be carried over between domains as a starting seed, especially while you are still waiting for user data to materialize.

Additional Examples

Applications of mouse dynamics are a lot more far-reaching than just the domain of e-commerce. To give you some ideas, other variables that have been predicted with mouse dynamics include the user’s identity (the basis of the biometric authentication scenario discussed below) and emotional states, such as frustration.

The Mouse-Shaped Caveat

When you think about mouse dynamics in-depth, some questions will invariably start to emerge. The user isn’t the only variable that could determine what mouse data looks like. What about the mouse itself?

Many brands and models are available for purchase to people worldwide. Their technical specifications deviate in attributes such as resolution (measured in DPI or, more accurately, CPI), weight, polling rate, and tracking speed. Some mouse devices have multiple profile settings that can be swapped between at will. For instance, the common CPI of an office mouse is around 800-1,600, while a gaming mouse can go to extremes, from 100 to 42,000. To complicate things further, the operating system has its own mouse settings, such as sensitivity and acceleration. Even the surface beneath the mouse can differ in its friction and optical properties.

Can we be sure that mouse data is reliable, given that basically everyone potentially works under different mouse conditions?

For the sake of argument, let’s say that as a part of a web app you’re developing, you implement biometric authentication with mouse dynamics as a security feature. You sell it by telling customers that this form of auth is capable of catching attackers who try to meddle in a tab that somebody in the customer’s organization left open on an unlocked computer. Recognizing the intruder, the app can sign the user out of the account and trigger a warning sent to the company. Kicking out the real authorized user and sounding the alarm just because somebody bought a new mouse would not be a good look. Recalibration to the new mouse would also produce friction. Some people like to change their mouse sensitivity or use different computers quite often, so frequent calibration could potentially present a critical flaw.

We found that up until now, there was barely anything written about whether or how mouse configuration affects mouse dynamics. By mouse configuration, we refer to all properties of the environment that could impact mouse behavior, including both hardware and software.

Papers and articles about mouse dynamics barely mention the mouse devices and settings involved in development and testing. This could be seen as concerning. Though, hypothetically, there might not be an actual reason for concern, that is exactly the problem: there was just not enough information to make a judgment on whether mouse configuration matters or not. This question is what drove the study conducted by UXtweak Research (as covered in the peer-reviewed paper in Computer Standards & Interfaces).

The quick answer? Mouse configuration does detrimentally affect mouse dynamics. How?

  1. It may cause the majority of mouse dynamics values to change in a statistically significant way between different mouse configurations.
  2. It may lower the prediction performance of a machine learning model if it was trained on a different set of mouse configurations than it was tested on.

It is not automatically guaranteed that prediction based on mouse dynamics will work equally well for people on different devices. Even the same person making the exact same mouse movements does not necessarily produce the same mouse dynamics if you give them a different mouse or change their settings.

We cannot say for certain how big an impact mouse configuration can have in a specific instance. For the problem that you are trying to solve (specific domain, machine learning model, audience), the impact could be big, or it could be negligible. But to be sure, it should definitely receive attention. After all, even a deceptively small percentage of improvement in prediction performance can translate to thousands of satisfied users.

Tackling Mouse Device Variability

Knowledge is half the battle, and so it is also with the realization that mouse configuration is not something that can be just ignored when working with mouse dynamics. You can perform tests to evaluate the size of the effect that mouse configuration has on your model’s performance. If, in some configurations, the number of false positives and false negatives rises above levels that you are willing to tolerate, you can start looking for potential solutions by tweaking your prediction model.
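
One simple way to run such a test is to compare the distribution of a given mouse dynamic across two configurations with a non-parametric significance test. This is a sketch with synthetic values; in practice, you would use per-trial measurements collected under each configuration:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
# Stand-ins for one mouse dynamic (say, mean velocity per trial) measured
# under two mouse configurations, e.g., an office mouse vs. a gaming mouse.
velocity_config_a = rng.normal(1200.0, 150.0, size=100)
velocity_config_b = rng.normal(1500.0, 180.0, size=100)

stat, p = mannwhitneyu(velocity_config_a, velocity_config_b)
print(f"U = {stat:.1f}, p = {p:.2g}")
if p < 0.05:
    print("The feature differs significantly between the two configurations.")
```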

Because of the potential variability in real-world conditions, differences between mouse configurations can be seen as a concern. Of course, if you can rely on controlled conditions (such as in apps only accessible via standardized kiosks or company-issued computers and mouse devices where all system mouse settings are locked), you can avoid the concern altogether, provided that the training dataset uses the same mouse configuration as the one used in production. Otherwise, this may be something new for you to optimize.

Some predicted variables can be observed repeatedly from the same user (e.g., emotional state or intent to make a purchase). In the case of these variables, to mitigate the problem of different users utilizing different mouse configurations, it would be possible to build personalized models trained and tuned on the data from the individual user and the mouse configurations they normally use. You also could try to normalize mouse dynamics by adjusting them to the specific user’s “normal” mouse behavior. The challenge is how to accurately establish normality. Note that this still doesn’t address situations when the user changes their mouse or settings.
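
A per-user z-score normalization could be sketched like this, assuming you have a history of the user’s past feature vectors to define “normal” (which, as noted, is the hard part):

```python
import numpy as np

def normalize_to_user(current, baseline):
    """Z-score a feature vector against the user's own historical baseline.

    `baseline` is a 2D array of the user's past feature vectors, collected
    on the mouse configurations they normally use; `current` is one vector.
    """
    mean = baseline.mean(axis=0)
    std = baseline.std(axis=0)
    std[std == 0] = 1.0  # guard against constant features
    return (np.asarray(current) - mean) / std

# Stand-in history: rows of (mean velocity, straightness) per past session.
history = np.array([[900.0, 0.82], [1100.0, 0.79], [1000.0, 0.85]])
print(normalize_to_user([1250.0, 0.70], history))
```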

Where To Take It From Here

So, we arrive at the point where we discuss the next steps for anyone who can’t wait to apply mouse dynamics to machine learning purposes of their own. For web-based solutions, you can start by looking at MouseEvents in JavaScript, which is how you’ll obtain the elementary mouse data necessary.

Mouse events will serve as the base for calculating mouse dynamics and the features in your model. Pick any that you think could be relevant to the problem you are trying to solve (see our list above, but don’t be afraid to design your own features). Don’t forget that you can also combine mouse dynamics with domain and application-specific features.

Problem awareness is key to designing the right solutions. Is your prediction problem within-subject or between-subject? A classification or a regression? Should you use the same model for your whole audience, or could it be more effective to tailor separate models to the specifics of different user segments?

For example, the mouse behavior of freshly registered users may differ from that of regular users, so you may want to divide them up. From there, you can consider the suitable machine/deep learning algorithm. For binary classification, a Support vector machine, Logistic regression, or a Random Forest could do the job. To delve into more complex patterns, you may wish to reach for a Neural network.

Of course, the best way to uncover which machine/deep learning algorithm works best for your problem is to experiment. Most importantly, don’t give up if you don’t succeed at first. You may need to go back to the drawing board a few times to reconsider your feature engineering, expand your dataset, validate your data, or tune the hyperparameters.

Conclusion

With the ongoing trend of more and more online traffic coming from mobile devices, some futurist voices in tech might have you believe that “the computer mouse is dead”. Nevertheless, reports of its death are greatly exaggerated. One look at statistics reveals that while mobile devices are immensely popular, the desktop computer and the computer mouse are not going anywhere anytime soon.

Classifying users as either mobile or desktop is a false dichotomy. Some people prefer the desktop computer for tasks that call for exact controls while interacting with complex information. Working, trading, shopping, or managing finances — all, coincidentally, are tasks with a good amount of importance in people’s lives.

To wrap things up, mouse data can be a powerful information source for improving digital products and services and gaining a head start on the competition. Advantageously, data for mouse dynamics does not need to involve anything sensitive or in breach of the user’s privacy. Even without identifying the person, machine learning with mouse dynamics can shine a light on the user, letting you serve them more fitting personalization and recommendations, even when other data is sparse. Other uses include biometrics and analytics.

Do not underestimate the impact of differences in mouse devices and settings, and you may arrive at useful and innovative mouse-dynamics-driven solutions to help you stand out.

AI Is Expected to Transform the Role of Controllers & Analysts

AI will automate many routine tasks in accounting, and the role of financial controllers and analysts will change but not be replaced, say Manoj Kumar Vandanapu and Sandeep Kumar.


In the latest AGI Talks, two renowned finance experts share their insights by answering 10 questions about Artificial Intelligence (AI) and Artificial General Intelligence (AGI).

About Manoj Kumar Vandanapu & Sandeep Kumar

Manoj Kumar Vandanapu and Sandeep Kumar are experienced experts in the fields of finance and controlling.


Manoj, serving as a Corporate Finance Controller for a multinational investment bank and an independent researcher in Illinois, is recognized for integrating finance and technology. With a background in accounting combined with a passion for AI and Machine Learning, Manoj’s career focuses on driving financial practices forward. His leadership in deploying innovative solutions within the investment banking sector has markedly enhanced operational efficiencies and established new industry benchmarks. As a researcher, peer reviewer, and adjudicator, he continues to play a critical role in the evolution of financial technologies, mentoring emerging professionals along the way.


Sandeep is an expert in SAP AI and Data Analytics with more than 20 years of experience. He has served in leadership roles implementing and operating multi-million-dollar, multi-year SAP ERP projects and has utilized broad cross-functional business and technology know-how in the fields of systems architecture, data engineering, AI, and analytics.

AGI Talks with Manoj and Sandeep

In our interview, Manoj and Sandeep share insights on AI’s impact on finance and accounting:

1. What is your preferred definition of AGI?

Manoj & Sandeep: From a finance and accounting perspective, AGI can be defined as an AI system that possesses the ability to understand, learn, and apply knowledge across a wide range of financial and accounting tasks at a level of competence comparable to or surpassing that of a human expert. This includes abilities such as conducting financial analysis, making investment decisions, managing risk, and interpreting complex tax and accounting laws autonomously.

2. and ASI?

ASI refers to a hypothetical AI system that not only matches but significantly surpasses human intelligence across all fields, including finance and accounting. In the finance and accounting domains, super-intelligent AI could potentially revolutionize insight generation in the financial markets, decision making based on financial data, audit processes, and strategic financial planning and forecasting by processing and analyzing data at a scale and speed unattainable by human beings.

3. In what ways do you believe AI will most significantly impact society in the next decade?

In the next decade, AI is poised to significantly impact society by automating routine tasks, enhancing decision-making processes, and personalizing services. In finance and accounting, this could translate into more efficient operations, improved accuracy in financial reporting, and personalized financial advice. However, it may also lead to job displacement in roles dominated by mundane, repetitive tasks like financial reconciliations, data analysis and consolidation, and operational reporting, and it will require a shift in skills to enhance and support AI utilization in the finance domain.

4. What do you think is the biggest benefit associated with AI?

The biggest benefit of AI, particularly in finance and accounting, is its potential to enhance efficiency and accuracy. By automating repetitive and time-consuming tasks, AI can free up human professionals to focus more on strategic and analytical tasks, potentially leading to more insightful financial decisions and innovations.

5. and the biggest risk of AI?

The biggest risk associated with AI is the potential for exacerbating inequalities and causing job displacement. As artificial intelligence systems become more capable, there is a risk that they could replace a significant number of jobs in finance and accounting, leading to economic and social challenges. However, at the same time, it will also open doors to new opportunities and roles to optimally enhance the design and utilization of AI capabilities. Additionally, the concentration of AI capabilities in the hands of a few could increase wealth and power disparities.

6. In your opinion, will AI have a net positive impact on society?

Whether AI will have a net positive impact on society depends on how its development and deployment is managed. If governed ethically and inclusively, AI has the potential to contribute positively by driving economic growth, improving financial services, and enhancing productivity. However, addressing the challenges of equity, privacy, and employment in the initial stage will be crucial.

7. Where are the limits of human control over AI systems?

The limits of human control over AI systems are defined by the complexity of the systems and the unpredictability of their learning processes. As AI systems, particularly those based on GenAI, evolve based on their interactions and data inputs, ensuring they adhere to human values and ethics becomes increasingly challenging, especially for complex and autonomous systems in fields such as finance, healthcare, and law.

8. Do you think AI can ever truly understand human values or possess consciousness?

While AI can be programmed to mimic certain aspects of human ethics and decision-making, genuinely comprehending the depth of human values or achieving consciousness involves subjective experiences and emotions that are currently beyond AI’s capabilities. However, we are hopeful that it will evolve with time.

9. Do you think your jobs as controllers and analysts will ever be replaced by AI?

While AI is set to automate certain aspects of the financial controller’s or advanced analytics role, especially the more routine tasks, it is less likely to replace the role entirely. Instead, AI is expected to transform the role, elevating the importance of strategic oversight, decision-making, and technological proficiency. Financial controllers and analytics experts will adapt and support these changes by acquiring new skills. Learning to leverage AI effectively can enhance their value and keep them indispensable to their organizations.

10. We will reach AGI by the year?

Predicting the timeline for achieving AGI is highly speculative, with estimates ranging from a decade (i.e., 2035) to a few more decades. Factors such as breakthroughs in computational power, algorithmic efficiency, and data availability play crucial roles. From a finance and accounting perspective, reaching AGI would mean developing systems that can fully understand and innovate within these domains autonomously, a milestone that is very much possible but still uncertain and dependent on numerous technological and ethical considerations.

Using AI For Neurodiversity And Building Inclusive Tools

In 1998, Judy Singer, an Australian sociologist working on biodiversity, coined the term “neurodiversity.” It means every individual is unique, but sometimes this uniqueness is considered a deficit in the eyes of neuro-typicals because it is uncommon. Neurodiversity, however, stands for the inclusion of these unique ways of thinking, behaving, and learning.

Humans have an innate ability to classify things and make them simple to understand, so neurodivergence is classified as something different, making it much harder to accept as normal.

“Why not propose that just as biodiversity is essential to ecosystem stability, so neurodiversity may be essential for cultural stability?”

— Judy Singer

Culture is more abstract in the context of biodiversity; it has to do with values, thoughts, expectations, roles, customs, social acceptance, and so on, and that is where things get tricky.

Discoveries and inventions are driven by personal motivation. Judy Singer started exploring the concept of neurodiversity because her daughter was diagnosed with autism. Autistic individuals are often socially awkward but very passionate about particular things in their lives. Like Judy, we have a moral obligation as designers to create products everyone can use, including these unique individuals. With the advancement of technology, inclusivity has become far more important. It should be a priority for every company.

As AI becomes increasingly tangled in our technology, we should also consider how being more inclusive will help, mainly because neurodivergent people make up such a significant number of users (an estimated 1.6 billion people worldwide). AI allows us to design affordable, adaptable, and supportive products. Normalizing the phenomenon is far easier with AI, which can help build personalized tools, reminders, and alerts, and adapt the language used and its form.

We need to remember that these changes should not be made only for neurodiverse individuals; it would help everyone. Even neurotypicals have different ways of grasping information; some are kinesthetic learners, and others are auditory or visual.

Diverse thinking is just a different way of approaching and solving problems. Remember, many great minds are neurodiverse. Alan Turing, who cracked the Enigma code, was autistic. Fun fact: he also laid the theoretical groundwork for machine intelligence. Steve Jobs, the founder of Apple and a pioneering design thinker, had dyslexia. Emma Watson, famously known for her role as Hermione Granger in the Harry Potter series, has Attention-Deficit/Hyperactivity Disorder (ADHD). There are many more innovators and disruptors out there who are different.

Neurodivergence is a non-medical umbrella term used to classify brain function, behavior, and processing that differ from what is considered typical. Let’s also keep in mind that these examples and interpretations are meant to shed some light on the importance of a neglected topic. It should be a reminder for us to invest further and investigate how we can put this rapidly growing technology to work in favor of this group as we try to normalize neurodiversity.

Types Of Neurodiversities

  • Autism: Autism spectrum disorder (ASD) is a neurological and developmental disorder that affects how people interact with others, communicate, learn, and behave.
  • Learning Disabilities: Conditions that affect how people acquire and process information; common examples include dyslexia, dysgraphia, and dyscalculia.
  • Attention-Deficit/Hyperactivity Disorder (ADHD): An ongoing pattern of inattention and/or hyperactivity-impulsivity that interferes with functioning or development.

Making AI Technology More Neuro-inclusive

Artificial Intelligence (AI) enables machines to think and perform tasks. However, this thinking is based on algorithmic logic, and that logic is built from the many examples, books, and other information that AI uses to generate its output. The network through which AI processes information is modeled on our brains; it is called a neural network, so its data processing is similar to how we process information in our brains to solve a problem.

We do not need to do anything special for neurodiversity, which is the beauty of AI technology in its current state. Everything already exists; it is the usage of the technology that needs to change.

There are many ways we could improve it. Let’s look at four ways that are crucial to get us started.

Workflow Improvements

For: Autistic and ADHD
Focus: Working memory

Gartner found that 80% of executives think automation can be applied to any business decision. Businesses realized that a tactical approach is less successful than a strategic approach to using AI. For example, it can support business decisions that would otherwise require a lot of manual research.

AI has played a massive role in automating various tasks so far and will continue to do so in the future; it helps users reduce the time they spend on repetitive aspects of their jobs, freeing them up to focus on the things that matter. Mundane tasks pile up in working memory, and there is a limit: humans can hold roughly 3–5 ideas simultaneously. With more than five ideas at play, people are bound to forget or miss something unless they document it. Completing these typical but necessary tasks becomes time-consuming and frustrating, keeping users from focusing on their actual work. This is especially troublesome for neurodivergent employees.

Autistic and ADHD users might have difficulty following through or focusing on aspects of their work, especially if it does not interest them. Straying thoughts are not uncommon and make it even harder to concentrate. Autistic individuals can become hyper-focused, which prevents them from grasping other relevant information. ADHD users, on the contrary, lose focus quickly as their attention span is limited, so their working memory takes a toll.

AI could identify this and help users overcome it. Improving and automating the workflow will allow them to focus on the critical tasks. It means fewer distractions and more direction. Since they have trouble with working memory, a tool that assists them in capturing moments for later recall would benefit them greatly.

Example That Can Be Improved

Zoom recently launched its AI companion. When a user joins a meeting as a host, they can use this tool for various actions. One of those actions is to summarize the meeting. It auto-generates meeting notes at the end and shares them. AI companion is an excellent feature for automating notes in the meeting, allowing all the participants to not worry about taking notes.

Opportunity: Along with the auto-generated notes, Zoom should allow users to take notes in-app and use them in their summaries. Sometimes users get tangential thoughts or ideas that could be useful, and they could capture them as notes. It should also allow users to choose the type of summary they want, e.g., short, simplified, or a list, giving them more control over it. AI could also personalize this content to allow participants to comprehend it in their own way. Autistic users would benefit from their hyper-focused attention in the meeting. ADHD users could still capture those stray thoughts, which the AI would summarize in the notes. Big corporations are usually more traditional, favoring incremental improvements; small tech companies have less to lose, so we often see innovation there.

Neurodivergent Friendly Example

Fireflies.ai is an excellent example of how neuro-inclusivity can be considered, and it covers all the bases Zoom falls short of. It auto-generates meeting notes. It also allows participants to take notes, which are then appended to the auto-generated summary: this summary can be in a bullet list or a paragraph. The tool can also transcribe from the shared slide deck within the summary. It shares audio snippets of important points alongside the transcription. The product can support neurodivergent users far better.

Natural Language Processing

For: Autistic, Learning Disabilities, and ADHD
Focus: Use simple words and give emotional assistance

Words have different meanings for everyone. Some might understand figurative language, but others might get offended by the choice of it. If this is so common among neurotypicals, imagine how tricky it will be for a neurodivergent person. Autistic users have difficulty understanding metaphorical language and empathizing with others. People with learning disabilities have trouble with language, especially figurative language, which perplexes them. ADHD users have a short attention span, and using complex sentences means they will lose interest.

Simple language serves neurodivergent users far better than complex sentence constructions. Metaphors, jargon, or anecdotal information might be challenging to interpret and can frustrate them. That frustration could deter them from pursuing things they feel are complex. Providing them with a form of motivation by allowing them to understand and grow will enable them to pursue complexities confidently. AI could help multifold by breaking down complex text into straightforward language.

Example That Can Be Improved

Grammarly is a great tool for correcting and recommending language changes. It makes recommendations based on grammatical and Grammarly-defined rules. It also has a feature that allows users to select a tone of voice or goals, such as a casual or academic style, shaping the written language to match expectations. Grammarly also lets organizations define style guides, which can help users write according to the organization’s expectations.

Opportunity: Grammarly has yet to implement generative AI assistive technology, but that might change in the future. Large language models (LLMs) could further convert text into inclusive language, considering cultural and regional relevance. Most presets are limited to the rules Grammarly or the organization has defined. Sentiment analysis is also not yet part of those rules; for example, if a write-up is supposed to be negative, the app still recommends changing it or making it positive.

Neurodivergent Friendly Example

Writer is another beautiful product that empowers users to follow guidelines established by the organization as well as, obviously, the grammatical rules. It provides various means to rewrite sentences so that they make sense, e.g., simplify, polish, shorten, and so on. Writer also assists with sentence reconstruction and recommendations based on the type of content the user writes, for instance, an error message or a tooltip. Based on those features and many more on its gen AI list, Writer performs better for neurodivergent users.

Cognitive Assistance

For: Autistic, Learning Disabilities, and ADHD
Focus: Suggestive technology

The Equality Act 2010 was established in the UK to bring workplace equality, including legislation on neurodiversity. Employers need to understand the additional needs of neurodivergent employees and amend existing policies to incorporate them. The essence of the Equality Act can be translated into actionable digital elements to bring equality to the usage of products.

Neurodiverse or not, cognitive differences are present in both groups. The gap becomes more significant when we talk about them separately. Think about it: all AI assistive technologies are cognition supplements.

Cognoassist conducted a study to understand cognition in people. They found that less than 10% of them score within a typical range of assessment. This suggests that the difference is superficial, even if it is observable.

Cognition is not just intelligence but an interplay of multiple mental processes, irrespective of neural inclination. Neurodivergent cognition and reproduction of information are simply different from the norm. Nonetheless, neurodivergent users need assistive technologies more than neuro-typicals, as these technologies fill the gap quickly. Making technology more inclusive will allow them to function at the same level.

Example That Can Be Improved

ClickUp is a project management tool that has plenty of automation baked into it. It allows users to automate or customize their daily routine, which helps everyone on the team to focus on their goals. It also lets users connect various productivity and management apps to make it a seamless experience and a one-stop shop for everything they need. The caveat is that the automation is limited to some actions.

Opportunity: Neurodivergent users sometimes need more cognitive assistance than neuro-typicals. Initiating and completing tasks is difficult, and a push could help them get started or complete them. The tool could also help them with organization, benefiting them greatly. Autistic individuals prefer to complete a task in one go, while ADHD people like to mix it up as they get the necessary break from each task and refocus. An intelligent AI system could help users by creating more personalized planned days and a to-do list to get things started.

Neurodivergent Friendly Example

Motion focuses on planning and scheduling the user’s day to help with their productivity goals. When users connect their calendars to this tool, they can schedule their meetings with AI by considering heads-down time or focused attention sessions based on each user’s requirement. The user can personalize their entire schedule according to their liking. The tool will proactively schedule incoming meetings or make recommendations on time. This AI assistive technology also aids them with planning around deadlines.

Adaptive Onboarding

For: Learning Disabilities and ADHD
Focus: Reduce Frustration

According to Epsilon, 80% of consumers want a personalized experience. All of these personalization experiences are to make the user’s workflow easier. These personalized experiences start from the introduction to the usage of the product. Onboarding helps users learn about the product, but learning continues after the initial product presentation.

We cannot expect users to remember everything about the product once onboarding has been completed; they will need assistance in the future. Over time, if users have a hard time comprehending or completing a task, they get frustrated; this is particularly true for ADHD users. At the same time, users with learning disabilities do not remember every step either, because the steps are too complex or too numerous.

Adaptive onboarding will allow everyone to re-learn when needed; it benefits users more since help is available at the moment it is needed. This type of onboarding could be AI-driven and much more generative. It could cater to different learning styles, whether assistive, audio, or video presentation.

Example That Can Be Improved

Product Fruits has a plethora of offerings, including onboarding. It offers personalization and the ability to tailor the onboarding to cover the product for new users. Allowing customization with onboarding gives the product team more control over what needs attention. It also provides the capability to track product usage based on the onboarding.

Opportunity: Offering AI interventions for different personas or segments would give the tool an additional layer of experience tailored to the needs of individuals. Imagine a user with ADHD trying to figure out how to use a feature; they will get frustrated if they cannot work out how to use it. What if the tool intuitively nudged the user on how to complete the task? Similarly, if completing the task is complex and requires multiple steps, users with learning disabilities have difficulty following and reproducing it.

Neurodivergent Friendly Example

Onboarding does not always need to happen at the start of the product introduction. Users often end up in situations where they need to find a step required to complete a task but have difficulty discovering it. In such cases, they usually seek help by asking colleagues or looking it up on the product help page.

Chameleon helps by offering features that let users use AI more effectively. Users can ask for help anytime, and the AI will generate answers to help them.

Considerations

All the issues I mentioned are present in everyone; the difference is in their occurrence and intensity between neurotypical and neurodiverse individuals. Everyday things, discussions, conclusions, critical thinking, comprehension, and so on, are vastly different; it is as if neurodiverse individuals’ brains are wired differently. It becomes all the more important to build tools that solve problems for neurodiverse users, because in doing so, we inadvertently solve them for everyone.

It is easy to argue that every human goes through those problems. But we tend to forget the intensity and criticality of those problems for neurodiverse individuals, which are far greater than for neuro-typicals, who can shrug them off and adapt much more quickly. Similarly, AI, too, has to learn and understand the problems it needs to solve. It can be confusing for the algorithm to learn unless it has multiple examples.

Large Language Models (LLMs), such as the one behind ChatGPT, are trained on vast amounts of data. They are accurate most of the time; however, they sometimes hallucinate and give an inaccurate answer. That can be a considerable problem when no additional guidelines exist beyond the LLM itself. Hallucination remains a possibility in most cases, but providing company guidelines and information would help the model give correct results.

It could also mean users will become more dependent on AI, and there is no harm in that. A human cannot be present all the time, carrying the required patience, whenever neurodiverse individuals need assistance. Being direct is an advantage of AI, which is especially helpful in professional contexts.

Conclusion

Designers should create efficient workflows for neurodivergent users who have difficulty with working memory, comprehending complex language, learning intricate details, and so on. AI could help by providing cognitive assistance and adaptive technologies that benefit neurodivergent users greatly. Neurodiversity should be considered in product design; it needs more attention.

AI has become increasingly tied into every aspect of users’ lives. Some uses are obvious, like conversational UIs and chatbots, while others are hidden algorithms like recommendation engines.

Many problems specific to accessibility are being solved, but are they being solved while keeping neurodiverse issues in mind?

Jamie Dimon famously said:

“Problems don’t age well.”

— Jamie Dimon (CEO, JP Morgan)

This means we have to take critical issues into account sooner. Building an inclusive world for those 1.6 billion people is not a need for the future but a necessity of the present. We should strive to create an inclusive world for neurodiverse users; it is especially true because AI is booming, and making it inclusive now would be easy as it will scale into a behemoth set of features in every aspect of our lives in the future.

The Future Of User Research: Expert Insights And Key Trends

This article is sponsored by Maze

How do product teams conduct user research today? How do they leverage user insights to make confident decisions and drive business growth? And what role does AI play? To learn more about the current state of user research and uncover the trends that will shape the user research landscape in 2024 and beyond, Maze surveyed over 1,200 product professionals between December 2023 and January 2024.

The Future of User Research Report summarized the data into three key trends that provide precious insights into an industry undergoing significant changes. Let’s take a closer look at the main findings from the report.

Trend 1: The Demand For User Research Is Growing

62% of respondents who took the Future of User Research survey said the demand for user research has increased in the past 12 months. Industry trends like continuous product discovery and research democratization could be contributing to this growth, along with recent layoffs and reorganizations in the tech industry.

Emma Craig, Head of UX Research at Miro, sees one reason for this increase in the uncertain times we’re living in. Under pressure to beat the competition, she sensed a “shift towards more risk-averse attitudes, where organizations feel they need to ‘get it right’ the first time.” By conducting user research, organizations can mitigate risk and clarify the strategy of their business or product.

Research Is About Learning

As the Future of User Research report found out, organizations are leveraging research to make decisions across the entire product development lifecycle. The main consumers of research are design (86%) and product (83%) teams, but it’s also marketing, executive teams, engineering, data, customer support, and sales who rely on the results from user research to inform their decision-making.

As Roberta Dombrowski, Research Partner at Maze, points out:

“At its core, research is about learning. We learn to ensure that we’re building products and services that meet the needs of our customers. The more we invest in growing our research practices and team, the higher our likelihood of meeting these needs.”

Benefits And Challenges Of Conducting User Research

As it turns out, the effort of conducting user research on a regular basis pays off. 85% of respondents said that user research improved their product’s usability, 58% saw an increase in customer satisfaction, and 44% in customer engagement.

Connecting research insights to business outcomes remains a key challenge, though. While awareness of measuring research impact is growing (73% of respondents track the impact of their research), 41% reported they find it challenging to translate research insights into measurable business outcomes. Other significant challenges teams face are time and bandwidth constraints (62%) and recruiting the right participants (60%).

Growing A Research Mindset

With the demand for user research growing, product teams need to find ways to expand their research initiatives. 75% of the respondents in the Maze survey are planning to scale research in the next year by increasing the number of research studies, leveraging AI tools, and providing training to promote research democratization.

Janelle Ward, Founder of Janelle Ward Insights, sees great potential in growing research practices, as an organization will grow a research mindset in tandem. She shares:

“Not only will external benefits like competitive advantage come into play, but employees inside the organization will also better understand how and why important business decisions are made, resulting in more transparency from leadership and a happier and more thriving work culture for everyone.”

Trend 2: Research Democratization Empowers Stronger Decision-Making

Research democratization involves empowering different teams to run research and get access to the insights they need to make confident decisions. The Future of User Research Report shows that in addition to researchers, product designers (61%), product managers (38%), and marketers (17%) conduct user research at their companies to inform their decision-making.

Teams with a democratized research culture reported a greater impact on decision-making. They are 2× more likely to report that user research influences strategic decisions, 1.8× more likely to state that it impacts product decisions, and 1.5× more likely to express that it inspires new product opportunities.

The User Researcher’s New Role

Now, if more people are conducting user research in an organization, does this mark the end of the user researcher role? Not at all. Scaling research through democratization doesn’t mean anyone can do any type of research. You’ll need the proper checks and balances to allow everyone to participate in research responsibly and effectively. The role is shifting from a purely technical to an educational role where user researchers become responsible for guiding the organization in its learning and curiosity.

To guarantee data quality and accuracy, user researchers can train partners on research methods and best practices and give them hands-on experience before they start their own research projects. This can involve having them shadow a researcher during a project, holding mock interviews, or leading collaborative analysis workshops.

Democratizing user research also means that UX researchers can open up time to focus on more complex research initiatives. While tactical research, such as usability testing, can be delegated to designers and product managers, UX researchers can conduct foundational studies to inform the product and business strategy.

User Research Tools And Techniques

It’s also interesting to see which tools and techniques product teams use to gather user insights. Maze (46%), Hotjar (26%), and UserTesting (24%) are the most widely used user research tools. When it comes to user research methods, product teams mostly turn to user interviews (89%), usability testing (85%), surveys (82%), and concept testing (56%).

According to Morgan Mullen, Lead UX Researcher at User Interviews, a factor to consider is the type of projects teams conduct. Most teams don’t change their information architecture regularly, which requires tree testing or card sorting. But they’re likely launching new features often, making usability testing a more popular research method.

Trend 3: New Technology Allows Product Teams To Significantly Scale Research

AI is reshaping how we work in countless ways, and user research is no exception. According to the Future of User Research Report, 44% of product teams are already using AI tools to run research and an additional 41% say they would like to adopt AI tools in the future.

ChatGPT is the most widely-used AI tool for conducting research (82%), followed by Miro AI (20%), Notion AI (18%), and Gemini (15%). The most commonly used research tools with AI features are Maze AI (15%), UserTesting AI (9%), and Hotjar AI (5%).

The Strengths Of AI

The tactical aspect of research is where AI truly shines. More than 60% of respondents use AI to analyze user research data, 54% for transcription, 48% for generating research questions, and 45% for synthesis and reporting. By outsourcing these tasks to artificial intelligence, respondents reported that their team efficiency improved (56%) and turnaround time for research projects decreased (50%) — freeing up more time to focus on the human and strategic side of research (35%).

The Irreplaceable Value Of Research

While AI is great at tackling time-consuming, tactical tasks, it is not a replacement for a skilled researcher. As Kate Pazoles, Head of Flex User Research at Twilio, points out, we can think of AI as an assistant. The value lies in connecting the dots and uncovering insights with a level of nuance that only UX researchers possess.

Jonathan Widawski, co-founder and CEO at Maze, sums up the growing role that AI plays in user research as follows:

“AI will be able to support the entire research process, from data collection to analysis. With automation powering most of the tactical aspects, a company’s ability to build products fast is no longer a differentiating factor. The key now lies in a company’s ability to build the right product — and research is the power behind all of this.”

Looking Ahead

With teams adopting a democratized user research culture and AI tools on the rise, the user researcher’s role is shifting towards that of a strategic partner for the organization.

Instead of gatekeeping their knowledge, user researchers can become facilitators and educate different teams on how to engage with customers and use those insights to make better decisions. By doing so, they help ensure the quality and accuracy of research conducted by non-researchers while opening up time to focus on more complex, strategic research. Adopting a research mindset also helps teams value user research more and foster a happier, thriving work culture. A win-win for the organization, its employees, and customers.

If you’d like more data and insights, read the full Future of User Research Report by Maze here.

Sketchnotes And Key Takeaways From SmashingConf Antwerp 2023

I have been reading and following Smashing Magazine for years — I’ve read many of the articles and even some of the books published. I’ve also been able to attend several Smashing workshops, and perhaps one of the peak experiences of my isolation times was the online SmashingConf in August 2020. Every detail of that event was so well-designed that I felt genuinely welcomed. The mood was exceptional, and even though it was a remote event, I experienced similar vibes to an in-person conference. I felt the energy of belonging to a tribe of other great design professionals.

I was really excited to find out that the talks at SmashingConf Antwerp 2023 were going to be focused on design and UX! This time, I attended remotely again, just like back in 2020: I could watch and live-sketchnote seven talks (and I’m already looking forward to watching the remaining talks I couldn’t attend live).

Even though I participated remotely, I got really inspired. I had a lot of fun, and I felt truly involved. There was an online platform where the talks were live-streamed, as well as a dedicated Slack channel for the conference attendees. Additionally, I shared my key takeaways and sketchnotes right after each talk on social media. That way, I could have little discussions around the topics, even though I wasn’t there in person.

In this article, I would like to offer a brief summary of each talk, highlighting my takeaways (and my screenshots). Then, I will share my sketchnotes of those seven talks (+ two more I watched after the conference).

Day 1 Talks

Introduction

At the very beginning of the conference, Vitaly said hello to everyone watching online, so even though I participated remotely, I felt welcomed. :-) He also shared that there was an overarching mystery theme for the conference, and the first person to guess it would get a free ticket to the next Smashing conference — I really liked this gamified approach.

Vitaly also reminded us that we should share our success stories as well as our failure stories (how we’ve grown, learned, and improved over time).

We were introduced to the Pac-man rule: if we are having a conversation and someone approaches from the back and wants to join, open up the circle for them — just like Pac-man does (well, Pac-man opens his mouth because he wants to eat; here, you want to encourage conversations).

In between talks, Vitaly told us a lot of design jokes; for instance, this one related to design systems was a great fit for the first talk:

Where did Gray 500 and Button Primary go on their first date?

To a naming convention.

After this little warm-up, Molly Hellmuth delivered the first talk of the event. Molly has been a great inspiration for me not only as a design system consultant but also as a content creator and community builder. I’m also enthusiastic about learning the more advanced aspects of Figma, so I was really glad that Molly chose this topic for her talk.

“Design System Traps And Pitfalls” by Molly Hellmuth

Molly is a design system expert specializing in Figma design systems, and she teaches a course called Design System Bootcamp. Every time she runs this course, she sees students make similar mistakes. In this talk, she shared the most common mistakes and how to avoid them.

Molly shared the most common mistakes she experienced during her courses:

  • Adopting new features too quickly,
  • Adding too many color variables,
  • Using groups instead of frames,
  • Creating jumbo component sets,
  • Not prepping icons for our design system.

She also shared some rapid design tips:

  • Set the nudge amount to 8,
  • We can hide components in a library by adding a period or an underscore,
  • We can go to a specific layer by double-clicking on the layer icon,
  • Scope variables, e.g., make colors meant for text available only for text,
  • Use auto layout stacking order (it is not only for avatars; e.g., it is great for dropdown menus, too).

“How AI Ate My Website” by Luke Wroblewski

I have been following Luke Wroblewski since the early days of my design career. I read his book “Web Form Design: Filling in the Blanks” back in 2011, so I was really excited to attend his talk. Also, the topic of AI and design has been a hot one lately, so I was very curious about the conversational interface he created.

Luke has been creating content for 27 years; for example, there are 2,012 articles on his website. There are also videos, books, and PDFs. He created an experience that lets us ask questions of an AI that has been fed all of this data (his articles, videos, books, and so on).

In his talk, he explained how he created the interaction pattern for this conversational interface. It is more like a FAQ pattern and not a chatbot pattern. Here are some details:

  • He tackled the “what should I ask” problem by providing suggested questions below the most recent answer; that way, he can provide a smoother, uninterrupted user flow.

  • He linked all the relevant sources so that users can dig deeper (he calls it the “object experience”). Users can click on a citation link, and then they are taken to, e.g., a specific point of a video.

He also showed us how AI eats all this stuff (e.g., processing, data cleaning) and talked about how it assembles the answers (e.g., how to pick the best answers).

So, to compare Luke’s experience to, e.g., ChatGPT, here are some points:

  • It is more opinionated and specific (ChatGPT gives a “general world knowledge” answer);
  • We can dig deeper by using the relevant resources.

You can try it out on the ask.lukew.com website.

“A Journey in Enterprise UX” by Stéphanie Walter

Stéphanie Walter is also a huge inspiration and a designer friend of mine. I really appreciate her long-form articles, guides, and newsletters. Additionally, I have been working in banking and fintech for the last couple of years, so working for an enterprise (in my case, a bank) is a situation I’m familiar with, and I couldn’t wait to hear about a fellow designer’s perspective and insights about the challenges in enterprise UX.

Stéphanie’s talk resonated with me on so many levels, and below is a short summary of her insightful presentation.

On complexity, she discussed the following points:

  1. Looking at quantitative data: What? How much?
    Doing some content analysis (e.g., any duplicates?)
  2. After the “what” and discovering the “as-is”: Why? How?
    • By getting access to internal users;
    • Conducting task-focused user interviews;
    • Documenting everything throughout the process;
    • “Show me how you do this today” to tackle the “jumping into solutions” mindset.

Stéphanie shared with us that there are two types of processes:

  • Fast track
    Small features, tweaks on the UI — in these cases, there is no time or no need to do intensive research; it involves mostly UI design.
  • Specific research for high-impact parts
    When there is a lot of doubt (“we need more data”). It involves gathering the results of the previous research activities; scheduling follow-up sessions; iterating on design solutions and usability testing with prototypes (usually using Axure).
    • Observational testing
      “Please do the things you did with the old tool but with the new tool” (instead of using detailed usability test scripts).
    • User diary + longer studies to help understand the behavior over a period of time.

She also shared what she wishes she had known sooner about designing for enterprise experiences, e.g., that it can be a trap to oversimplify the UI and that customization and providing all the needed data pieces are important.

It was also very refreshing that she corrected the age-old saying about user interfaces: you know, the one that starts with, “The user interface is like a joke...”. The thing is, sometimes we need some prior knowledge to understand a joke. That fact doesn’t make the joke bad. It is the same with user interfaces: sometimes, we just need some prior knowledge to understand them.

Finally, she talked about some of the main challenges in such environments, like change management, design politics, and complexity.

Her design process in enterprise UX looks like this:

  • Complexity
    How am I supposed to design that?
  • Analysis
    Making sense of this complexity.
  • Research
    Finding and understanding the puzzle pieces.
  • Solution design
    Eventually, everything clicks into place.

The next talk was about creating a product with a Point of View, meaning that a product’s tone of voice can be “unique,” “unexpected,” or “interesting.”

“Designing A Product With A Point Of View” by Nick DiLallo

Unlike in the case of the other eight speakers whose talks I sketched, I wasn’t familiar with Nick’s work before the conference. However, I’m really passionate about UX writing (and content design), so I was excited to hear Nick’s points. After his talk, I became a fan of his work (check out his great articles on Medium).

In his talk, Nick DiLallo shared many examples of good and not-so-good UX copy.

His first tip was to start with defining our target audience since the first step towards writing anything is not writing. Rather, it is figuring out who is going to be reading it. If we manage to define who will be reading as a starting point, we will be able to make good design decisions for our product.

For instance, instead of designing for “anyone who cooks a lot”, it is a lot better to design for “expert home chefs”. We don’t need to tell them to “salt the water when they are making pasta”.

After defining our audience, the next step is saying something interesting. Nick’s recommendation is that we should start with one good sentence that can unlock the UI and the features, too.

The next step is about choosing good words; for example, instead of “join” or “subscribe,” we can say “become a member.” However, sometimes we shouldn’t get too creative, e.g., we should never say “add to submarine” instead of “add to cart” or “add to basket”.

We should design our writing. This means that what we include signals what we care about, and the bigger something is visually, the more it will stand out (it is about establishing a meaningful visual hierarchy).

We should also find moments to add voice, e.g., the footer can contain more than just legal text. On the other hand, there are moments and places that are not for adding more words; for instance, a calendar or a calculator shouldn’t contain brand voice.

Nick also highlighted that the entire interface speaks about who we are and what our worldview is. For example, what options do we include when we ask the user’s gender?

He also added that what we do is more important than what we write. For example, we can say that it is a free trial, but if the next thing the UI asks for is our bank card details, well, it’s like someone saying they are vegetarian and then eating a cheeseburger in front of you.

Nick closed his talk by saying that companies should hire writers or content designers since words are part of the user experience.

“When writing and design work together, the results are remarkable.”

“The Invisible Power of UI Typography” by Oliver Schöndorfer

This year, Oliver has quickly become one of my favorite design content creators. I attended some of his webinars, I’m a subscriber of his Font Friday newsletter, and I really enjoy his “edutainment style”. He is like a stand-up comedian. His talks and online events are full of great jokes and fun, but at the same time, Oliver always manages to share his extensive knowledge about typography and UI design. So I knew that the following talk was going to be great. :)

During his talk, Oliver redesigned a banking app screen live, gradually adding the enhancements he talked about. His talk started with this statement:

“The UI is the product, and a big part of it is the text.”

After that, he asked an important question:

“How can we make the type work for us?”

Some considerations we should keep in mind:

  • Font Choice
    System fonts are boring. We should think about what the voice of our product is! So, pick fonts that:
    • are in the right category (mostly sans, sometimes slabs),
    • have even strokes with a little contrast (it must work in small sizes),
    • have open letter shapes,
    • have letterforms that are easy to distinguish (the “Il1” test).

  • Hierarchy
    i.e. “What is the most important thing in this view?”

Start with the body text, then emphasize and deemphasize everything else — and watch out for the accessibility aspects (e.g., minimum contrast ratios).

  • Spacing
    Relations should be clear (law of proximity), and we should be able to define a base spacing unit.

Then we can add some final polish (and if it is appropriate, some delight).

As Oliver said, “Go out there and pimp that type!”

Day 2 Talks

“Design Beyond Breakpoints” by Christine Vallaure

I’m passionate about the designer-developer collaboration topic (I have a course and some articles about it), so I was very excited to hear Christine’s talk! Additionally, I really appreciate all the Figma content she shares, so I was sure that I’d learn some new exciting things about our favorite UI design software.

Christine’s talk was about pushing the current limits of Figma: how to do responsive design in Figma, e.g., by using so-called container queries. These queries are like media queries, but instead of looking at the viewport size, we are looking at the container. So a component behaves differently if, e.g., it is inside a sidebar, and we can also nest container queries, e.g., tell an icon button inside a card that, upon resizing, the icon should disappear.

Recommended Reading: A Primer On CSS Container Queries by Stephanie Eckles
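
To make the idea concrete, here is a minimal sketch of a container query as it exists on the web platform (the selector names are mine, not from the talk): the icon inside a card is hidden when its containing sidebar, rather than the viewport, becomes narrow.

    <style>
      /* Illustrative container query: the component responds to the
         width of its container instead of the viewport. */
      .sidebar {
        container-type: inline-size;
        container-name: sidebar;
      }

      @container sidebar (max-width: 320px) {
        /* Hide the card’s icon when the sidebar gets too narrow. */
        .card .icon-button .icon {
          display: none;
        }
      }
    </style>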

She also shared that there is a German fairy tale about a race between a hedgehog and a rabbit. The hedgehog wins the race even though he is slower: since he is smarter, he sends his wife (who looks exactly like him) to the finish line in advance. Christine told us that she had mixed feelings about this story because she didn’t like the idea of pretending to be fast when someone has other great skills. In her analogy, the rabbits are the developers, and the hedgehogs are the designers. Her lesson was that we should embrace each other’s tools and skills instead of trying to mimic each other’s work.

The lesson of the talk was not really about pushing the limits. Rather, the talk was about reminding us of why we are doing all this:

  • To communicate our design decisions better to the developers,
  • To try out how our design behaves in different cases (e.g., where it should break and how), and
  • To document our designs (she recommended the EightShapes Specs plugin by Nathan Curtis).

Her advice is:

  • We should create a playground inside Figma and try out how our components and designs work (and let developers try out our demo, too);
  • Have many discussions with developers, and don’t start these discussions from zero; e.g., read a bit about frontend development so that you have fundamental knowledge of development aspects.

“It’s A Marathon, And A Sprint” by Fabricio Teixeira

If you are a design professional, you have surely encountered at least a couple of articles published by the UX Collective, a very impactful design publication. Fabricio is one of the founders of that awesome corner of the Internet, so I knew that his talk would be full of insights and little details. He shared four case studies and included a lot of great advice.

During his talk, Fabricio used the analogy of running. When we prepare for a long-distance running competition, 80% of the time should be spent on easy runs, and 20% should be devoted to intensive, short interval runs, because that is what gets the best results. He also highlighted that, just like during a marathon, things will get hard during our product design projects, but we must remember how much we trained. When someone from the audience asked how not to get overly confident, he said that we should build an environment of trust so that other people on our team can make us realize if we’ve become too confident.

He then mentioned four case studies; all of these projects required a different, unique approach and design process:

  • Product requirements are not required.
    Vistaprint and designing face masks — the world needed them to design really fast; it was a 15-day sprint, and they did not have time to design all the color and sizing selectors (and only after the launch did it turn into a marathon).

  • Timelines aren’t straight lines.
    The case study of the Equinox treadmill UI: they created a fake treadmill to prototype the experience rather than waiting for the hardware to be completed (the hardware got delayed due to manufacturing issues), so there was no delay in the project even in the face of uncertainty and ambiguity. For example, they took into account the hand reach zones and increased the spacing between UI elements so that these remained usable even while the user was running.

It was an exciting challenge: the average treadmill interface is a complicated dashboard where everything is fighting for our attention.

  • Research is a mindset, not a step.
    He mentioned the GoFundMe project, where they applied a fluid approach to research, meaning that design and research ran in parallel; the design informed the research and vice versa. Also, insights can come from anyone on the team, not just from researchers. I really liked that they started a book club: everyone read a book about social impact, and they created a Figma file that served as a knowledge hub.

  • Be ready for some math.
    During the New York City Transit project, they created a real-time map of the subway system, which required them to create a lot of vectors and do some math. One of the main design challenges was, “How to clean up complexity?”

Fabricio shared that we should be “flexibly rigorous”: just as during running we should listen to our body, we should listen to the specific context of a given project. There is no magic formula out there. Rigor and discipline are important, but we must listen to our body so that we don’t lose touch with reality.

The key takeaway: we as a design community focus a lot on processes, but of course, there is no one way to do design. We should combine sprints and marathons, adjust our approach to the needs of the given project, and, most of all, focus on principles, e.g., how do we, as a team, want to work together?

One last note: in the post-talk discussion with Vitaly Friedman, Fabricio mentioned that a 1–3-hour-long kick-off meeting is too short for something the team will be working on for, e.g., six months, which is why his team introduced kick-off weeks.

Kat delivered one of the most important talks (or maybe the most important talk) of the conference. The ethics of design is a topic that has been around for many years now. Delivering a talk like this is challenging because it requires a perspective that easily gets lost in our everyday design work. I was really curious about how Kat would make us think and have us question our way of working.

“Design Ethically: From Imperative To Action” by Kat Zhou

Kat’s talk walked us through our current reality: how algorithms have built-in biases, manipulate users, hide content that shouldn’t be hidden, and don’t block things that shouldn’t be allowed. The main question, however, is:

Why is that happening? Why do designers create such experiences?

Kat’s answer is that companies must ruthlessly design for growth. And we, as designers, have the power to exercise control over others.

She showed us some examples of what she considers oppressive design, like the Panopticon by Jeremy Bentham. She also provided an example of hostile architecture (whose goal is to prevent humans from resting in public places). There are also dark patterns within digital experiences, such as the New York Times subscription cancellation flow (users had to make a phone call to cancel).

And the end goal of oppressive design is always to get more of the users’ data, time, and money. What amplifies this effect is that, from an employee’s (designer’s) perspective, performance is tied to achieving OKRs.

Our challenge is how we might redesign the design process so that it doesn’t perpetuate the existing systems of power. Kat’s suggestion is that we should add some new parts to the design process:

  • There are two phases:
    Intent: “Is this problem a worthy problem to solve?”
    Results: “What consequences do our solutions have? Who is it helping? Who is it harming?”
  • Add “Evaluate”:
    “Is the problem statement we defined even ethically worthy of being addressed?”
  • Add “Forecast”:
    “Can any ethical violations occur if we implement this idea?”
  • Add “Monitor”:
    “Are there any new ethical issues occurring? How can we design around them?”

Kat shared a toolkit and framework that help us understand the consequences of the things we are building.

Kat talked about forecasting in more detail. As she said,

“Forecasted consequences often are design problems.”

Our responsibility is to design around those forecasted consequences. We can pull a product apart by thinking about the layers of effect:

  • The primary layer of effect is intended and known, e.g., Google Search is intended and known as a search engine.
  • The secondary effect is also known and intended by the team, e.g., Google Search as an ad revenue generator.
  • The tertiary effect is typically unintended and possibly known; e.g., in Algorithms of Oppression, Safiya Umoja Noble discusses the biases built into Google Search.

So designers should define and design ethical primary and secondary effects, forecast tertiary effects, and ensure that they don’t pose any significant harm.

I first encountered atomic design in 2015, and I remember that I was so fascinated by the clear logical structure behind this mental model. Brad is one of my design heroes because I really admire all the work he has done for the design community. I knew that behind the “clickbait title” (Brad said it himself), there’ll be some great points. And I was right: he mentioned some ideas I have been thinking about since his talk.

“Is Atomic Design Dead?” by Brad Frost

In the first part of the talk, Brad gave us a little WWW history starting from the first website all the way to web components. Then he summarized that design systems inform and influence products and vice versa.

I really liked that he listed three problematic cases:

  • When the design system team is very separated, sitting in their ivory tower.
  • When the design system police put everyone in the design system jail for detaching an instance.
  • When the product roadmaps eat the design system efforts.

He then summarized the foundations of atomic design (atoms, molecules, organisms, templates, and pages) and gave a nice example using Instagram.

He answered the question asked in the title of the talk: atomic design is not dead, since it is still a useful mental model for thinking about user interfaces, and it helps teams find a balance, an equilibrium, between design systems and products.

And then here came the most interesting and thought-provoking part: where do we go from here?

  1. What if we don’t waste any more human potential on designing yet another date picker, but instead, we create a global design system together, collaboratively? It’d be an unstyled component that we can style for ourselves.

  2. The other topic he brought up was the use of AI, and he mentioned Luke Wroblewski’s talk, too. He also talked about the project he is working on with Kevin Coyle: converting a codebase (and its documentation) to a format that GPT-4 can understand. Brad showed us a demo of creating an alert component using ChatGPT (and this limited corpus).

His main point was that since the “genie” is out of the bottle, it is on us to use AI more responsibly. Brad closed his talk by highlighting the importance of using human potential and time for better causes than designing one more date picker.

Mystery Theme/Other Highlights

When Vitaly first got on stage, one of the things he asked the audience to keep an eye out for was an overarching mystery theme that connects all the talks. At the end of the conference, he finally revealed the answer: the theme was connected to the city of Antwerp!

Where does the name “Antwerp” come from? From “hand werpen,” or “to throw a hand.” Once upon a time, there was a giant who collected money from everyone crossing the river. One day, a soldier came, cut off the giant’s hand, and threw it to the other side, liberating the city. So, the story and the theme were “legends.” For instance, Molly Hellmuth included Bigfoot (Sasquatch), Stéphanie mentioned Prometheus, Nick added the word “myth” to one of his slides, Oliver applied a typeface usually used in fairy tales, Christine mentioned Sisyphus, and Kat talked about Pandora’s box.

My Very Own Avatar

One more awesome thing that happened thanks to attending this conference is that I got a great surprise from the Smashing team! I won the hidden “Best Sketch Notes” challenge and was gifted a personalized avatar created by Smashing Magazine’s illustrator, Ricardo.

Full Agenda

There were other great talks — I’ll be sure to watch the recordings! For anyone asking, here is the full agenda of the conference.

A huge thanks again to all of the organizers! You can check out all the current and upcoming Smashing conferences planned on the SmashingConf website anytime.

Saving The Best For Last: Photos And Recordings

The one-and-only Marc Thiele captured in-person vibes at the event — you can see the stunning, historic Bourla venue it took place in and how memorable it all must have been for the attendees! 🧡

For those who couldn’t make it in person and are curious to watch the talks, well, I have good news for you: the recordings have been recently published, so you can watch all of them!


Thank you for reading! I hope you enjoyed reading this as much as I did writing it! See you at the next design & UX SmashingConf in Antwerp, maybe?

Incident Management: Checklist, Tools, and Prevention

What Is Incident Management?

Incident management is the process of identifying, responding to, resolving, and learning from incidents that disrupt the normal operation of a service or system. An incident can be anything from a server outage or a security breach to a performance degradation or a customer complaint. Incident management aims to restore the service as quickly as possible, minimize the impact on users and the business, and prevent the recurrence of similar incidents.

Incident Management Checklist

Incident management can be a complex and stressful process, especially when dealing with high-severity incidents that affect a large number of users or have a significant business impact. To help you navigate the incident management process, here is a checklist of the main steps and best practices to follow:

A Web Designer’s Accessibility Advocacy Toolkit

Web accessibility can be challenging, particularly for clients unfamiliar with tech or compliance with the Americans with Disabilities Act (ADA). My role as a digital designer often involves guiding clients toward ADA-compliant web designs. I’ve acquired many strategies over the years for encouraging clients to adopt accessible web practices and invest in accessible user interfaces. It’s something that comes up with nearly every new project, and I decided to develop a personal toolkit to help me make the case.

Now, I am opening up my toolkit for you to have and use. While some of the strategies may be specific to me and my work, there are plenty more that cast a wider net and are more universally applicable. I’ve considered different real-life scenarios where I have had to make a case for accessibility. You may even personally identify with a few of them!

Please enjoy. As you do, remember that there is no silver bullet for “selling” accessibility. We can’t win everyone over with cajoling or terse arguments. My hope is that you are able to use this collection to establish partnerships with your colleagues and clients alike. Accessibility is something that anyone can influence at various stages in a project, and “winning” an argument isn’t exactly the point. It’s a bigger picture we’re after, one that influences how teams work together, changes habits, and develops a new level of empathy and understanding.

I begin with general strategies for discussing accessibility with clients. Following that, I provide specific language and responses you can use to introduce accessibility practices to your team and clients and advocate its importance while addressing client skepticism and concerns. Use it as a starting point and build off of it so that it incorporates points and scenarios that are more specific to your work. I sincerely hope it helps you advance accessible practices.

General Strategies

We’ll start with a few ways you can position yourself when interacting with clients. By adopting a certain posture, we can set ourselves up to be the experts in the room, the ones with solutions rather than arguments.

Showcasing Expertise

I tend to establish my expertise and tailor the information to the client’s understanding of accessibility, which may be minimal. For those new to accessibility, I offer a concise overview of its definition, evaluation, and business impact. For clients with a better grasp of accessible practices, I like to use the WCAG as a point of reference for helping frame productive discussions based on substance and real requirements.

Aligning With Client Goals

I connect accessibility to the client’s goals instead of presenting accessibility as a moral imperative. No one loves being told what to do, and talking to clients on their terms establishes a nice bridge for helping them connect the dots between the inherent benefits of accessible practices and what they are trying to accomplish. The two aren’t mutually exclusive!

In fact, there are many clear benefits for apps that make accessibility a first-class feature. Refer to the “Accessibility Benefits” section to help describe those benefits to your colleagues and clients.

Defining Accessibility In The Project Scope

I outline accessibility goals early, typically when defining the project scope and requirements. Baking accessibility into the project scope ensures that it is at least considered at this crucial stage where decisions are being made for everything from expected outcomes to architectural requirements.

User stories and personas are common artifacts for which designers are often responsible. Use these as opportunities to define accessibility in the same breath as defining who the users are and how they interact with the app. Framing stories and outcomes as user interactions in an “as-when-then-so” format provides an opening to lead with accessibility:

As a user, when I ___, then I expect that ___, so I can ___.

Fill in the blanks. I think you’ll find that users’ expected outcomes are typically aligned with accessible experiences. Federico Francioni published his take on developing inclusive user personas, building off other excellent resources, including Microsoft’s Inclusive Design guidelines.
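
For instance, a hypothetical filled-in story might read: “As a user who relies on a screen reader, when I submit a form with missing fields, then I expect the errors to be announced and described, so I can correct them without starting over.”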

Being Ready With Resources and Examples

I maintain a database of resources for clients interested in learning more about accessibility. Sharing anecdotes, such as clients who’ve seen benefits from accessibility or examples of companies penalized for non-compliance, can be very impactful.

Microsoft is helpful here once again with a collection of brief videos that cover a variety of uses, from informing your colleagues and clients on basic accessibility concepts to interviews with accessibility professionals and case studies involving real users.

There are a few go-to resources I’ve bookmarked to share with clients who are learning about accessibility for the first time. What I like about these is the approachable language and clarity. “Learn Accessibility” from web.dev is especially useful because it’s framed as a 21-part course. That may sound daunting, but it’s organized in small chunks that make it manageable, and sometimes I will simply point to the Glossary to help clients understand the concepts we discuss.

And where “Learn Accessibility” is focused on specific components of accessibility, I find that the Inclusive Design Principles site has a perfect presentation of the concepts and guiding principles of inclusion and accessibility on the web.

Meanwhile, I tend to sit beside a client to look at The A11Y Project. I pick a few resources to go through. Otherwise, the amount of information can be overwhelming. I like to offer this during a project’s planning phase because the site is focused on actionable strategies that help scope work.

Leveraging User Research

User research that is specific to the client’s target audience is more convincing than general statistics alone. When possible, I try to understand those users’ needs, including what they expect, what sort of technology they use to browse online, and where they are geographically. Painting a more complete picture of users — based on real-life factors and information — offers a more human perspective and plants the first seeds of empathy in the design process.

Web analytics are great for identifying who users are and how they currently interact with the app. At the same time, they are also fraught with caveats as far as accuracy goes, depending on the tool you use and how you collect your data. That said, I use the information to support my user persona decisions and the specific requirements I write. Analytics add nice brush strokes to the picture but do not paint the entire view. So, leverage it!

The big caveat with web analytics? There’s no way to identify traffic that uses assistive tech. That’s a good thing in general as far as privacy goes, but it does mean that researching the usability of your site is best done with real users — as it is with any user research, really. The A11Y Project has excellent resources for testing screen readers, including a link to this Smashing Magazine article about manual accessibility testing by Eric Bailey as well as a vast archive of links pointing to other research.

That said, web analytics can still be very useful to help accommodate other impairments, for example, segmenting traffic by age (for improving accessibility for low vision) and geography (for improving performance gaps for those on low-powered devices). WebAIM also provides insights in a report they produced from a 2018 survey of users who report having low vision.

Leaving Room For Improvements

Chances are that your project will fall at least somewhat short of your accessibility plans. It happens! I see plenty of situations where a late deadline translates into rushed work that sacrifices quality for speed, and accessibility typically falls victim to degraded quality.

I keep track of these during the project’s various stages and attempt to document them. This way, there’s already a roadmap for inclusive and accessible improvements in subsequent releases. It’s scoped, backlogged, and ready to drop into a sprint.

For projects involving large sites with numerous accessibility issues, I emphasize that partial accessibility compliance is not the same as actual compliance. I often propose phased solutions, starting with incremental changes that fit within the current scope and budget.

And remember, just because something passes a WCAG success criterion doesn’t necessarily mean it is accessible. Passing tests is a good sign, but there will always be room for improvement.

Commonly Asked Accessibility Questions

Accessibility is a broad topic, and we can’t assume that everyone knows what constitutes an “accessible” interface. Often, when I get pushback from a colleague or client, it’s because they simply do not have the same context that I do. That’s why I like to keep a handful of answers to commonly asked questions in my back pocket. It’s amazing how answering the “basics” leads to productive discussions filled with substance rather than ones grounded in opinion.

What Do We Mean By “Web Accessibility”?

When we say “web accessibility,” we’re generally talking about making online content available and usable for anyone with a disability, whether it’s a permanent impairment or a temporary one. It’s the practice of removing friction that excludes people from gaining access to content or from completing a task. That usually involves complying with a set of guidelines that are designed to remove those barriers.

Who Creates Accessibility Guidelines?

The Web Content Accessibility Guidelines (WCAG) are created by a working group of the World Wide Web Consortium (W3C) called the Web Accessibility Initiative (WAI). The W3C develops guidelines and principles to help designers, developers, and authors like us create web experiences based on a common set of standards, including those for HTML, CSS, internationalization, privacy, security, and yes, accessibility, among many, many other areas. The WAI working group maintains the accessibility standards we call WCAG.

Who Needs Web Accessibility?

Twenty-seven percent of the U.S. population has a disability, emphasizing the widespread need for accessible web design. WCAG primarily focuses on three groups:

  1. Cognitive or learning disabilities,
  2. Visual impairments,
  3. Motor impairments.

When we make web experiences that solve these issues based on established guidelines, we’re not only doing good for those who are directly impacted by impairment but those who may be impaired in less direct ways as well, such as establishing large target sizes for those tapping a touchscreen phone with their hands full, or using proper color contrast for those navigating a screen in bright sunlight. Everyone needs — and benefits from — accessibility!

How Is Web Accessibility Regulated?

The Americans with Disabilities Act (ADA) is regulated by the Civil Rights Division of the U.S. Department of Justice, which was established by the Civil Rights Act of 1957. Even though there is a lot of bureaucracy in that last sentence, it’s reassuring to know the U.S. government not only believes in web accessibility but enforces it as well.

Non-compliance can result in legal action, with first-time ADA violations leading to fines of up to $75,000, increasing to $150,000 for subsequent violations. The number of lawsuits for alleged ADA breaches has surged in recent years, with more than 4,500 lawsuits filed in 2023 against sites that fail to comply with WCAG 2.1 AA alone — roughly 500 more lawsuits than in 2022!

How Is Web Accessibility Evaluated?

Web accessibility is something we can test against. Many tools have been created to audit sites on the spot based on WCAG success criteria that specify accessible requirements. That would be a standards-based evaluation using WCAG as a reference point for auditing compliance.

WebAIM has an excellent page that compares different types of accessibility testing, reporting, and tooling. They are also quick to note that automated testing, while convenient, is not a comprehensive way to audit accessibility. Automated tools that scan websites may be able to pick up instances where mistakes in the HTML might contribute to accessibility issues and where color contrasts are insufficient. But they cannot replace or perfectly imitate a real-life person. Testing in real browsers with real people continues to be the most effective way to truly evaluate accessible web experiences.

This isn’t to say automated tools should not be part of an accessibility testing suite. In fact, they often highlight areas you may have overlooked. Even false positives are good in the sense that they force you to pause and look more closely at something.

I use a handful of automated tools in my own testing, but there are many more, and the WAI maintains an extensive list of available tools that are worth considering. But again, remember that automated testing is not a one-to-one replacement for testing with real users.

Checklists can be handy for ensuring you are covering your bases.

Accessibility Benefits

When discussing accessibility, I find the most effective arguments are ones that are framed around the interests of clients and stakeholders. That way, the discussion stays within scope and helps everyone see that proper accessibility practices actually benefit business goals. Speaking in business terms is something I openly embrace because it typically supports my case.

The following are a few ways I like to explain the positive impact that accessibility has on business goals.

Case Studies

Sometimes, the most convincing approach is to offer examples of companies that have committed to accessible practices and come out better for it. And there are plenty of examples! I like to use case studies and reports in a similar industry or market for a more apples-to-apples comparison that stakeholders can identify with.

That said, there are great general cases involving widely respected companies and brands, including This American Life and Tesco, that demonstrate benefits such as increased organic search traffic, enhanced user engagement, and reduced site load times. For a comprehensive guide on framing these benefits, I refer to the W3C’s resource on building the business case for accessibility.

What To Say To Your Client

Let me share how focusing on accessibility can directly benefit your business. For instance, in 2005, Legal & General revamped their website with accessibility in mind and saw a substantial increase in organic search traffic exceeding 50%. This isn’t just about compliance; it’s about reaching a wider audience more effectively. By making your site more accessible, we can improve user engagement and potentially decrease load times, enhancing the overall user experience. This approach not only broadens your reach to include users with disabilities but also boosts your site’s performance in search rankings. In short, prioritizing accessibility aligns with your goal to increase online visibility and customer engagement.

The Curb-Cut Effect

The “curb-cut effect” refers to how features originally designed for accessibility end up benefiting a broader audience. This concept helps move the conversation away from framing accessibility as an issue that only affects a minority of users.

Features like voice control, auto-complete, and auto-captions — initially created to enhance accessibility — have become widely used and appreciated by all users. This effect also includes situational impairments, like using a phone in bright sunlight or with one hand, expanding the scope of who benefits from accessible design. Big companies have found that investing in accessibility can spur innovation.

What To Say To Your Client

Let’s consider the ‘curb-cut effect’ in the context of your website. Originally, curb cuts were designed for wheelchair users, but they ended up being useful for everyone, from parents with strollers to travelers with suitcases. Similarly, many digital accessibility features we implement can enhance the experience for all your users, not just those with disabilities. For example, features like voice control and auto-complete were developed for accessibility but are now widely used by everyone. This isn’t just about inclusivity; it’s about creating a more versatile and user-friendly website. By incorporating these accessible features, we’re not only catering to a specific group but also improving the overall user experience, which can lead to increased engagement and satisfaction across your entire customer base.

SEO Benefits

I would like to highlight the SEO benefits that come with accessible best practices. Things like nicely structured sitemaps, a proper heading outline, image alt text, and unique link labels not only improve accessibility for humans but for search engines as well, giving search crawlers clear context about what is on the page. Stakeholders and clients care a lot about this stuff, and if they are able to come around on accessibility, then they’re effectively getting a two-for-one deal.
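
As a quick illustration (the element names and URLs here are invented for the example), the same markup that guides assistive technology also gives crawlers context:

    <!-- A logical heading outline orients screen readers and crawlers alike. -->
    <h1>Garden Tools</h1>
    <h2>Pruning Shears</h2>

    <!-- Descriptive alt text doubles as image context for search engines. -->
    <img src="pruner.jpg" alt="Bypass pruning shears with red handles">

    <!-- A unique, descriptive link label beats a generic “Click here”. -->
    <a href="/guides/rose-pruning">Read our rose pruning guide</a>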

What To Say To Your Client

Focusing on accessibility can boost your website’s SEO. Accessible features, like clear link names and organized sitemaps, align closely with what search engines prioritize. Google even includes accessibility in its Lighthouse reporting. This means that by making your site more accessible, we’re also making it more visible and attractive to search engines. Moreover, accessible websites tend to have cleaner, more structured code. This not only improves website stability and loading times but also enhances how search engines understand and rank your content. Essentially, by improving accessibility, we’re also optimizing your site for better search engine performance, which can lead to increased traffic and higher search rankings.

Better Brand Alignment

Incorporating accessibility into web design can significantly elevate how users perceive a brand’s image. The ease of use that comes with accessibility not only reflects a brand’s commitment to inclusivity and social responsibility but also differentiates it in competitive markets. By prioritizing accessibility, brands can convey a personality that is thoughtful and inclusive, appealing to a broader, more diverse customer base.

What To Say To Your Client

Implementing web accessibility is more than just a compliance measure; it’s a powerful way to enhance your brand image. In the competitive landscape of e-commerce, having an accessible website sets your brand apart. It shows your commitment to inclusivity, reaching out to every potential customer, regardless of their abilities. This not only resonates with a diverse audience but also positions your brand as socially responsible and empathetic. In today’s market, where consumers increasingly value corporate responsibility, this can be a significant differentiator for your brand, helping to build a loyal customer base and enhance your overall brand reputation.

Cost Efficiency

I mentioned earlier how developing accessibility enhances SEO like a two-for-one package. However, there are additional cost savings that come with implementing accessibility during the initial stages of web development rather than retrofitting it later. A proactive approach to accessibility saves on the potential high costs of auditing and redesigning an existing site and helps avoid expensive legal repercussions associated with non-compliance.

What To Say To Your Client

Retrofitting a website for accessibility can be quite expensive. Consider the costs of conducting an accessibility audit, followed by potentially extensive (and expensive) redesign and redevelopment work to rectify issues. These costs can significantly exceed the investment required to build accessibility into the website from the start. Additionally, by making your site accessible now, we can avoid the legal risks and potential fines associated with ADA non-compliance. Investing in accessibility early on is a cost-effective strategy that pays off in the long run, both financially and in terms of brand reputation. Besides, with the SEO benefits that we get from implementing accessibility, we’re saving lots of money and work that would otherwise be sunk into redevelopment.

Addressing Client Concerns

Still getting pushback? There are certain arguments I hear time and again, and I have started keeping a collection of responses to them. In some cases, I have left placeholder instructions for tailoring the responses to your project.

“Our users don’t need it.”

Statistically, 27% of the U.S. population does have some form of disability that affects their web use. [Insert research on your client’s target audience, if applicable.] Besides permanent impairments, we should also take into account situational ones. For example, imagine one of your potential clients trying to access your site on a sunny golf course, struggling to see the screen due to glare, or someone in a noisy subway unable to hear audio content. Accessibility features like high contrast modes or captions can greatly enhance their experience. By incorporating accessibility, we’re not only catering to users with disabilities but also ensuring a seamless experience for anyone in less-than-ideal conditions. This approach ensures that no potential client is left out, aligning with the goal to reach and engage a wider audience.

“Our competitors aren’t doing it.”

It’s interesting that your competitors haven’t yet embraced accessibility, but this actually presents a unique opportunity for your brand. Proactively pursuing accessibility not only protects you from the same legal exposure your competitors face but also positions your brand as a leader in customer experience. By prioritizing accessibility when others are not, you’re differentiating your brand as more inclusive and user-friendly. This both appeals to a broader audience and showcases your brand’s commitment to social responsibility and innovation.

“We’ll do it later because it’s too expensive.”

I understand concerns about timing and costs. However, it’s important to note that integrating accessibility from the start is far more cost-effective than retrofitting it later. If accessibility is considered after development is complete, you will face additional expenses for auditing accessibility, followed by potentially extensive work involving a redesign and redevelopment. This process can be significantly more expensive than building in accessibility from the beginning. Furthermore, delaying accessibility can expose your business to legal risks. With the increasing number of lawsuits for non-compliance with accessibility standards, the cost of legal repercussions could far exceed the expense of implementing accessibility now. The financially prudent move is to work on accessibility now.

“We’ve never had complaints.”

It’s great to hear that you haven’t received complaints, but it’s important to consider that users who struggle to access your site might simply choose not to return rather than take the extra step to complain about it. This means you could potentially be missing out on a significant market segment. Additionally, when accessibility issues do lead to complaints, they can sometimes escalate into legal cases. Proactively addressing accessibility can help you tap into a wider audience and mitigate the risk of future lawsuits.

“It will affect the aesthetics of the site.”

Accessibility and visual appeal can coexist beautifully. I can show you examples of websites that are both compliant and visually stunning, demonstrating that accessibility can enhance rather than detract from a site’s design. Additionally, when we consider specific design features from an accessibility standpoint, we often find they actually improve the site’s overall usability and SEO, making the site more intuitive and user-friendly for everyone. Our goal is to blend aesthetics with functionality, creating an inclusive yet visually appealing online presence.

Handling Common Client Requests

This section looks at frequent scenarios I’ve encountered in web projects where accessibility considerations come into play. Each situation requires carefully balancing the client’s needs/wants with accessibility standards. I’ll leave placeholder comments in the examples so you are able to address things that are specific to your project.

The Client Directly Requests An Inaccessible Feature

When clients request features they’ve seen online — like unfocusable carousels and complex auto-playing animations — it’s crucial to discuss them in terms that address accessibility concerns. In these situations, I acknowledge the appealing aspects of their inspirations but also highlight their accessibility limitations.

That’s a really neat feature, and I like it! That said, I think it’s important to consider how users interact with it. [Insert specific issues that you note, like carousels without pause buttons or complex animations.] My recommendation is to take the elements that work well — [insert specific observation] — and adapt them into something more accessible, such as [insert suggestion]. This way, we maintain the aesthetic appeal while ensuring the website is accessible and enjoyable for every visitor.

The Client Provides Inaccessible Content

This is where we deal with things like non-descriptive page titles, link names, form labels, and color contrasts for a better “reading” experience.

Page Titles

Sometimes, clients want page titles to be drastically different from the link in the navigation bar. Usually, this is because they want a more detailed page title while keeping navigation links succinct.

I understand the need for descriptive and engaging page titles, but it’s also essential to maintain consistency with the navigation bar for accessibility. Here’s our recommendation to balance both needs:

  • Keyword Consistency: You can certainly have a longer page title to provide more context, but it should include the same key terms as the navigation link. This ensures that users, especially those using screen readers to announce content, can easily understand when they have correctly navigated between pages.
  • Succinct Titles With Descriptive Subtitles: Another approach is to keep the page title succinct, mirroring the navigation link, and then add a descriptive tagline or subtitle on the page itself. This way, the page maintains clear navigational consistency while providing detailed context in the subtitle.

These approaches aim to align the user’s navigation experience with their expectations, ensuring clarity and accessibility.
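
Here is a small sketch of the keyword-consistency approach (the page and site names are hypothetical):

    <!-- In the <head>: a longer title that leads with the same key term
         as the navigation link, so screen reader users can confirm they
         have landed on the expected page. -->
    <title>Pricing: Plans for Teams and Individuals | Acme</title>

    <!-- In the page body: the succinct navigation link. -->
    <nav>
      <a href="/pricing">Pricing</a>
    </nav>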

Links

A common issue with web content provided by clients is the use of non-descriptive calls to action and link labels, like “Read More” or “Click Here.” Generic terms can be confusing for users, particularly for those using screen readers, as they don’t provide context about what the link leads to or the nature of the content on the other end.

I’ve noticed some of the link labels say things like “Read More” or “Click Here” in the design. I would consider revising them because they could be more descriptive, especially for those relying on screen readers who have to put up with hearing the label announced time and again. We recommend labels that clearly indicate where the link leads. [Provide a specific example.] This approach makes links more informative and helps all users alike by telling them in advance what to expect when clicking a certain link. It enhances the overall user experience by providing clarity and context.
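
A quick before-and-after sketch (the destination is invented) makes the difference obvious:

    <!-- Generic: announced out of context, “Read more” says nothing
         about where the link leads. -->
    <a href="/reports/annual-2023">Read more</a>

    <!-- Descriptive: meaningful on its own, both for screen reader
         users and for sighted visitors scanning the page. -->
    <a href="/reports/annual-2023">Read the 2023 annual report</a>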

Forms

Proper form labels are a critical aspect of accessible web design. Labels should clearly indicate the purpose of each input field, whether it’s required, and the expected format of the information. This clarity is essential for all users, especially for those using screen readers or other assistive technologies. Plus, there are accessible approaches to pairing labels and inputs that developers ought to be familiar with.

It’s important that each form field is clearly labeled to inform users about the type of data expected. Additionally, indicating which fields are required and providing format guidelines can greatly enhance the user experience. [Provide a specific example from the client’s content, e.g., we can use ‘Phone (10 digits, no separators)’ for a phone number field to clearly indicate the format.] These labels not only aid in navigation and comprehension for all users but also ensure that the forms are accessible to those using assistive technologies. Well-labeled forms improve overall user engagement and reduce the likelihood of errors or confusion.
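
For the developers on the team, a minimal markup sketch of that guidance (the field names are illustrative) might look like this:

    <!-- Explicit label/input pairing via for/id, a required field,
         and a format hint connected with aria-describedby. -->
    <label for="phone">Phone (required)</label>
    <input id="phone" name="phone" type="tel" required
           autocomplete="tel" aria-describedby="phone-hint">
    <p id="phone-hint">10 digits, no separators</p>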

Brand Palette

Clients will occasionally approach me with color palettes that produce too little contrast when paired together. This happens when, for instance, on a website with a white background, a client wants to use their brand accent color for buttons, but that color simply blends into the background color, making it difficult to read. The solution is usually creating a slightly adjusted tint or shade that’s used specifically for digital interfaces — UI colors, if you will. Atul Varma’s “Accessible Color Palette Builder” is a great starting point, as is this UX Lift lander with alternatives.

We recommend expanding the brand palette with color values that work more effectively in web designs. By adjusting the tint or shade just a bit, we can achieve a higher level of contrast between colors when they are used together. Colors render differently depending on the device and screen they are on, and even though we might be using colors consistent with brand identity, those colors will still display differently to users. By adding colors that are specifically designed for web use, we can enhance the experience for our users while staying true to the brand’s essence.
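
In practice, the adjusted shade can live alongside the original brand token. Here is a small sketch with invented hex values:

    <style>
      :root {
        /* Hypothetical tokens: the raw brand accent sits at roughly 2:1
           against white, below WCAG AA’s 4.5:1 minimum for normal text,
           so a darker UI shade is added for buttons and text on light
           backgrounds. */
        --brand-accent: #7ab8ff;
        --brand-accent-ui: #1f6fd6; /* white text on this passes 4.5:1 */
      }

      .button-primary {
        background: var(--brand-accent-ui);
        color: #ffffff;
      }
    </style>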

Suggesting An Accessible Feature To Clients

Proactively suggesting features like sitemaps, pause buttons, and focus indicators is crucial. I’ll provide tips on how to effectively introduce these features to clients, emphasizing their importance and benefits.

Sitemap

Sitemaps play a crucial role in both accessibility and SEO, but clients sometimes hesitate to include them due to concerns about their visual appeal. The challenge is to demonstrate the value of sitemaps without compromising the site’s overall aesthetic.

I understand your concerns about the visual appeal of sitemaps. However, it’s important to consider their significant role in both accessibility and SEO. For users with screen readers, a sitemap greatly simplifies site navigation. From an SEO perspective, it acts like a directory, helping search engines effectively index all your pages, making your site more discoverable and user-friendly. To address the aesthetic aspect, let’s look at how major companies like Apple and Microsoft incorporate sitemaps. Their designs are minimal yet consistent with the site’s overall look and feel. [If applicable, show how a competitor is using sitemaps.] By incorporating a well-designed sitemap, we can improve user experience and search visibility without sacrificing the visual quality of your website.

Accessible Carousels

Carousels are contentious design features. While some designers are against them and have legitimate reasons for it, I believe that with the right approach, they can be made accessible and effective. There are plenty of resources that provide guidance on creating accessible carousels.

When a client requests a home page carousel in a new site design, it’s worth considering alternative solutions that can avoid the common pitfalls of carousels, such as low click-through rates, increased load times, content being pushed below the fold, and potentially annoying auto-advancing features.

I see the appeal of using a carousel on your homepage, but there are a few considerations to keep in mind. Carousels often have low engagement rates and can slow down the site. They also tend to move key content below the fold, which might not be ideal for user engagement. An auto-advancing carousel can also be distracting for users. Instead, we could explore alternative design solutions that effectively convey your message without these drawbacks. [Insert recommendation, e.g., For instance, we could use a hero image or video with a strong call-to-action or a grid layout that showcases multiple important segments at once.] These alternatives can be more user-friendly and accessible while still achieving the visual and functional goals of a carousel.

If we decide to use a carousel, I make a point of discussing the necessary accessibility features with the client right from the start. Many clients aren’t aware that elements like pause buttons are crucial for making auto-advancing carousels accessible. To illustrate this, I’ll show them examples of accessible carousel designs that incorporate these features effectively.

Pause Buttons

Any animation that starts automatically, lasts more than five seconds, and is presented in parallel with other content needs a pause button per WCAG Success Criterion 2.2.2 (Pause, Stop, Hide). A common scenario is when clients want a full-screen video on their homepage without a pause button. It’s important to explain the necessity of pause buttons for meeting accessibility standards and ensuring user comfort without compromising the website’s aesthetics.

I understand your desire for a dynamic, engaging homepage with a full-screen video. However, it’s essential for accessibility purposes that any auto-playing animation that is longer than five seconds includes a pause button. This is not just about compliance; it’s about ensuring that all visitors, including those with disabilities, can comfortably use your site.

The good news is that pause buttons can be designed to be sleek and non-intrusive, complementing your site’s aesthetics rather than detracting from them. Think of it like the sound toggle buttons on videos. They’re there when you need them, but they don’t distract from the viewing experience. I can show you some examples of beautifully integrated pause buttons that maintain the immersive feel of the video while ensuring accessibility standards are met.

Conclusion

That’s it! This is my complete toolkit for discussing web accessibility with colleagues and clients at the start of new projects. It’s not always easy to make a case, which is why I try to appeal from different angles, using a multitude of resources and research to support my argument. But with practice, care, and true partnership, it’s possible to not only influence the project but also make accessibility a first-class feature in the process.

Please use the resources, strategies, and talking points I have provided. I share them to help you make your case to your own colleagues and clients. Together, incrementally, we can take steps toward a more accessible web that is inclusive to all people.

And when in doubt, remember the core principles we covered:

  • Show your expertise: Adapt accessibility discussions to fit the client’s understanding, offering basic or in-depth explanations based on their familiarity.
  • Align with client goals: Connect accessibility with client-specific benefits, such as SEO and brand enhancement.
  • Define accessibility in project scope: Include accessibility as an integral part of the design process and explain how it is evaluated.
  • Be prepared with resources: Keep a collection of relevant resources, including success stories and the consequences of non-compliance.
  • Utilize user research: Use targeted user research to inform design choices, demonstrating accessibility’s broad impact.
  • Manage incremental changes: Suggest iterative changes for large projects to address accessibility in manageable steps.

Advanced Brain-Computer Interfaces With Java

In the first part of this series, we introduced the basics of brain-computer interfaces (BCIs) and how Java can be employed in developing BCI applications. In this second part, let's delve deeper into advanced concepts and explore a real-world example of a BCI application using NeuroSky's MindWave Mobile headset and their Java SDK.

Advanced Concepts in BCI Development

  1. Motor Imagery Classification: This involves the mental rehearsal of physical actions without actual execution; a BCI classifies the brain signals produced by these imagined movements so they can be mapped to commands. Advanced machine learning algorithms like deep learning models can significantly improve classification accuracy.
  2. Event-Related Potentials (ERPs): ERPs are specific patterns in brain signals that occur in response to particular events or stimuli. Developing BCI applications that exploit ERPs requires sophisticated signal processing techniques and accurate event detection algorithms.
  3. Hybrid BCI Systems: Hybrid BCI systems combine multiple signal acquisition methods or integrate BCIs with other physiological signals (like eye tracking or electromyography). Developing such systems requires expertise in multiple signal acquisition and processing techniques, as well as efficient integration of different modalities.

Real-World BCI Example

Developing a Java Application With NeuroSky's MindWave Mobile

NeuroSky's MindWave Mobile is an EEG headset that measures brainwave signals and provides raw EEG data. The company provides a Java-based SDK called ThinkGear Connector (TGC), enabling developers to create custom applications that can receive and process the brainwave data.
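
Although this series focuses on the Java SDK, TGC itself is language-agnostic: it runs locally and streams JSON packets over a TCP socket that any language can read. Here is a minimal, hedged Python sketch of that flow; the host, port, configuration payload, and record delimiter are assumptions based on TGC's commonly documented defaults, so verify them against the SDK documentation before relying on them.

import json
import socket

# Assumed TGC defaults; confirm against NeuroSky's documentation.
HOST, PORT = "127.0.0.1", 13854

with socket.create_connection((HOST, PORT)) as sock:
    # Ask TGC to emit parsed JSON rather than raw binary samples.
    sock.sendall(b'{"enableRawOutput": false, "format": "Json"}')
    buffer = b""
    while True:
        buffer += sock.recv(4096)
        # Packets are assumed to be carriage-return delimited JSON objects.
        while b"\r" in buffer:
            line, buffer = buffer.split(b"\r", 1)
            if line.strip():
                packet = json.loads(line)
                # eSense carries NeuroSky's attention/meditation estimates.
                print(packet.get("eSense", packet))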

Unleashing Greatness: Alexander the Great’s Journey With Generative AI

Alexander the Great’s Greatness Is a Subject of Fascination and Admiration for Several Reasons

Alexander, by age 30, created a vast empire stretching from Greece to India, showcasing tactical brilliance and leadership. His conquests ushered in the Hellenistic era, blending Greek culture with that of the conquered regions and influencing art, science, and philosophy. A student of Aristotle, he valued intellectual pursuits and founded cities of learning. Alexander's visionary leadership aimed at a unified world empire, and he showed tolerance towards conquered peoples' customs, facilitating cultural assimilation. His legacy persisted through the Diadochi, spreading Hellenistic culture and influencing future leaders.

Aristotle's Role in Mentoring Alexander the Great

Aristotle served as Alexander's tutor, shaping his education and influencing his intellectual and leadership development. Key areas of impact include philosophy and ethics, where Aristotle fostered critical thinking and ethical values. Exposure to Greek literature and poetry contributed to Alexander's appreciation for culture and the arts. Aristotle's teachings on science and natural philosophy provided a foundation for Alexander's interactions with diverse cultures. While not a conventional political mentor, Aristotle's insights on governance informed Alexander's leadership and strategies. Aristotle's emphasis on critical thinking played a role in Alexander's military strategy, and his rigorous teaching methods instilled discipline and intellectual rigor in Alexander's pursuit of knowledge.

IoT Security: Strategies, Challenges, and Essential Tools

The Internet of Things (IoT) has ushered in a new era of connectivity, transforming the way we live, work, and interact with our surroundings. It encompasses a vast network of devices, ranging from everyday appliances to industrial machinery, all connected and exchanging data. While this interconnectedness brings convenience and efficiency, it also presents a multitude of security challenges. In this article, we will delve into the complexities of IoT security and explore strategies, best practices, and essential tools to safeguard this dynamic ecosystem.

Understanding IoT Security Challenges

  • Lack of Encryption: One of the primary challenges in IoT security is the lack of robust encryption. Many IoT devices transmit data without adequate encryption, leaving it vulnerable to interception and manipulation. Encryption is a fundamental defense mechanism against unauthorized access and data compromise.
  • Insufficient Testing and Updating: The rapid proliferation of IoT devices often leads to a rush to market, resulting in inadequate security testing and infrequent updates. This leaves devices and systems exposed to vulnerabilities and exploits.
  • Default Password Risks: Weak or default passwords on IoT devices make them susceptible to brute-force attacks. Manufacturers must encourage users to set strong, unique passwords to protect against unauthorized access.
  • IoT Malware and Ransomware: The increasing number of IoT devices has given rise to malware and ransomware attacks. These threats can compromise data privacy, demand ransoms for data recovery, and pose significant challenges for IoT security.
  • IoT Botnets and Cryptocurrency: Malicious actors can exploit vulnerabilities in IoT devices to conscript them into botnets, which are then used for purposes such as distributed denial-of-service attacks and covert cryptocurrency mining, posing significant risks to data privacy and to the cryptocurrency and blockchain ecosystems.
  • Inadequate Device Security: Many IoT devices lack proper security features, making them susceptible to hacking, data theft, and unauthorized access. Strengthening device security is paramount to addressing this challenge.

Strategies to Address IoT Security Challenges

  • Encryption and Strong Authentication: Implement robust encryption methods and enforce strong authentication mechanisms to protect data confidentiality and integrity during transmission and storage (a short encryption sketch follows this list).
  • Regular Testing and Updates: Prioritize thorough security testing and frequent updates for IoT devices. Regular updates are essential to patch vulnerabilities and improve overall resilience.
  • Password Hygiene: Educate users about the importance of setting strong, unique passwords for IoT devices. Avoid default credentials, which are a common target for brute-force attacks.
  • IoT Security Best Practices: Promote industry-wide best practices for IoT security, including secure coding, vulnerability management, and adherence to recognized security standards.
  • Network Security Measures: Deploy robust network security measures, including firewalls and intrusion detection systems, to protect against network-based attacks such as denial-of-service (DoS) attacks.
  • Standardization Efforts: Advocate for IoT security standards and protocols to ensure consistency and compatibility across devices and systems. Standardization promotes secure development practices.
  • Privacy by Design: Prioritize privacy by design principles to protect user data. Be transparent about data collection and usage, and respect individuals' rights to control their information.
  • Firmware and Software Updates: Promptly release security patches and updates for IoT devices to address software vulnerabilities. Keep devices up-to-date to mitigate potential threats.
  • Employee Training: Educate employees and contractors about IoT security risks and insider threat awareness. Security awareness training is essential to create a security-conscious culture.
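
To make the first strategy above concrete, here is a minimal sketch of authenticated symmetric encryption for device telemetry using the Python cryptography package. This illustrates the principle rather than a production design; real deployments also need key distribution, rotation, and secure storage.

from cryptography.fernet import Fernet  # pip install cryptography

# Provision one key per device and store it securely (ideally in hardware).
key = Fernet.generate_key()
cipher = Fernet(key)

reading = b'{"device": "thermostat-42", "temp_c": 21.5}'
token = cipher.encrypt(reading)  # authenticated encryption (AES plus HMAC)
print(cipher.decrypt(token))     # original payload; tampering raises an error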

Essential IoT Security Tools

  • Device Management Platforms: Device management platforms like AWS IoT Device Management and Microsoft Azure IoT Hub provide centralized control and security features for IoT devices, including provisioning, authentication, and monitoring.
  • Security Information and Event Management (SIEM) Systems: SIEM systems such as Splunk and IBM QRadar offer real-time monitoring and analysis of security events in IoT environments, aiding in threat detection and response.
  • IoT Security Gateways: IoT security gateways, like Cisco IoT Security, act as intermediaries between IoT devices and networks, implementing security policies and inspecting traffic for threats.
  • Blockchain Technology: Blockchain enhances data security and integrity in IoT by ensuring data immutability. Platforms like IOTA and VeChain provide blockchain solutions tailored for IoT security.
  • Vulnerability Scanners: Vulnerability scanners like Nessus and Qualys identify and remediate vulnerabilities in IoT devices and networks through penetration testing and assessments.
  • IoT Security Analytics Tools: Security analytics tools like Darktrace and Vectra AI use machine learning to detect abnormal behavior patterns in IoT networks, aiding in threat identification.
  • Network Segmentation Solutions: Network segmentation tools, including firewalls like Palo Alto Networks and Cisco ASA, isolate IoT devices from critical networks, limiting potential attack surfaces.
  • IoT Security Testing Services: Third-party security testing services and tools assess the security of IoT devices and applications through penetration testing and vulnerability assessments.

Conclusion

Securing the IoT is an ongoing endeavor that demands vigilance and collaboration. By implementing robust security strategies, adhering to best practices, and leveraging essential IoT security tools, we can navigate the intricate landscape of IoT security challenges. Software developers, organizations, and users all play pivotal roles in fortifying IoT ecosystems against evolving threats, ensuring a safer and more resilient connected world.

A High-Level Overview Of Large Language Model Concepts, Use Cases, And Tools

Even though a simple online search turns up countless tutorials on using Artificial Intelligence (AI) for everything from generative art to making technical documentation easier to use, there’s still plenty of mystery around it. What goes inside an AI-powered tool like ChatGPT? How does Notion’s AI feature know how to summarize an article for me on the fly? Or how are a bunch of sites suddenly popping up that can aggregate news and auto-publish a slew of “new” articles from it?

It all can seem like a black box of mysterious, arcane technology that requires an advanced computer science degree to understand. What I want to show you, though, is how we can peek inside that box and see how everything is wired up.

Specifically, this article is about large language models (LLMs) and how they “imbue” AI-powered tools with intelligence for answering queries in diverse contexts. I have previously written tutorials on how to use an LLM to transcribe and evaluate the expressed sentiment of audio files. But I want to take a step back and look at another way around it that better demonstrates — and visualizes — how data flows through an AI-powered tool.

We will discuss LLM use cases, look at several new tools that abstract the process of modeling AI with LLM with visual workflows, and get our hands on one of them to see how it all works.

Large Language Models Overview

Forgoing technical terms, LLMs are models trained on vast sets of text data. When we integrate an LLM into an AI system, we enable the system to leverage the language knowledge and capabilities the LLM developed through its own training. You might think of it as dumping a lifetime of knowledge into an empty brain, assigning that brain to a job, and putting it to work.

“Knowledge” is a convoluted term as it can be subjective and qualitative. We sometimes describe people as “book smart” or “street smart,” and both are types of knowledge that are useful in different contexts. This is the foundation artificial “intelligence” is built upon. AI is fed with data, and that is what it uses to frame its understanding of the world, whether it is text data for “speaking” back to us or visual data for generating “art” on demand.

Use Cases

As you may imagine (or have already experienced), the use cases of LLMs in AI are many and along a wide spectrum. And we’re only in the early days of figuring out what to make with LLMs and how to use them in our work. A few of the most common use cases include the following.

  • Chatbot
    LLMs play a crucial role in building chatbots for customer support, troubleshooting, and interactions, thereby ensuring smooth communications with users and delivering valuable assistance. Salesforce is a good example of a company offering this sort of service.
  • Sentiment Analysis
    LLMs can analyze text for emotions. Organizations use this to collect data, summarize feedback, and quickly identify areas for improvement. Grammarly’s “tone detector” is one such example, where AI is used to evaluate sentiment conveyed in content.
  • Content Moderation
    Content moderation is an important aspect of social media platforms, and LLMs come in handy. They can spot and remove offensive content, including hate speech, harassment, or inappropriate photos and videos, which is exactly what HubSpot’s AI-powered content moderation feature does.
  • Translation
    Thanks to impressive advancements in language models, translation has become highly accurate. One noteworthy example is Meta AI’s latest model, SeamlessM4T, which represents a big step forward in speech-to-speech and speech-to-text technology.
  • Email Filters
    LLMs can be used to automatically detect and block unwanted spam messages, keeping your inbox clean. When trained on large datasets of known spam emails, the models learn to identify suspicious links, phrases, and sender details. This allows them to distinguish legitimate messages from those trying to scam users or market illegal or fraudulent goods and services. Google has offered AI-based spam protection since 2019.
  • Writing Assistance
    Grammarly is the ultimate example of an AI-powered service that uses an LLM to “learn” how you write in order to make writing suggestions. But this extends to other services as well, including Gmail’s “Smart Reply” feature. The same thing is true of Notion’s AI feature, which is capable of summarizing a page of content or meeting notes. Hemingway’s app recently shipped a beta AI integration that corrects writing on the spot.
  • Code and Development
    This is the one that has many developers worried about AI coming after their jobs. It hit the commercial mainstream with GitHub Copilot, a service that performs automatic code completion, and the same is true of Amazon’s CodeWhisperer. Then again, AI can also be used to sharpen development skills, as is the case with MDN’s AI Help feature.

Again, these are still the early days of LLM. We’re already beginning to see language models integrated into our lives, whether it’s in our writing, email, or customer service, among many other services that seem to pop up every week. This is an evolving space.

Types Of Models

There are all kinds of AI models tailored for different applications. You can scroll through Sapling’s large list of the most prominent commercial and open-source LLMs to get an idea of all the diverse models that are available and what they are used for. Each model is the context in which AI views the world.

Let’s look at some real-world examples of how LLMs are used for different use cases.

Natural Conversation
Chatbots need to master the art of conversation. Models like Anthropic’s Claude are trained on massive collections of conversational data to chat naturally on any topic. As a developer, you can tap into Claude’s conversational skills through an API to create interactive assistants.
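
To make that concrete, here is a minimal Python sketch using Anthropic’s official SDK; the model name is illustrative, and the API surface may change, so treat this as a starting point rather than a definitive integration.

import anthropic  # pip install anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
message = client.messages.create(
    model="claude-3-haiku-20240307",  # illustrative; check the current model list
    max_tokens=200,
    messages=[{"role": "user", "content": "Suggest three fun icebreaker questions."}],
)
print(message.content[0].text)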

Emotions
Developers can leverage powerful pre-trained models like Falcon for sentiment analysis. By fine-tuning Falcon on datasets with emotional labels, it can learn to accurately detect the sentiment in any text provided.
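
Here is a rough sketch of what inference could look like once such a model has been fine-tuned, using the Hugging Face transformers pipeline; note that “your-org/falcon-sentiment” is a hypothetical checkpoint name standing in for your own fine-tuned model.

from transformers import pipeline  # pip install transformers

# Hypothetical checkpoint: a Falcon model fine-tuned on emotion-labeled data.
classifier = pipeline("text-classification", model="your-org/falcon-sentiment")
print(classifier("The checkout flow was confusing and slow."))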

Translation
Meta AI released SeamlessM4T, an LLM trained on huge translated speech and text datasets. This multilingual model is groundbreaking because it translates speech from one language into another without an intermediary step between input and output. In other words, SeamlessM4T enables real-time voice conversations across languages.
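
SeamlessM4T itself is a speech-capable model, but the text-only side of model-based translation is easy to try. Here is a small sketch using an openly available translation model from the Hugging Face Hub; the model name is one public example, not something from SeamlessM4T.

from transformers import pipeline  # pip install transformers sentencepiece

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")
print(translator("Large language models are transforming translation."))
# [{'translation_text': '...French output...'}]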

Content Moderation
As a developer, you can integrate powerful moderation capabilities using OpenAI’s API, which includes an LLM trained thoroughly on flagging toxic content for the purpose of community moderation.
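
As a sketch, flagging a piece of user-generated content takes only a few lines with OpenAI’s Python SDK; this assumes the current moderations endpoint, so double-check the API reference.

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment
result = client.moderations.create(input="User-submitted comment goes here.")
print(result.results[0].flagged)     # True if the text violates policy
print(result.results[0].categories)  # per-category breakdown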

Spam Filtering
Some LLMs are used to develop AI programs capable of text classification tasks, such as spotting spam emails. As an email user, the simple act of flagging certain messages as spam further informs AI about what constitutes an unwanted email. After seeing plenty of examples, AI is capable of establishing patterns that allow it to block spam before it hits the inbox.
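
The underlying idea is classic text classification, which you can sketch without any LLM at all. Here is a toy example using scikit-learn; a real filter would train on thousands of labeled emails rather than four.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "WIN a FREE prize now, click this link!!!",
    "Lowest prices on meds, no prescription needed",
    "Team meeting moved to 3pm tomorrow",
    "Here are the slides from today's review",
]
labels = ["spam", "spam", "ham", "ham"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(emails, labels)
print(model.predict(["Claim your free reward today"]))  # ['spam']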

Not All Language Models Are Large

While we’re on the topic, it’s worth mentioning that not all language models are “large.” There are plenty of models with smaller sets of data that may not go as deep as a GPT-4-class model but are well-suited for personal or niche applications.

For example, check out the chat feature that Luke Wroblewski added to his site. He’s using a smaller language model that knows how to form sentences but is primarily trained on Luke’s archive of blog posts. Typing a prompt into the chat returns responses that read very much like Luke’s writings. Better yet, Luke’s virtual persona will admit when a topic is outside the scope of its knowledge. An LLM would provide the assistant with too much general information and would likely try to answer any question, regardless of scope. Researchers from the University of Edinburgh and the Allen Institute for AI published a paper in January 2023 (PDF) that advocates the use of specialized language models for more narrowly targeted tasks.
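
If you want to feel the difference yourself, small models are trivially easy to run locally. Here is a sketch using a small general-purpose model from the Hugging Face Hub; a Luke-style assistant would instead be trained or fine-tuned on a niche corpus, such as a blog archive.

from transformers import pipeline  # pip install transformers

generator = pipeline("text-generation", model="distilgpt2")
print(generator("Mobile form design works best when", max_new_tokens=40))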

Low-Code Tools For LLM Development

So far, we’ve covered what an LLM is, common examples of how it can be used, and how different models influence the AI tools that integrate them. Let’s discuss that last bit about integration.

Many technologies require a steep learning curve. That’s especially true with emerging tools that might be introducing you to new technical concepts, as I would argue is the case with AI in general. While AI is not a new term and has been studied and developed in various forms over decades, its entrance into the mainstream is certainly new and has sparked plenty of buzz, not least in the front-end development community, where many of us are scrambling to wrap our minds around it.

Thankfully, new resources can help abstract all of this for us. They can power an AI project you might be working on, but more importantly, they are useful for learning the concepts of LLM by removing advanced technical barriers. You might think of them as “low” and “no” code tools, like WordPress.com vs. self-hosted WordPress or a visual React editor that is integrated with your IDE.

Low-code platforms make it easier to leverage large language models without needing to handle all the coding and infrastructure yourself. Here are some top options:

Chainlit

Chainlit is an open-source Python package that is capable of building a ChatGPT-style interface using a visual editor.
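
For a sense of how little code it takes, here is a hello-world-style Chainlit app based on its documented decorator API; Chainlit evolves quickly, so check the current docs if this snippet drifts out of date.

# app.py -- run with: chainlit run app.py
import chainlit as cl  # pip install chainlit

@cl.on_message
async def on_message(message: cl.Message):
    # Echo the user's message; a real app would forward it to an LLM here.
    await cl.Message(content=f"You said: {message.content}").send()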

LLMStack

LLMStack is another low-code platform for building AI apps and chatbots by leveraging large language models. Multiple models can be chained together into “pipelines” for channeling data. LLMStack supports standalone app development but also provides hosting that can be used to integrate an app into sites and products via API or connected to platforms like Slack or Discord.

LLMStack is also what powers Promptly, a cloud version of the app with freemium subscription pricing that includes a free tier.

FlowiseAI

FlowiseAI is an open-source, drag-and-drop visual editor for building apps around LLMs, and it is the tool we will use for the hands-on example later in this article.

Stack AI

Stack AI is another no-code offering for developing AI apps integrated with LLMs. It is much like FlowiseAI, particularly the drag-and-drop interface that visualizes connections between apps and APIs. One thing I particularly like about Stack AI is how it incorporates “data loaders” to fetch data from other platforms, like Slack or a Notion database.

I also like that Stack AI provides a wider range of LLM offerings. That said, it will cost you. While Stack AI offers a free pricing tier, it is restricted to a single project with only 100 runs per month. Bumping up to the first paid tier will set you back $199 per month, which I suppose goes toward the costs of accessing a wider range of LLM sources. For example, FlowiseAI works with any LLM in the Hugging Face community. So does Stack AI, but it also gives you access to commercial LLM offerings, like Anthropic’s Claude models and Google’s PaLM, as well as additional open-source offerings from Replicate.

Voiceflow

Voiceflow is a no-code platform for designing, prototyping, and launching conversational assistants, like chatbots and voice apps. Now, let’s get our hands on one of these tools, FlowiseAI, and see how it all works.

Install FlowiseAI

First things first, we need to get FlowiseAI up and running. FlowiseAI is an open-source application that can be installed from the command line; since it is distributed as an npm package, you will need Node.js installed first.

You can install it with the following command:

npm install -g flowise

Once installed, start up Flowise with this command:

npx flowise start

From here, you can access FlowiseAI in your browser at localhost:3000.

It’s possible to serve FlowiseAI so that you can access it online and provide access to others, which is well-covered in the documentation.

Setting Up Retrievers

Retrievers are templates that the multi-prompt chain will query.

Different retrievers provide different templates that query different things. In this case, we want to select the Prompt Retriever because it is designed to retrieve documents like PDF, TXT, and CSV files. Unlike other types of retrievers, the Prompt Retriever does not actually need to store those documents; it only needs to fetch them.

Let’s take the first step toward creating our career assistant by adding a Prompt Retriever to the FlowiseAI canvas. The “canvas” is the visual editing interface we’re using to cobble the app’s components together and see how everything connects.

Adding the Prompt Retriever requires us to first navigate to the Chatflow screen, which is actually the initial page when first accessing FlowiseAI following installation. Click the “Add New” button located in the top-right corner of the page. This opens up the canvas, which is initially empty.

The “Plus” (+) button is what we want to click to open up the library of items we can add to the canvas. Expand the Retrievers tab, then drag and drop the Prompt Retriever to the canvas.

The Prompt Retriever takes three inputs:

  1. Name: The name of the stored prompt;
  2. Description: A brief description of the prompt (i.e., its purpose);
  3. Prompt system message: The initial prompt message that provides context and instructions to the system.

Our career assistant will provide career suggestions, tool recommendations, salary information, and cities with matching jobs. We can start by configuring the Prompt Retriever for career suggestions. Here is placeholder content you can use if you are following along:

  • Name: Career Suggestion;
  • Description: Suggests careers based on skills and experience;
  • Prompt system message: You are a career advisor who helps users identify a career direction and upskilling opportunities. Be clear and concise in your recommendations.

Be sure to repeat this step three more times to create each of the following:

  • Tool recommendations,
  • Salary information,
  • Locations.

Adding A Multi-Prompt Chain

A Multi-Prompt Chain is a class that consists of two or more prompts that are connected together to establish a conversation-like interaction between the user and the career assistant.

The idea is that we combine the four prompts we’ve already added to the canvas and connect them to the proper tools (i.e., chat models) so that the career assistant can prompt the user for information and collect that information in order to process it and return the generated career advice. It’s sort of like a normal system prompt but with a conversational interaction.
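
FlowiseAI’s node wraps the equivalent LangChain construct. For readers who prefer code, here is a rough Python sketch of the same idea, assuming LangChain’s classic MultiPromptChain API (import paths have moved between versions, so pin accordingly); the prompt contents mirror the retrievers we configured above.

from langchain.chains.router import MultiPromptChain
from langchain.llms import OpenAI  # any supported LLM works here

prompt_infos = [
    {
        "name": "career-suggestion",
        "description": "Suggests careers based on skills and experience",
        "prompt_template": "You are a career advisor. {input}",
    },
    {
        "name": "salary-information",
        "description": "Provides salary information for a given role",
        "prompt_template": "You are a compensation analyst. {input}",
    },
]

llm = OpenAI()  # reads OPENAI_API_KEY from the environment
chain = MultiPromptChain.from_prompts(llm, prompt_infos)
print(chain.run("What careers suit someone with strong CSS skills?"))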

The Multi-Prompt Chain node can be found in the “Chains” section of the same inserter we used to place the Prompt Retriever on the canvas.

Once the Multi-Prompt Chain node is added to the canvas, connect it to the prompt retrievers. This enables the chain to receive user responses and employ the most appropriate language model to generate responses.

To connect, click the tiny dot next to the “Prompt Retriever” label on the Multi-Prompt Chain and drag it to the “Prompt Retriever” dot on each Prompt Retriever to draw a line between the chain and each prompt retriever.

Integrating Chat Models

This is where we start interacting with LLMs. In this case, we will integrate Anthropic’s Claude chat model. Claude is a powerful LLM designed for tasks related to complex reasoning, creativity, thoughtful dialogue, coding, and detailed content creation. You can get a feel for Claude by registering for access to interact with it, similar to how you’ve played around with OpenAI’s ChatGPT.

From the inserter, open “Chat Models” and drag the ChatAnthropic option onto the canvas.

Once the ChatAnthropic chat model has been added to the canvas, connect its node to the Multi-Prompt Chain’s “Language Model” node to establish a connection.

It’s worth noting at this point that Claude requires an API key in order to access it. Sign up on the Anthropic website to create an API key. Once you have one, provide it to the Multi-Prompt Chain in the “Connect Credential” field.

Adding A Conversational Agent

The Agent component in FlowiseAI allows our assistant to do more tasks, like accessing the internet and sending emails.

It connects external services and APIs, making the assistant more versatile. For this project, we will use a Conversational Agent, which can be found in the inserter under “Agent” components.

Once the Conversational Agent has been added to the canvas, connect it to the Chat Model to “train” the model on how to respond to user queries.

Integrating Web Search Capabilities

The Conversational Agent requires additional tools and memory. For example, we want to enable the assistant to perform Google searches to obtain information it can use to generate career advice. The Serp API node can do that for us and is located under “Tools” in the inserter.

Like Claude, Serp API requires an API key to be added to the node. Register with the Serp API site to create an API key. Once the API key is configured, connect Serp API to the Conversational Agent’s “Allowed Tools” node.

Building In Memory

The Memory component enables the career assistant to retain conversation information.

This way, the app remembers the conversation and can reference it during the interaction or even to inform future interactions.

There are different types of memory, of course. Several of the options in FlowiseAI require additional configurations, so for the sake of simplicity, we are going to add the Buffer Memory node to the canvas. It is the most general type of memory provided by LangChain, taking the raw input of the past conversation and storing it in a history parameter for reference.

Buffer Memory connects to the Conversational Agent’s “Memory” node.
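
Under the hood, this corresponds to LangChain’s ConversationBufferMemory. As a small sketch of what the node stores, assuming LangChain’s Python API:

from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory()  # keeps the raw transcript in "history"
memory.save_context(
    {"input": "I'm a designer who wants to move into front-end work."},
    {"output": "Great! Which front-end skills do you already have?"},
)
print(memory.load_memory_variables({})["history"])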

The Final Workflow

At this point, our workflow looks something like this:

  • Four prompt retrievers that provide the prompt templates for the app to converse with the user.
  • A multi-prompt chain connected to each of the four prompt retrievers that chooses the appropriate tools and language models based on the user interaction.
  • The Claude language model connected to the multi-prompt chain to “train” the app.
  • A conversational agent connected to the Claude language model to allow the app to perform additional tasks, such as Google web searches.
  • Serp API connected to the conversational agent to perform bespoke web searches.
  • Buffer memory connected to the conversational agent to store, i.e., “remember,” conversations.

If you haven’t done so already, this is a great time to save the project and give it a name like “Career Assistant.”

Final Demo

Watch the following video for a quick demonstration of the final workflow we created together in FlowiseAI. The prompts lag a little bit, but you should get the idea of how all of the components we connected are working together to provide responses.

Conclusion

As we wrap up this article, I hope that you’re more familiar with the concepts, use cases, and tools of large language models. LLMs are a key component of AI because they are the “brains” of the application, providing the lens through which the app understands how to interact with and respond to human input.

We looked at a wide variety of use cases for LLMs in an AI context, from chatbots and language translations to writing assistance and summarizing large blocks of text. Then, we demonstrated how LLMs fit into an AI application by using FlowiseAI to create a visual workflow. That workflow not only provided a visual of how an LLM, like Claude, informs a conversation but also how it relies on additional tools, such as APIs, for performing tasks as well as memory for storing conversations.

The career assistant tool we developed together in FlowiseAI was a detailed visual look inside the black box of AI, providing us with a map of the components that feed the app and how they all work together.

Now that you know the role that LLMs play in AI, what sort of models would you use? Is there a particular app idea you have where a specific language model would be used to train it?

Apache Kafka as Mission Critical Data Fabric for GenAI

Apache Kafka serves thousands of enterprises as the mission-critical, scalable real-time data fabric for machine learning infrastructures. The evolution of Generative AI (GenAI) with large language models (LLMs) like ChatGPT changed how people think about intelligent software and automation. This blog post explains the relationship between data streaming and GenAI and highlights the enormous opportunities, along with some early adopters of GenAI beyond the buzz.

[Image: Data streaming as the data fabric for generative AI]

Generative AI (GenAI) and Data Streaming

Let’s set the context first to have the same understanding of the buzzwords.

Facilitating Inclusive Online Workshops (Part 1)

Have you ever found yourself trapped in an hour-long meeting, listening to someone’s endless talk without understanding their main point? Or sat through a discussion where everyone speaks, but no actions are decided upon in the end? Or perhaps felt like the meeting you’re participating in is simply a waste of time?

If you have, you’re not alone.

According to a survey conducted by Clarizen and Harris Poll (2017), three in five employed adults reported that preparing for a meeting “takes longer than the meeting itself,” and 35% of those who attend status meetings called them a waste of time. In fact, 46% of employed Americans would rather engage in any unpleasant activity than sit in a meeting.

Meetings, when organized well, can serve as an effective way to share information and make decisions. The harsh reality, however, is that many meetings are poorly structured, ending up as a drain on resources.

One of the possible ways to replace meetings with something better and more effective is the implementation of workshops. But while workshops can be a highly effective way to foster collaboration and generate innovative solutions, they often require active participation from everyone involved. Yet, not everyone feels comfortable voicing their thoughts or taking the lead in a group setting, even though these quieter voices can be just as valuable and insightful. This is what led to the concept of an “inclusive” workshop — a workshop that ensures everyone feels heard, connected, and comfortable expressing their ideas.

What’s Inclusivity, And What’s Its Impact?

Before we dive into the concept of an inclusive workshop, let’s first talk about the foundation. At its core, inclusivity means recognizing, appreciating, and respecting the diverse tapestry of human individuality. It’s about valuing the uniqueness everyone brings to the table, from attributes like ethnicity, gender, age, and religion to less apparent characteristics such as cognitive style or socioeconomic background.

Inclusivity is deeply rooted in the social identity theory, introduced by Tajfel & Turner in 1979. This theory suggests that our identities — who we think we are — are partly defined by the social groups we feel part of. It’s human nature to seek acceptance and to want to belong to a group that appreciates us for who we are. This need for social acceptance influences how we view ourselves and how we interact with others.

An inclusive environment embraces this diversity and uses this as an advantage to create a collaborative environment. Think of an orchestra, for example. Every instrument, whether it’s a violin, a trumpet, a cello, or a drum, brings with it its unique sound. Some may play a melody, others a harmony, and some keep the rhythm. Each of these sounds is different, but when combined, they create a harmonious symphony. In an inclusive setting, each person, with their distinct qualities, comes together with others to form a symphony of collaboration and understanding.

However, people are beautifully complex, and although this complexity is what often breathes life into a workshop, it can also introduce an element of unpredictability, which, if not managed well, can potentially lead to discord among the participants.

The only thing we can do is acknowledge the fact that not everyone will be comfortable speaking up, particularly in group settings. There could be various reasons for this, including individual personality traits, cultural backgrounds, past experiences, or simply the fear of judgment. As a facilitator, it is your responsibility to ensure that every individual in the room feels comfortable expressing their opinions and ideas. In the remainder of this article, I will be introducing some practical principles and techniques that could guide you in facilitating an inclusive workshop.

Preparing For An Inclusive Workshop

If you are familiar with design thinking and design in general, you’ll find similarities between the design process and structuring an inclusive workshop. In design, we start by trying to understand our users, identifying their goals, and then crafting an effective user experience to guide them from start to finish. The same principles apply to designing an inclusive workshop:

  1. Understand the participants,
  2. Recognize their goals,
  3. Plan an engaging experience to achieve these goals.

Here is some “pre-work” you can do to better prepare for your workshop.

Step 1. Make Sure You Include The Right People

The most important thing in any meeting or workshop is including the right participants. Failing to do so could prevent you from guiding the team toward the goals of the workshop.

If you are facilitating a workshop for your own team (or within your company, which you know well), ask yourself the following questions so as to decide if a person should be included:

  • Is the meeting relevant to this person’s work and core responsibilities?
  • Can this person provide critical information, aid in the decision-making, or contribute meaningfully to the conversation?
  • Is this person’s presence necessary to achieve the meeting’s goals?

On the other hand, if you’re facilitating a workshop with an external team, provide a list of criteria that outlines what the ideal participant looks like and ask your client to include all the relevant individuals.

At this stage, bear in mind that adding a new participant not only brings in a new viewpoint but also increases the number of necessary agreements among the members of the group (with n participants, there are n × (n − 1) / 2 possible pairwise connections, so each additional person makes alignment disproportionately harder). This could potentially lead to more disagreements and conflicts among different parties. Have a look at the points of agreement graph (below) to better understand how this mechanism works.

Step 2. Know Your Participants Well

Once your participant wish list is set, it’s important to invite all the participants to the workshop in a manner that is welcoming and inclusive — for example, not just sending a calendar invite and expecting them to show up. If it’s feasible, try to arrange a pre-workshop call or meeting with each participant to gain a better understanding of them. Building these personal connections before the workshop is important in ensuring your workshop activities are inclusive and productive.

Here are some more detailed steps to consider.

Personalize The Invitation

Instead of a generic invite, personalize your invitations. Clearly outline the workshop’s purpose and activities and why you think the person would be a valuable addition. Be open to participants’ opinions and concerns about attendance. If there’s uncertainty about their availability or relevance to the workshop, offer them an option to contribute asynchronously if they can’t participate in real time.

An example message could be crafted along these lines:

“Hey Lewis, I am reaching out as I am planning to run a workshop with Max to brainstorm around how we can build the AI dashboard (which is the next initiative on our roadmap), and I would love to invite you to the workshop as I believe your front-end knowledge will help us a lot to understand the tech limitations. How does that sound to you? The workshop will be approximately two hours long and is scheduled for next week. Here’s the high-level agenda...”

A message like this — where you can explain what the workshop will be about, who will be involved, and how they can contribute to the workshop — will help the participant decide whether their involvement will be useful or not.

Schedule Pre-Workshop Conversations

If possible, have a brief, informal chat with the participants, especially if you’re unfamiliar with them. This could be as simple as a quick coffee chat where you just talk about your hobbies and favorite movies. Such interactions can help build rapport before the workshop and provide insights into the participant’s goals and expectations.

Identify Personality Types And Preferences

The personality traits of your workshop participants can be grouped along two main axes: their preference for group work (individualistic vs. collaborative) and their communication style (introverted vs. extroverted). By understanding and accommodating these preferences, you can create a workshop environment that truly values and harnesses the benefits of diversity.

  • Individualistic participants
    They may prefer tasks they can work on independently and discussions in smaller groups. Designing certain portions of the workshop that allow for individual thinking and ideation can help engage these participants.
  • Collaborative participants
    They enjoy large group discussions and team activities. Incorporating collaborative tasks where everyone can contribute can keep these participants involved and motivated.
  • Introverted participants
    They might feel more comfortable with structured turn-taking or with written contributions. They might not voice their thoughts as readily in a group setting, but that doesn’t make their ideas any less valuable. Establishing clear turn-taking rules or offering opportunities for written input can help ensure their voices are heard, too.
  • Extroverted participants
    They might be more engaged in free-flowing discussions or roles that involve presenting to the group. Ensuring that the workshop format has room for open discussions can cater to their preference.

Step 3. Plan The Workshop Steps

Once you have defined your participant list and understand the participants well, the next step is to plan the workshop accordingly to meet their needs.

In the next section, I will share some high-level tips relevant in particular to inclusivity.

The specifics of planning an entire workshop are another topic altogether — I recommend reading a few books on the subject, such as Gamestorming by Dave Gray, Sunni Brown, and James Macanufo and Facilitator’s Guide to Participatory Decision-Making by Sam Kaner, as well as Smashing Magazine articles that will help you dive deeper into the specifics of crafting a workshop. You might also want to look at the 4C framework developed by AJ & Smart, which can guide you in structuring your workshop logically.

Start By Defining The Break Time

No matter the nature of your workshop, it’s important to plan regular breaks, which help promote an inclusive environment.

Catering to individuals’ unique needs, be they physical or cognitive, is very important. Breaks help counter the “Zoom fatigue” prevalent in virtual workshops and respect cultural sensitivities by allowing time for personal and cultural practices.

Numerous studies indicate that people can generally focus effectively for about 45 to 50 minutes at a time. Hence, consider scheduling a 5–10 minute break every hour. These intervals offer participants time to relax, regroup, and reset their mental focus, thereby maintaining engagement and productivity throughout the session.

“Excessive focus exhausts the focus circuits in your brain. It can drain your energy and make you lose self-control. This energy drain can also make you more impulsive and less helpful. As a result, decisions are poorly thought out, and you become less collaborative. So what do we do then? Focus or unfocus? In keeping with recent research, both focus and unfocus are vital. The brain operates optimally when it toggles between focus and unfocus, allowing you to develop resilience, enhance creativity, and make better decisions too.”

— Srini Pillay, “Your Brain Can Only Take So Much Focus” (Harvard Business Review)

Choose The Right Tools

The selection of tools in a remote environment can substantially influence participants’ experience. Complex or unfamiliar tools can affect the effectiveness of even the most well-planned workshop. Thus, it’s important to select tools that can help with collaboration, address diverse participants’ needs, and are accessible and straightforward to use.

Below, you’ll find a list of some popular tools for facilitating workshops, including their advantages, disadvantages, and best-suited participant types.

Whiteboard Tools

  • Miro
  • Miro
    Miro is a popular and user-friendly digital whiteboard tool known for its intuitive interface and collaborative features. It’s great for visual learners and those who thrive on a free-form canvas. For facilitators, it also provides a wide range of workshop templates so that you don’t need to start from scratch; Miroverse alone offers more than 1,000 templates to draw inspiration from when planning your workshop. However, this tool can be overwhelming for those who prefer more structured interfaces.
  • Mural
    Another alternative digital whiteboard platform, Mural, is known for its ability to onboard new users easily. People can join and edit your Mural board without the need to create an account, which reduces friction. However, for facilitators who wish to have more powerful features such as AI, tables, charts, and integrations, Mural might not completely satisfy their needs.
  • FigJam
    FigJam is a top choice for designers and those familiar with Figma. It combines the flexibility of a digital whiteboard and design-focused features. This tool suits visually oriented and design-minded participants alike but may feel less intuitive to those unfamiliar with design software in general.

Video Conferencing Tools

  • Microsoft Teams, Zoom, and Google Meet
    These are the “traditional” tools that almost everyone knows how to use. While their interfaces may lack extensive interactive features (which leads to a somewhat basic look and feel), their key strength lies in their consistent performance and accessibility. Their familiarity among users ensures a low learning curve, contributing to smooth and efficient workshop sessions.
  • Butter
    This tool is purpose-built for running interactive workshops, offering creative features specifically focused on workshops and collaborative meetings. Butter can cater to a broad range of personality types. For extroverted participants, it enables easy active engagement in discussions, while it also allows more introverted participants to express themselves using reactions, emojis, or GIFs in a less confrontational manner.

Group Discussion Tools

  • Mixerchat
    This platform facilitates interactive group discussions. Participants can freely navigate different breakout rooms, engaging in breakout sessions or world café-style activities.

Choose The Activities And Communication Methods

The next and perhaps most critical step is to carefully choose activities that cater to the diversity of your participants. Use the personality types and goals you identified in Step 2 to guide your decisions. Here are some suggestions to help you get started.

If most of your participants are Introverted & Individualistic.
These participants prefer to think through ideas independently before sharing them, and they may be more comfortable in a quieter setting. A few activities for this group of people could be:

  • Silent Brainstorming: Each participant works individually to generate ideas and write them down. After a set period, everyone shares their ideas one by one. This approach gives introverted participants time to think and formulate their ideas before sharing them.
  • Silent Dot-Voting: Anonymous dot-voting tools let participants share their views or vote on ideas without speaking up, which can be less intimidating than voicing an opinion in front of the group.

If most of your participants are Introverted & Collaborative.
These participants may enjoy working in groups, but they prefer quieter, more thoughtful discussions. A few activities for this group of people could be:

  • Small Group Discussions: Divide participants into smaller groups of 3-4 people to discuss a topic or question. This setup can feel less overwhelming than large group discussions.
  • Think-Pair-Share: In this activity, participants first consider a question or problem individually, then they pair up to discuss their thoughts, and finally, they share their ideas with the larger group.
  • Fishbowl Conversation: A small group sits in a circle (the fishbowl) to discuss a topic while the rest of the participants observe. After a while, allow participants to switch places, ensuring everyone gets a chance to contribute.

If most of your participants are Extroverted & Individualistic.
These participants enjoy expressing their ideas and might prefer to work independently. A few activities for this group of people could be:

  • Lightning Talks: Each participant prepares a short presentation on a topic related to the workshop's theme. This activity allows participants to express their ideas and share their expertise.
  • Idea Gallery/Lightning Demo: Participants work individually to create a visual or written representation of their ideas (like a poster), then everyone walks around to view the “gallery” and discuss the ideas.
    Note: The idea is not new and is also known as a “Poster Session.” The goal of a poster session is to create a set of compelling images that summarize a challenge or topic for further discussion. Creating this set might be an “opening act,” which then sets the stage for choosing an idea to pursue, or it might be a way to get the group indexed on a large topic. The act of creating a poster forces experts and otherwise passionate people to stop and think about the best way to communicate the core concepts of their material.

  • Jigsaw Activity: In this exercise, each participant becomes an “expert” in a specific aspect of a larger topic. They then share their knowledge with the group, allowing for individual exploration and public speaking.

If most of your participants are Extroverted & Collaborative.
These participants enjoy the energy of group discussions and collaborative work. A few activities suitable for this group of people could be:

  • Open Discussions: Provide a topic or question and allow the conversation to flow naturally. Extroverted participants typically thrive in this open format.
  • Group Projects: Split participants into teams and assign a project related to the workshop’s theme. This could be anything from creating a mock-up for a new product to brainstorming strategies for overcoming a business challenge.
  • Role-play: This group activity allows participants to act out different scenarios or perspectives related to the workshop theme, encouraging dynamic discussion and cooperative problem-solving.

Step 4. Estimate The Right Time And Allocate An Extra “Buffer”

Once you have completed planning the workshop activities, if possible, try to conduct a pilot run so that you can decide the appropriate duration for each activity.

Time management is critical for inclusivity: participants may have other engagements, and when a session overruns, both the facilitator and the participants can grow uncomfortable as their attention shifts to other commitments. Therefore, it is crucial to estimate the appropriate time and allocate extra “buffer” time to avoid rushing participants during engaging discussions.

I usually like to add a 20% buffer to each activity to ensure there is always some time to spare in case an activity runs long. For instance, if you have planned a 10-minute brainstorming session, schedule it as 12 minutes so that you have some extra buffer.

Step 5. Send A Pre-note Out

Inclusion often starts before the workshop even begins. A pre-workshop note can make a significant difference in setting the stage for inclusivity. It allows all participants, regardless of their background or understanding of the key topic, to start from a common ground.

For example, if you’re conducting a workshop on project management, your pre-note could include an outline of the topics to be covered, such as agile methodology, risk management, or team leadership. You could also include a brief case study for participants to review before the workshop.

The pre-note can also include logistical details such as the workshop date, time, location (or virtual meeting link), any software they need to install (like Zoom or Microsoft Teams), and what they should bring with them (like a laptop or a notebook). By sending a pre-note, you ensure that all participants come prepared and are aligned with the workshop’s objectives right from the start.

Step 6. Bonus Tip: Personalize The Experience

Once you’ve established the framework of your workshop, it’s time to season it with some personal flair. This “secret sauce” could be a unique icebreaker or an element of surprise that sparks laughter and lightens up the atmosphere.

For instance, I’ve often incorporated the pets of my colleagues into the Miro board during my workshops. The sight of familiar furry friends not only brings a smile but also fosters a sense of community and connection within the team.

Conclusion

This was the first half of our journey exploring inclusive remote workshops, where we’ve “peeled back” the layers of their essence and highlighted some critical techniques and approaches to lay the groundwork. Remember, the key to running a successful inclusive workshop is to know your participants well and to create a space where they feel at ease. Take the time to understand them and shape the workshop activities so that they match participants’ personal preferences. The most important bit, perhaps, is that every attendee should feel valued and heard.

In the second part of this two-part series, I will dive deeper into what you can do during and after the workshop in order to better tailor the experience to the workshop attendees, and I will also introduce you to the P.A.R.T.S. principle, which stands for Promote, Acknowledge, Respect, Transparency, and Share.

Types of Resources for Medical Delivery App Developers

I am looking for resources for medical delivery app developers. I am particularly interested in learning about:

  • The different technologies that can be used to develop medical delivery apps.
  • The regulatory requirements for medical delivery apps.
  • The best practices for designing and developing a user-friendly medical delivery app.
  • The challenges and solutions in developing a medical delivery app.

Falling For Oklch: A Love Story Of Color Spaces, Gamuts, And CSS

I woke up one morning in early 2022 and caught an article called “A Whistle-Stop Tour of 4 New CSS Color Features” over at CSS-Tricks.

Wow, what a gas! A new and wider color gamut! New color spaces! New color functions! New syntaxes! It is truly a lot to take in.

Now, I’m no color expert. But I enjoyed adding new gems to my CSS toolbox and made a note to come back to that article later for a deeper read. That, of course, led to a lot of fun rabbit holes that helped put the CSS Color Module Level 4 updates in a better context for me.

That’s where Oklch comes into the picture. It’s a new color space in CSS that, according to experts smarter than me, supports a wider gamut and offers upwards of 50% more color than the sRGB gamut we have worked with for so long.

Color spaces? Gamuts? These are among many color-related terms I’m familiar with but have never really understood. It’s only now that my head is wrapping around these concepts and how they relate back to CSS, and how I use color in my own work.

That’s what I want to share with you. This article is less of a comprehensive “how-to” guide than it is my own personal journey grokking new CSS color features. I actually like to think of this more as a “love story” where I fall for Oklch.

The Deal With Gamuts And Color Spaces

I quickly learned that there’s no way to understand Oklch without at least a working understanding of the difference between gamuts and color spaces. My novice-like brain thinks of them as the same: a spectrum of colors. In fact, my mind goes straight to the color pickers we all know from apps like Figma and Sketch.

I’ve always assumed that gamut is just a nerdier term for the available colors in a color picker and that a color picker is simply a convenient interface for choosing colors in the gamut.

(Assumed. Just. Simply. Three words you never want to see in the same sentence.)

Apparently not. A gamut really boils down to a range of something, which in this case, is a range of colors. That range might be based on a single point if we think of it on a single axis.

Or it might be a range of multiple coordinates like we would see on a two-axis grid. Now the gamut covers a wider range that originates from the center and can point in any direction.

The levels of those ranges can also constitute an axis, which results in some form of 3D space.

sRGB is a gamut with an available range of colors. Display P3 is another gamut offering a wider range of colors.

So, gamuts are ranges, and ranges need a reference to determine the upper and lower limits of those axes. That’s where we start talking about color spaces. A color space is what defines the format for plotting points on the gamut. While more trained folks certainly have more technical explanations, my basic understanding of color spaces is that they provide the map — or perhaps the “shape” — for the gamut and define how color is manipulated in it. So, sRGB is a color gamut that spans a range of colors, and Hex, RGB, and HSL (among others, of course) are the spaces we have to explore the gamut.

That’s why you may hear of one color space as having a “wider” or “narrower” gamut than another — it’s a range of possibilities within a shape.

If I’ve piqued your interest enough, I’ve compiled a list of articles that will give you more thorough definitions of gamuts and color spaces at the end of this article.

Why We Needed New Color Spaces

The short answer is that the sRGB gamut serves as the reference point for color spaces like Hex, RGB, and HSL, and it provides a narrower range of colors than what is available in the newer Display P3 gamut.

We’re well familiar with many sRGB-based color notations and functions in CSS. The values are essentially setting points along the gamut with different types of coordinates.

  /* Hex */ #f8a100
  /* RGB */ rgb(248, 161, 2)
  /* HSL */ hsl(38.79 98% 49%)

For example, the rgb() function is designed to traverse the RGB color space by mixing red, green, and blue values to produce a point along the sRGB gamut.

If the difference between the two ranges in the image above doesn’t strike you as particularly significant or noticeable, that’s fair. I thought they were the same at first. But the Display P3 stripe is indeed a wider and smoother range of colors than the sRGB stripe above it when you examine it up close.

The problem is that Hex, RGB, and HSL (among other existing spaces) only support the sRGB gamut. In other words, they are unable to map colors outside of the range of colors that sRGB offers. That means there’s no way to map them to colors in the Display P3 gamut. The traditional color formats we’ve used for a long time are simply incompatible with the range of colors that has started rolling out in new hardware. We needed a new space to accommodate the colors that new technology is offering us.
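To make that concrete, here is a small sketch (the color values are my own picks, not from the article) using the color() function from the same Level 4 spec, which can address Display P3 directly:

/* the reddest red that sRGB can express */
color: rgb(255 0 0);

/* a more saturated red that only exists in the Display P3 gamut */
color: color(display-p3 1 0 0);

On a wide-gamut screen, the second declaration is visibly redder; on an sRGB-only screen, it simply gets mapped back to something close to the first.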

Dead Grey Zones

I love this term. It accurately describes an issue with the color spaces in the sRGB gamut — greyish areas between two color points. You can see it in the following demo.

Oklch (as well as the other new spaces in the Level 4 spec) doesn’t have that issue. Hues are more like mountains, each with a different elevation.

That’s why we needed new color spaces — to get around those dead grey zones. And we needed new color functions in CSS to produce coordinates on the space to select from the newly available range of colors.
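Here is a quick sketch of the difference, with color stops that are arbitrary picks of mine, using the gradient interpolation syntax from CSS Images Level 4:

/* default sRGB interpolation: the midpoint collapses into muddy grey */
.srgb-gradient { background: linear-gradient(to right, blue, yellow); }

/* interpolating in Oklch: the transition stays vivid the whole way across */
.oklch-gradient { background: linear-gradient(to right in oklch, blue, yellow); }

Halfway between blue rgb(0 0 255) and yellow rgb(255 255 0) in sRGB sits rgb(128 128 128), which is pure grey: exactly the dead zone we want to avoid.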

But there’s a catch. That mountain-shaped gamut of Oklch doesn’t always provide a straight path between color points, which can result in clipped or unexpected colors between them. The issue appears to be case-specific depending on the colors in use, but that also suggests there are situations where using a different color space will yield better gradients.

Consistent Lightness

It’s the uniform treatment of saturation in HSL that muddies the waters and leads to another issue along this same train of thought: inconsistent levels of lightness between colors.

The classic example is showing two colors in HSL with the same lightness value:
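For instance, here is a minimal sketch with colors chosen purely for illustration:

/* both declare the very same 50% lightness... */
.blue   { color: hsl(240 100% 50%); }
.yellow { color: hsl(60 100% 50%); }

Both colors claim 50% lightness, yet the yellow reads far lighter to the eye than the blue. HSL’s lightness is a mathematical value, not a perceptual one.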

The Oklab and Oklch color spaces were created to fix that shift. Blue is more, well, blue because the hues are more consistent in Oklab and Oklch than they are in LAB and LCH.

So, that’s why it’s likely better to use the oklch() and oklab() functions in CSS than it is to use their lch() and lab() counterparts. There’s less of a shift happening in the hues.

So, while Oklch/LCH and Oklab/LAB are built on the same underlying color space, the coordinate systems are the key difference: LAB and Oklab plot colors with Cartesian a and b values, while LCH and Oklch use polar chroma and hue. And I agree with Sitnik and Turner, who make the case that Oklch and LCH are easier to understand than LAB and Oklab. I wouldn’t be able to tell you the difference between LAB’s a and b values on the Cartesian coordinate system. But chroma and hue in LCH and Oklch? Sure! That’s as easy to understand as HSL but better!

The reason I love Oklch over Oklab is that lightness, chroma, and hue are much more intuitive to me than lightness and a pair of Cartesian coordinates.
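Here is what that looks like side by side, in a rough sketch where I approximated the Oklab a and b values from the chroma and hue (a ≈ C·cos(h), b ≈ C·sin(h)):

/* polar coordinates: lightness, chroma, hue */
color: oklch(70.9% 0.195 47.025);

/* roughly the same orange in Cartesian coordinates; quick, what hue is this? */
color: oklab(70.9% 0.133 0.143);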

And the reason I like Oklch better than HSL is because it produces more consistent results over a wider color gamut.

OKLCH And CSS

This is why you’re here, right? What’s so cool about all this is that we can start using Oklch in CSS today — there’s no need to wait around.

“Browser support?” you ask. We’re well covered, friends!

In fact, Firefox 113 shipped support for Oklch a mere ten days before I started writing the first draft of this article. It’s oven fresh!

Using oklch() is a whole lot easier to explain now that we have all the context around color spaces and gamuts and how the new CSS Color Module Level 4 color functions fit into the picture.

I think the most difficult thing for me is working with different ranges of values. For example, hsl() is easy for me to remember because the hue is measured in degrees, and both saturation and lightness use the same 0% to 100% range.

oklch() is different, and that’s by design, not only to access the wider gamut but also to produce perceptually consistent results even as values change. So, while we get what I’m convinced is a way better tool for specifying color in CSS, there is a bit of a learning curve to remembering the chroma value because it’s what separates OKLCH from HSL.

The oklch() Values

Here they are:

  • l: This controls the lightness of the color, and it’s measured in a range of 0% to 100% just like HSL.
  • c: This is the chroma value, measured in decimals between 0 and 0.37.
  • h: This is the same ol’ hue we have in HSL, measured in the same range of 0deg to 360deg.
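
Putting those ranges together, here is a hedged sketch of how nudging one channel at a time changes a color (the values are illustrative):

/* a warm orange: lightness 70.9%, chroma 0.195, hue ~47 */
color: oklch(70.9% 0.195 47.025);

/* lower lightness: same hue and chroma, just darker */
color: oklch(45% 0.195 47.025);

/* lower chroma: same lightness and hue, washed out toward grey */
color: oklch(70.9% 0.05 47.025);

/* rotated hue: same lightness and chroma, now a green */
color: oklch(70.9% 0.195 150);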

Again, it’s chroma that is the biggest learning curve for me. Yes, I had to look it up because I kept seeing it used somewhat synonymously with saturation.

Chroma and saturation are indeed different. And there are way better definitions of them out there than what I can provide. For example, I like how Cameron Chapman explains it:

“Chroma refers to the purity of a color. A hue with high chroma has no black, white, or gray added to it. Conversely, adding white, black, or gray reduces its chroma. It’s similar to saturation but not quite the same. Chroma can be thought of as the brightness of a color in comparison to white.”

— Cameron Chapman

I mentioned that chroma has an upper limit of 0.37. But it’s actually more nuanced than that, as Sitnik and Turner explain:

“[Chroma] goes from 0 (gray) to infinity. In practice, there is actually a limit, but it depends on a screen’s color gamut (P3 colors will have bigger values than sRGB), and each hue has a different maximum chroma. For both P3 and sRGB, the value will always be below 0.37.”

— Andrey Sitnik and Travis Turner

I’m so glad there are smart people out there to help sort this stuff out.

The oklch() Syntax

The formal syntax? Here it is, straight from the spec:

oklch() = oklch( [ <percentage> | <number> | none]
    [ <percentage> | <number> | none]
    [ <hue> | none]
    [ / [<alpha-value> | none] ]? )

Maybe we can “dumb” it down a bit:

oklch( [ lightness ] [ chroma ] [ hue ] )

And those values, again, are measured in different units:

oklch( [ lightness <percentage> ] [ chroma <number> ] [ hue <degrees> ] )

Those units have min and max limits:

oklch( [ lightness <percentage (0%-100%)> ] [ chroma <number (0-0.37)> ] [ hue <degrees (0deg-360deg)> ] )

An example might be the following:

color: oklch(70.9% 0.195 47.025);

Did you notice that there are no commas between values? Or that there is no unit on the hue? That’s thanks to the updated syntax defined in the CSS Color Module Level 4 spec. It also applies to functions in the sRGB gamut:

/* Old Syntax */
hsl(26.06deg, 99%, 51%)

/* New Syntax */
hsl(26.06 99% 51%)

Something else that’s new? There’s no need for a separate function to set alpha transparency! Instead, we can indicate that with a / before the alpha value:

/* Old Syntax */
hsla(26.06deg, 99%, 51%, .75)

/* New Syntax */
hsl(26.06 99% 51% / .75)

That’s why there is no oklcha() function — the new syntax allows oklch() to handle transparency on its own, like a grown-up.
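
Applied to the example color from earlier, that looks like this:

/* 75% opaque, no oklcha() required */
color: oklch(70.9% 0.195 47.025 / .75);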

Providing A Fallback

Yeah, it’s probably worth providing a fallback value for oklch() even if it does enjoy great browser support. Maybe you have to support a legacy browser like IE, or perhaps the user’s monitor or screen simply doesn’t support colors in the Display P3 gamut.

Providing a fallback doesn’t have to be hard:

color: hsl(26.06 99% 51%);
color: oklch(70.9% 0.195 47.025);

There are “smarter” ways to provide a fallback, like, say, using @supports:

.some-class {
  color: hsl(26.06 99% 51%);
}

@supports (color: oklch(100% 0 0)) {
  .some-class {
    color: oklch(70.9% 0.195 47.025);
  }
}

Or detecting Display P3 support on the @media side of things:

.some-class {
  color: hsl(26.06 99% 51%);
}

@media (color-gamut: p3) {
  .some-class {
    color: oklch(70.9% 0.195 47.025);
  }
}

Those all seem overly verbose compared to letting the cascade do the work. Maybe there’s a good reason for using media queries that I’m overlooking.

There’s A Polyfill

Of course, there’s one! There are two, in fact, that I am aware of: postcss-oklab-function and color.js. The PostCSS plugin converts oklch() and oklab() values to fallback colors when compiling to CSS. Alternatively, color.js can convert them on the client side.

That’s Oklch 🥰

O, Oklch! How much do I love thee? Let me count the ways:

  • You support a wider gamut of colors that make my designs pop.
  • Your space transitions between colors smoothly, like soft butter.
  • You are as easy to understand as my former love, HSL.
  • You are well-supported by all the major browsers.
  • You provide fallbacks for handling legacy browsers that will never have the pleasure of knowing you.

I know, I know. Get a room, right?!

Resources

Make WordPress PDFing Simple, Easy, Fast & Flexible With Forminator’s PDF Generator Addon

Meet Forminator’s powerful PDF Generator Addon…the simplest, easiest and most automated way to create, edit, and send out form-submitted PDFs without leaving your WordPress dashboard!

Forminator plugin users spoke to us about the challenges they face creating and sending out form-generated PDFs on the fly that seamlessly integrate with their business processes.

For example:

  • “I would like to send a PDF of our forms with email notifications using Forminator. But I don’t want to use the E2PDF method because it’s too limited for us.”
  • “We need to create a form for our user, and generate a PDF after they write on it, and give them the possibility to pay.”
  • “Does anyone know how I can generate a PDF from a form submission like Gravity PDF?”

Forminator users, we heard you!

Forminator Pro gives you the ability to integrate, create, generate, and automate PDFs using our nifty PDF generator addon!

Install with just one click and say goodbye to limited free 3rd-party plugins, costly upgrades, and unnecessary integrations!

In this post, we’ll cover the following areas:

PDF Generator Addon – Key Features

Forminator’s PDF Generator Addon is built to make it easy for any user, regardless of their technical level, to create and customize a PDF file from a form submission. Here are some of its key features:

Easier PDF Generation

“I am working on a free course for artists who want to start their own websites. They fill out a form and then get a PDF download of their answers. This will serve as a ‘Scope of Work’ for their project.”

Forminator’s PDF Builder uses the same intuitively easy-to-use drag and drop visual interface as the Form builder, providing a seamless user experience with no additional learning curve required.

In fact, the PDF creation option is part of the Form Builder, so it only takes a couple of clicks to create a PDF file.

Customizable PDFs

Forminator gives users a high degree of flexibility by making it easy not only to customize the PDF form’s structure and layout using its form builder, but also to customize PDF content using the Rich Text field, add additional form fields, and insert field tags (see the “How to Use” section below).

Autogenerated PDFs

PDFs can be autogenerated from your existing form structure and form fields, so you don’t need to create your PDF from scratch.

However, Forminator is flexible enough that if you want to design your PDF from scratch, you can.

Attach PDFs to Emails

“It would be great if PDFs could be created of the form submissions and could be attached and sent over emails.”

You can automatically send customized email notifications with PDF attachments to admins and visitors (see the “How to Use” section below).

Downloadable PDFs

Download the PDFs of the form submissions on the Submissions page.

Unlimited PDFs

No limits on usage of fields, number of pages, or number of PDFs.

PDFs and More PDFs

Create multiple PDFs on the same form.

PDF Templates (Coming Soon)

Generate PDF files for payment receipts, invoices, and quotations in seconds with easy-to-use pre-designed templates.

We also have loads more features coming soon (e.g. payment and quotation fields, more settings to customize PDF form appearance with colors and fonts, allowing form submitters to download PDFs after submission, etc.), so watch this space!

How to Use Forminator’s PDF Generator Addon

As mentioned earlier, one of the key features of Forminator’s PDF Generation Addon is that it works just like the plugin’s Form builder, so once you’ve installed it, configuring your PDF forms is so easy.

Note: This is a Pro feature, so make sure you have Forminator Pro installed, or consider becoming a WPMU DEV member if you are currently using our free Forminator plugin.

Creating PDFs

To create PDFs, first make sure to install the addon. You can do this from your WPMU DEV Dashboard plugin, or by going to Forminator Pro > Add-ons.

Forminator Pro Add-ons screen.
Install the PDF Generator from Forminator Pro’s Add-Ons section.

Note: To use the PDF Generator Addon, make sure that you have created at least one form on your site. Remember, too, that you can generate multiple PDF files for the same form.

Once the add-on has been installed and activated, edit the form you want to attach a PDF to, and in the Edit Form > PDF section, click on Create New PDF.

Forminator PDF Generation Screen
Create a new PDF in Forminator’s Edit Form > PDF section.

Give your new PDF a filename and click the + Create button.

Forminator - PDF Filename Modal
Give your PDF a name for internal identification purposes.

Next, choose a template for your PDF. Note: As we develop this feature further, we’ll be adding all kinds of new templates to this section for generating PDF receipts, quotations, etc.

After selecting your template, click the Continue button.

Forminator PDF Templates
Forminator Pro users can choose from a range of professionally designed PDF templates.

The Preload PDF Content modal gives you the choice of preloading form fields into your new PDF file, or creating your PDF from scratch.

Choose an option and click the Continue button to proceed.

Forminator - Preload PDF Content
Forminator gives you the choice of preloading form fields or starting with a blank file.

Once your PDF file is created, you can edit it or continue the setup process.

Forminator PDF Modal
Once your PDF file is created, you can edit it or continue building your form.

If you selected the Preload Form Fields in PDF File option, the fields in your form will load in your PDF file.

Editing PDFs

While the Page Header and Page Footer elements are static and cannot be moved, you can edit the settings and style for all fields by clicking on the gear icon to the right of the fields.

You can also rearrange non-static fields using drag and drop to fully customize the layout of your PDF.

Forminator - Edit PDF Fields
Insert, edit, and preview your form fields.

As well as preloading form fields, you can insert additional fields to add custom text and labels, add page breaks to create multipage PDFs, insert payment and quotation fields, and more.

Forminator - Add PDF Fields
There are many PDF form field options to choose from.

Note: To add custom text in your forms, use the Rich Text field. Use either a label for the field, or hide the label and add your own text with formatting options like bold, italics, bullet points, and hyperlinks.

You can also insert form fields into the text area to create a customized PDF template that will autopopulate your form details when generated.

Forminator - Rich Text PDF Field
Use the Rich Text field to format and style your form field content.

Additionally, you can adjust the appearance of your PDFs using the appearance options, which let you control how your PDFs look and how they are laid out.

Forminator - Edit PDF Appearance

The Page settings section lets you set the page size from a dropdown menu, with the recommended default being A4. The default page margin is 30 pixels, and you can change this under the Custom tab.

You can also enable the RTL (Right-to-Left) option to output your PDF in languages like Arabic, Hebrew, Farsi, Urdu, etc., and if you’re familiar with CSS, you can use the Custom CSS option to further customize your PDF. Many selectors are included to help you, and if you need further assistance, make sure to contact our 24/7 Live Support team.
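
To give a flavor of what is possible, here is a purely illustrative sketch. The generic element selectors below are my own assumption, not Forminator’s documented selector list:

/* hypothetical styling for a generated PDF; selectors are placeholders */
body { font-family: Georgia, serif; }
h1 { color: #1a1a6e; }
p { line-height: 1.5; }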

After creating or editing your PDF, you can save it as a Draft to continue working on it at a later time. You can also preview, edit, or delete it, and publish or unpublish it.

Forminator - Edit Form screen.
A Forminator Form with a Forminator generated PDF.

Emailing PDFs

With Forminator’s PDF Generator Addon, attaching PDFs to emails is really simple and easy.

After creating your PDF form, go to Forminator > Edit Form > Email Notifications and select the PDF file(s) to attach to the email notification you have set up.

Forminator - Edit Form - Email Notifications
Select one or more PDFs to attach to the email.

Note: You can also set up conditional email rules to automatically send specific PDFs to specific users.

Forminator - Add Email Notification - Conditions.
Use the power of conditional emails to send PDFs to specific users.

Downloading PDFs

You can download the PDFs of form submissions from the Submissions page for forms with PDF templates. There are no restrictions on the number of PDFs you can download.

If you have more than one PDF template available for a single form, you can download the form submission PDF for each template separately or the PDFs of all the templates as a zip file.

Download PDFs
Download PDFs for all submission forms.

For full details on using the PDF Generator Addon and all of its features, refer to our Forminator documentation.

With Forminator Pro, You Can’t Go PDF’ing Wrong!

Forminator Pro’s new PDF generator allows you to generate an unlimited number of PDFs with your forms and form submissions, customize, edit, and style PDF templates, and a whole lot more.

If you are a WPMU DEV member, there is nothing else you need to purchase to start generating professional PDFs. Simply install the addon in Forminator, tweak the appearance and settings in your forms, and you’re all good to go.

If you’re not a member yet, consider choosing one of our risk-free membership options (Pro or Agency). You’ll not only get all of our Pro plugins, you’ll also get access to everything else you need to use PDFs effectively, including site management, client report, and client billing tools, white label and reseller options, 24/7 expert support on all areas related to WordPress, CSS, hosting, etc, and so much more!

Start using Forminator’s PDF Generator Addon today…it’s PDF’ing great!

[Editor’s note: This post was originally published in August 2023 and updated in April 2024 for accuracy.]
