Conducting Accessibility Research In An Inaccessible Ecosystem

Ensuring technology is accessible and inclusive relies heavily on receiving feedback directly from disabled users. You cannot rely solely on checklists, guidelines, and good-faith guesses to get things right. This is often hindered, however, by a lack of accessible prototypes available to use during testing.

Rather than wait for the digital landscape to change, researchers should leverage every available tool to create and replicate the testing environments they need to get this important research completed. Without it, the technology landscape will remain largely inaccessible and exclusionary, and it will never be disrupted.

Note: I use “identity first” disability language (as in “disabled people”) rather than “people first” language (as in “people with disabilities”). Identity first language aligns with disability advocates who see disability as a human trait description or even community and not a subject to be avoided or shamed. For more, review “Writing Respectfully: Person-First and Identity-First Language”.

Accessibility-focused Research In All Phases

When people advocate that UX Research should include disabled participants, it’s often with the mindset that this will happen on the final product once development is complete. One primary reason is that this is when researchers have access to the most accessible artifact with which to run the study. However,

The real ability to ensure an accessible and inclusive system is not by evaluating a final product at the end of a project; it’s by assessing user needs at the start and then evaluating the iterative prototypes along the way.

Prototype Research Should Include Disabled Participants

In general, the iterative prototype phase of a project is when teams explore various design options and make decisions that will influence the final project outcome. Gathering feedback from representative users during this phase can help teams make informed decisions, including key pivots before significant development and testing resources are used.

During the prototype phase of user testing, the representative users should include disabled participants. By collecting feedback and perspectives of people with a variety of disabilities in early design testing phases, teams can more thoughtfully incorporate key considerations and supplement accessibility guidelines with real-world feedback. This early-and-often approach is the best way to include accessibility and inclusivity into a process and ensure a more accessible final product.

If you instead wait to include disabled participants in research until a product is near final, this inevitably leads to patchwork fixes for any critical feedback. Feedback not deemed critical will likely be “backlogged,” where item priorities compete with new feature updates. With this approach, you’ll constantly be playing catch-up rather than getting it right up front in an elegant, integrated way.

Accessibility Research Can’t Wait Until The End

Not only does research with disabled participants often occur too late in a project, but it is also far too often viewed as separate from other research studies (sometimes referred to as the “main research”). It cannot be overstated that this reinforces a notion of separate-and-not-equal as compared to non-disabled participants and other stakeholder feedback. This has a severe negative impact on how a team will view the priority of inclusive design and, more broadly, the value of disabled people. That is, it reinforces “ableism,” the devaluing of disabled people in society.

UX Research with diverse participants that include a wide variety of disabilities can go a long way in dismantling ableist views and creating vitally needed inclusive technology.

The problem is that even when a team is on board with the idea, it’s not always easy to do inclusive research, particularly when involving prototypes. While discovery research can be conducted with minimal tooling and summative research can leverage fully built and accessible systems, prototype research quickly reveals severe accessibility barriers that feel like they can’t be overcome.

Inaccessible Technology Impedes Accessibility Research

Most technology we use has accessibility barriers for users with disabilities. As an example, the WebAIM Million report consistently finds that 96% of web homepages have accessibility errors that are fixable and preventable.

Just like websites, web and mobile applications are similarly inaccessible, including the tools that produce early-stage prototypes. Thus, the artifacts researchers might want to use for prototype testing to help create accessible products are themselves inaccessible, creating a barrier for disabled research participants. It quickly becomes a vicious cycle that seems hard to break.

The Limitations Of Figma

Currently, the most popular industry tool for initial prototyping is Figma. These files become the artifacts researchers use to conduct a research study. However, these files often fall short of being accessible enough for many participants with disabilities.

To be clear, I absolutely applaud the Figma employees who have worked very hard on including screen reader support and keyboard functionality in Figma prototypes. This represents significant progress towards removing accessibility barriers in our core products and should not be overlooked. Nevertheless, there are still limitations and even blockers to research.

For one, the Figma files must be created in a way that will mimic the website layout and code. For example, for screen reader navigation to be successful, the elements need to be in their correct reading order in the Layers panel (not solely look correct visually), include labeled elements such as buttons (not solely items styled to look like buttons), and include alternative text for images. Often, however, designers do not build iterative prototypes with these considerations in mind, which prevents the keyboard from navigating correctly and the screen reader from providing the necessary details to comprehend the page.
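These Figma requirements mirror what screen readers need from a finished web page: real button elements (not divs styled to look like buttons), alternative text on images, and a logical source order. As a rough illustration only (a hypothetical sketch, not a real audit tool; the sample markup is invented), two of these checks can be expressed with Python’s standard-library HTML parser:

```python
from html.parser import HTMLParser

class AccessibilityLint(HTMLParser):
    """Flags two common issues: images without alt text and
    div elements wired up to act like buttons."""
    def __init__(self):
        super().__init__()
        self.issues = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            self.issues.append("img missing alt attribute")
        if tag == "div" and "onclick" in attrs:
            self.issues.append("div used as a button (use <button> instead)")

# Invented sample markup containing one of each issue.
page = """
<div onclick="submitForm()">Submit</div>
<img src="hero.png">
<button>Cancel</button>
<img src="logo.png" alt="Acme logo">
"""

linter = AccessibilityLint()
linter.feed(page)
for issue in linter.issues:
    print(issue)
```

Real tools such as WAVE perform far deeper checks, but even a sketch like this shows why “looks correct visually” is not enough: the structure underneath has to carry the meaning.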

In addition, Figma’s prototypes do not have selectable, configurable text. This prevents key visual adjustments, such as browser zoom to increase text size, dark mode (which is easier for some to view), and selecting text to have it read aloud. If a participant needs these kinds of adjustments (or others listed in the table below), a Figma prototype will not be accessible to them.

Table: Figma prototype limitations per assistive technology

  • Keyboard-only navigation (Mobility): must use proper element types (such as button or input) in the expected page order to ensure operability.
  • Screen reader (Vision): must include structure to ensure readability, including elements in logical reading order, alternative text for images, and descriptive names for buttons.
  • Dark mode / high contrast mode (Low vision, Neurodiversity): not available.
  • Browser zoom (Low vision, Neurodiversity, Mobility): not available.
  • Screen reader used with mouse hover; read-aloud software with text selection (Vision, Neurodiversity): cannot be used.
  • Voice control; switch control device (Mobility): cannot be used.

Inclusive Research Is Needed Regardless

Having accessibility challenges with a prototype doesn’t mean we give up on the research. Instead, it means we need to get creative in our approach. This research is too important to keep waiting for the ideal set-up, particularly when our findings are often precisely what’s needed to create accessible technology.

Part of crafting a research study is determining what artifact to use during the study. Thus, when considering prototype research, it is a matter of creating the artifact best suited for your study. If this isn’t going to be, say, a Figma file you receive from designers, then consider what else can be used to get the job done.

Working Around the Current State

Being able to include diverse perspectives from disabled research participants throughout a project’s creation is possible and necessary. Keeping in mind your research questions and the capabilities of your participants, there are research methods and strategies that can be made accessible to gather authentic feedback during the critical prototype design phase.

With that in mind, I propose five ways you can accomplish prototype research while working around inaccessible prototypes:

  1. Use a survey.
  2. Conduct a co-design session.
  3. Test with a similar system.
  4. Build your own rapid prototype.
  5. Use the Wizard of Oz method.

Use a Survey Instead

Not all research questions at this phase need a full working prototype to be answered, particularly if they are about the general product features or product wording and not the visual design. Oftentimes, a survey tool or similar type of evaluation can be just as effective.

For example, you can confirm a site’s navigation options are intuitive by describing a scenario with a list of navigation choices while also testing if key content is understandable by confirming the user’s next steps based on a passage of text.

Image description:

Acme Company Website Survey

Complete this questionnaire to help us determine if our site will be understandable.

  1. Scenario: You want to find out this organization's mission statement. Which menu option do you choose?
    [List of radio buttons]
    • Home
    • About
    • Resources
    • Find an Office
    • Search
  2. The following describes directions for applying to our grant. After reading, answer the following question:

    The Council’s Grant serves to advance Acme's goals by sponsoring community events. In determining whether to fund an event, the Council also considers factors including, but not limited to:
    • Target audiences
    • Alignment with the Council’s goals and objectives
    • Evaluations measuring participant satisfaction
    To apply, download the form below.

    Based on this wording, what would you include in your grant application?
    [Input Field]

Just be sure you build a WCAG-compliant survey that includes accessible form layouts and question types. This will ensure participants can navigate using their assistive technologies. For example, Qualtrics has a specific form layout that is built to be accessible, or check out these accessibility tips for Google Forms. If sharing a document, note features that will enhance accessibility, such as using the ribbon for styling in Microsoft Word.

Tip: To find accessibility documentation for the software you’re using, search for the product name plus the word “accessibility” in your favorite search engine.

Conduct Co-design Sessions

The prototyping phase might be a good time to utilize co-design and participatory design methods. With these methods, you can co-create designs with participants using any variety of artifacts that match the capabilities of your participants along with your research goals. The feedback can range from high-level workflows to specific visual designs, and you can guide the conversation with mock-ups, equivalent systems, or more creative artifacts such as storyboards that illustrate a scenario for user reaction.

For the prototype artifacts, these can range from low- to high-fidelity. For instance, participants without mobility or vision impairments can use paper-and-pencil sketching or whiteboarding. People with somewhat limited mobility may prefer a tablet-based drawing tool, such as using an Apple pencil with an iPad. Participants with visual impairments may prefer more 3-dimensional tools such as craft supplies, modeling clay, and/or cardboard. Or you may find that simply working on a collaborative online document offers the best accessibility as users can engage with their personalized assistive technology to jot down ideas.

Notably, many artifact types will work well across differing user groups. In fact, rather than limiting the artifacts, try to offer a variety of ways to provide feedback by default. This helps participants feel more empowered and engaged by the activity while reassuring them that you have created an inclusive environment. If you’re not sure which options to include, confirm what methods will work best as you recruit participants. That is, as you describe the primary activity during sign-up, you can ask whether the materials you have will be operable for the participant or let them tell you what they prefer to use.

The discussion you have and any supplemental artifacts you use then depend on communication styles. For example, deaf participants may need sign language interpreters to communicate their views but will be able to see sample systems, while blind participants will need descriptions of key visual information to give feedback. The actual study facilitation comes down to who you are recruiting and what level of feedback you are seeking; from there, you can work through the accommodations that will work best.

I conducted two co-design sessions at two different project phases while exploring how to create a wearable blind pedestrian navigation device. Early in the project, when we were generally talking about the feature set, we brought in several low-fidelity supplies, including a Braille label maker, cardboard, clay, Velcro, clipboards, tape, paper, and pipe cleaners. Based on user feedback, I fashioned a clipboard hanging from pipe cleaners as one prototype.

Later in the project when we were discussing the size and weight, we taped together Arduino hardware pieces representing the features identified by the participants. Both outcomes are pictured below and featured in a paper entitled, “What Not to Wearable: Using Participatory Workshops to Explore Wearable Device Form Factors for Blind Users.”

Ultimately, the benefit of this type of study is the participant-led feedback. In this way, participants are giving unfiltered feedback that is less influenced by designers, which may lead to more thoughtful design in the end.

Test With an Equivalent System

Very few projects are completely new creations, and often, teams use an existing site or application for project inspiration. Consider using similar existing systems and equivalent scenarios for your testing instead of creating a prototype.

By using an existing live system, participants can then use their assistive technology and adaptive techniques, which can make the study more accessible and authentic. Also, the study findings can range from the desirability of the available product features to the accessibility and usability of individual page elements. These lessons can then inform what design and code decisions to make in your system.

One caveat is to be aware of any accessibility barriers in that existing system. Particularly for websites and web applications, you can look for accessibility documentation to determine if the company has reported any WCAG-conformance accessibility efforts, use tools like WAVE to test the system yourself, and/or mimic how your participants will use the system with their assistive technology. If there are workarounds for what you find, you may be able to avoid certain parts of the application or help users navigate past the inaccessible parts. However, if the site is going to be completely unusable for your participants, this won’t be a viable option for you.

If the system is usable enough for your testing, however, you can take the testing a step further by making updates on the fly if you or someone you collaborate with has engineering experience. For example, you can manipulate a website’s code with developer tools to add, subtract, or change the elements and styling on a page in real-time. (See “About browser developer tools”.) This can further enhance the feedback you give to your teams as it may more closely match your team’s intended design.

Build a Rapid Website Prototype

Notably, when conducting research focused on physical devices and hardware, you will not face the same accessibility obstacles as with websites and web applications. You can use a variety of materials to create your prototypes, from cardboard to fabric to 3D printed material. I’ve sewn haptic vibration modules to a makeshift leather bracelet when working with wearables, for instance.

However, for web testing, it may be necessary to build a rapid prototype, especially to work around inaccessible artifacts such as a Figma file. This involves using a site builder that allows you to quickly create a replica of your team’s website. To create an accessible website, you’ll need a site builder with accessibility features and capabilities; I recommend WordPress, Squarespace, Webflow, and Google Sites.

I recently used Google Sites to create a replica of a client’s draft pages in a matter of hours. I was adamant we should include disabled participants in feedback loops early and often, and this included after a round of significant visual design and content decisions. The web agency building the client’s site used Figma but not with the required formatting to use the built-in screen reader functionality. Rather than leave out blind user feedback at such a crucial time in the project, I started with a similar Google Sites template, took a best guess at how to structure the elements such as headings, recreated the anticipated column and card layouts as best I could, and used placeholder images with projected alt text instead of their custom graphics.

The screen reader testing turned into an impromptu co-design session because I could make changes in-the-moment to the live site for the participant to immediately test out. For example, we determined that some places where I used headings were not necessary, and we talked about image alt text in detail. I was able to add specific design and code feedback to my report, as well as share the live site (and corresponding code) with the team for comparison.

The downside to my prototype was that I couldn’t create the exact 1-to-1 visual design to use when testing with the other disabled participants who were sighted. I wanted to gather feedback on colors, fonts, and wording, so I also recruited low vision and neurodiverse participants for the study. However, my data was skewed because those participants couldn’t make the visual adjustments they needed to fully take in the content, such as recoloring, resizing, and having text read aloud. This was unfortunate, but we at least used the prototype to spark discussions of what does make a page accessible for them.

You may find you are limited in how closely you can replicate the design based on the tools you use or a lack of access to developer assistance. When facing these limitations, consider what is most important to evaluate and whether a pared-down version of the site will still give you valuable feedback over no site at all.

Use Wizard of Oz

The Wizard of Oz (WoZ) research method involves the facilitators mimicking system interactions in place of a fully working system. With WoZ, you can create your system’s approximate functionality using equivalent accessible tools and processes.

As an example, I’ll refer you to the talk by an Ally Financial research team that used this method for participants who used screen readers. They pre-programmed screen reader prompts into a clickable spreadsheet and had participants describe aloud what keyboard actions they would take to then trigger the corresponding prompt. While not the ideal set-up for the participants or researchers, it at least brought screen reader user feedback (and recognition of the users themselves) to the early design phases of their work. For more, review their detailed talk “Removing bias with wizard of oz screen reader usability testing”.

This isn’t just limited to screen reader testing, however. In fact, I’ve also often used Wizard of Oz for Voice User Interface (VUI) design. For instance, when I helped create an Alexa “skill” (their name for an app on Amazon speech-enabled devices), our prototype wouldn’t be ready in time for user testing. So, I drafted an idea to use a Bluetooth speaker to announce prompts from a clickable spreadsheet instead. When participants spoke a command to the speaker (thinking it was an Alexa device), the facilitator would select the appropriate pre-recorded prompt or a generic “I don’t understand” message.
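At its core, a WoZ control surface like this is just a lookup from expected participant commands to pre-recorded prompts, with a generic fallback for everything else. A minimal sketch of that logic (all commands and prompt wording here are invented for illustration):

```python
# Hypothetical Wizard-of-Oz prompt board: the facilitator triggers
# the pre-recorded prompt that matches what the participant just said.
PROMPTS = {
    "what's my balance": "Your balance is two hundred dollars.",
    "pay my bill": "Which bill would you like to pay?",
    "open my account": "Opening your account summary.",
}
FALLBACK = "Sorry, I don't understand. Please try again."

def wizard_respond(utterance: str) -> str:
    """Return the prompt the facilitator should play for this utterance."""
    return PROMPTS.get(utterance.strip().lower(), FALLBACK)

print(wizard_respond("What's my balance"))  # a scripted prompt
print(wizard_respond("Sing me a song"))     # the generic fallback
```

In a real session, the “play” step might be a clickable spreadsheet cell or a Bluetooth speaker, as described above; the lookup-plus-fallback structure stays the same.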

Any system can be mimicked when you break down its parts and pieces and think about the ultimate interaction for the user. Creating WoZ set-ups can take creativity and even significant time to put together, but the outcomes can be worth it, particularly for longer-term projects. Once the main pieces are created, the prototype set-up can be edited and reused indefinitely, including during the study or between participants. Also, the investment in an easily edited prototype pays off exponentially if it uncovers something prior to finishing the entire product. In fact, that’s the main goal of this phase of testing: to help teams know what to look out for before they go through the hard work of finishing the product.

Inclusive Research Can No Longer Wait

Much has been documented about inclusive design to help teams craft technology for the widest possible audience. From the Web Content Accessibility Guidelines that help define what it means to be accessible to the Microsoft Inclusive Design Toolkits that tell the human stories behind the guidelines, there is much to learn even before a product begins.

However, the best approach is with direct user feedback. With this, we must recognize the conundrum many researchers are facing: We want to include disabled participants in UX research prior to a product being complete, but often, prototypes we have available for testing are inaccessible. This means testing with something that is essentially broken and will negatively impact our findings.

While it may feel like researchers will always be at a disadvantage if we don’t have the tools we need for testing, I think, instead, it’s time for us to push back. I propose we do this on two fronts:

  1. We make the research work as best we can in the current state.
  2. We advocate for the tools we need to make this more streamlined.

The key is to get disabled perspectives on the record and in the dataset of team members making the decisions. By doing this, hopefully, we shift the culture to wanting and valuing this feedback and bringing awareness to what it takes to make it happen.

Ideally, the awareness raised from our bootstrap efforts will lead to more people helping reduce the current prototype barriers. For some of us, this means urging companies to prioritize accessibility features in their roadmaps. For those working within influential prototype companies, it can mean getting much-needed backing to innovate better in this area.

The current state of our inaccessible digital ecosystem can sometimes feel like an entanglement too big to unravel. However, we must remain steadfast and insist that this does not remain the status quo; disabled users are users, and their diverse and invaluable perspectives must be a part of our research outcomes at all phases.

Using AI For Neurodiversity And Building Inclusive Tools

In 1998, Judy Singer, an Australian sociologist whose work drew on the concept of biodiversity, coined the term “neurodiversity.” Every individual is unique, but this uniqueness is sometimes considered a deficit in the eyes of neurotypicals because it is uncommon. Neurodiversity, by contrast, means embracing these unique ways of thinking, behaving, and learning.

Humans have an innate drive to classify things to make them simpler to understand, so neurodivergence gets classified as something different, making it much harder to accept as normal.

“Why not propose that just as biodiversity is essential to ecosystem stability, so neurodiversity may be essential for cultural stability?”

— Judy Singer

Culture is more abstract than biodiversity: it involves values, thoughts, expectations, roles, customs, social acceptance, and so on, so things get tricky.

Discoveries and inventions are driven by personal motivation. Judy Singer began exploring the concept of neurodiversity because her daughter was diagnosed with autism. Autistic individuals may find social interaction difficult yet be deeply passionate about particular things in their lives. Like Judy, we have a moral obligation as designers to create products everyone can use, including these unique individuals. With the advancement of technology, inclusivity has become far more important; it should be a priority for every company.

As AI becomes increasingly entangled in our technology, we should also consider how being more inclusive will help, especially given how many people are neurodivergent. AI allows us to design affordable, adaptable, and supportive products. It makes normalizing neurodivergence far easier and can help build personalized tools, reminders, alerts, and adaptable language.

We need to remember that these changes should not be made only for neurodiverse individuals; it would help everyone. Even neurotypicals have different ways of grasping information; some are kinesthetic learners, and others are auditory or visual.

Diverse thinking is just a different way of approaching and solving problems. Remember, many great minds are neurodiverse. Alan Turing, who cracked the Enigma code, is widely believed to have been autistic; he also laid the theoretical groundwork for machine intelligence. Steve Jobs, Apple co-founder and design-thinking pioneer, had dyslexia. Emma Watson, famously known for her role as Hermione Granger in the Harry Potter series, has Attention-Deficit/Hyperactivity Disorder (ADHD). There are many more innovators and disruptors out there who are different.

Neurodivergence is a non-medical umbrella term used to describe brain function, behavior, and processing that differ from what is considered typical. Let’s also keep in mind that these examples and interpretations are meant to shed some light on an often-neglected topic. They should remind us to invest further and investigate how we can put this rapidly growing technology to work for this group as we try to normalize neurodiversity.

Types Of Neurodiversities

  • Autism: Autism spectrum disorder (ASD) is a neurological and developmental disorder that affects how people interact with others, communicate, learn, and behave.
  • Learning Disabilities: Differences that affect how a person learns; common examples include dyslexia (reading), dysgraphia (writing), and dyscalculia (math).
  • Attention-Deficit/Hyperactivity Disorder (ADHD): An ongoing pattern of inattention and/or hyperactivity-impulsivity that interferes with functioning or development.

Making AI Technology More Neuro-inclusive

Artificial Intelligence (AI) enables machines to think and perform tasks. This “thinking,” however, is based on algorithmic logic trained on many examples, books, and other information, which AI uses to generate its output. The structure AI uses to process information, called a neural network, is loosely inspired by our brains, so its data processing resembles how we work through a problem.

We do not need to do anything special for neurodiversity, which is the beauty of AI technology in its current state. Everything already exists; it is how the technology is used that needs to change.

There are many ways we could improve it. Let’s look at four ways that are crucial to get us started.

Workflow Improvements

For: Autistic and ADHD
Focus: Working memory

Gartner found that 80% of executives think automation can be applied to any business decision. Businesses have realized that a tactical approach to AI is less successful than a strategic one; for example, AI can support business decisions that would otherwise require extensive manual research.

AI has played a massive role in automating various tasks so far and will continue to do so; it helps users reduce the time they spend on repetitive aspects of their jobs, freeing them to focus their efforts on things that matter. Mundane tasks stack up in working memory, and there is a limit: humans can hold roughly three to five ideas simultaneously. With more than five ideas at play, people are bound to forget or miss something unless they document it. Completing these typical but necessary tasks becomes time-consuming and frustrating, making it harder for users to focus on their work. This is especially troublesome for neurodivergent employees.

Autistic and ADHD users might have difficulty following through or focusing on aspects of their work, especially if it does not interest them. Straying thoughts are not uncommon, which makes it even harder to concentrate. Autistic individuals can become hyper-focused, which may prevent them from grasping other relevant information. ADHD users, by contrast, may lose focus quickly because their attention span is limited, so their working memory takes a toll.

AI could identify this and help users overcome it. Improving and automating the workflow will allow them to focus on the critical tasks. It means fewer distractions and more direction. Since they have trouble with working memory, a tool that captures key moments for later recall would benefit them greatly.

Example That Can Be Improved

Zoom recently launched its AI Companion. When a user joins a meeting as the host, they can use this tool for various actions, one of which is summarizing the meeting: it auto-generates meeting notes at the end and shares them. AI Companion is an excellent feature for automating meeting notes, freeing all participants from note-taking.

Opportunity: Along with the auto-generated notes, Zoom should allow users to take notes in-app and include them in the summaries. Sometimes users have tangential thoughts or ideas worth capturing, and in-app notes would let them do so. It should also allow users to choose the type of summary they want (e.g., short, simplified, or a list), giving them more control. AI could also personalize this content so participants can comprehend it in their own way. Autistic users would benefit from being able to stay hyper-focused on the meeting itself, while ADHD users could still capture stray thoughts for the AI to fold into the notes. Big corporations are usually more traditional, favoring incremental improvements; small tech companies have less to lose, so we often see innovation there.

Neurodivergent Friendly Example

Fireflies.ai is an excellent example of how neuro-inclusivity can be considered, and it covers the bases Zoom falls short on. It auto-generates meeting notes and also lets participants take notes, which are appended to the auto-generated summary; the summary can be a bullet list or a paragraph. The tool can also transcribe from a shared slide deck within the summary, and it shares audio snippets of important points alongside the transcription. As a result, the product supports neurodivergent users far better.

Natural Language Processing

For: Autistic, Learning Disabilities, and ADHD
Focus: Use simple words and give emotional assistance

Words carry different meanings for different people. Some understand figurative language, while others may be put off by it. If this is common among neurotypicals, imagine how tricky it can be for neurodivergent people. Autistic users can have difficulty understanding metaphorical language and empathizing with others. People with learning disabilities may struggle with language, especially figurative language, which can perplex them. ADHD users have a short attention span, and complex sentences can quickly lose their interest.

Simple language serves neurodivergent users far better than complex sentence constructions. Metaphors, jargon, or anecdotal information can be challenging to interpret and frustrating, and that frustration can deter them from pursuing things they perceive as complex. Giving them a way to understand and grow motivates them to tackle complexity confidently. AI could help multifold by breaking down complex text into straightforward language.

Example That Can Be Improved

Grammarly is a great tool for correcting and recommending language changes. It applies grammatical and Grammarly-defined rules when making recommendations. It also has a feature that lets users select a tone of voice or goal, such as a casual or academic style, shaping the writing to match expectations. Grammarly also lets organizations define style guides, which can help users write to their organization's standards.

Opportunity: Grammarly has yet to implement generative AI assistive technology, but that might change in the future. Large language models (LLMs) could further convert text into inclusive language that considers cultural and regional relevance. Most presets are limited to the rules Grammarly or the organization has defined. Sentiment analysis is also not yet part of those rules; for example, if a write-up is meant to be negative, the app still recommends changing it to be positive.

Neurodivergent Friendly Example

Writer is another excellent product that helps users follow guidelines established by the organization as well as grammatical rules. It provides various ways to rewrite sentences, e.g., simplify, polish, shorten, and so on. Writer also assists with sentence reconstruction and recommendations based on the type of content being written, for instance, an error message or a tooltip. With those features, and many more on its generative AI list, Writer can serve neurodivergent users better.

Cognitive Assistance

For: Autistic, Learning Disabilities, and ADHD
Focus: Suggestive technology

The UK's Equality Act 2010 was established to bring equality to the workplace, including legislation relevant to neurodiversity. Employers need to understand the additional needs of neurodivergent employees and amend existing policies to accommodate them. The essence of the Equality Act can be translated into actionable digital elements that bring the same equality to product usage.

Cognitive differences exist among neurodivergent and neurotypical people alike; the gap only becomes more significant when we treat the two groups separately. Think about it: all AI assistive technologies are, in effect, cognition supplements.

Cognoassist ran a study to understand cognition across people and found that less than 10% of participants scored within the "typical" range of assessment. This suggests the difference, even where observable, is more superficial than fundamental.

Cognition is not just intelligence but a web of multiple mental processes, irrespective of neural inclination. Neurodivergent cognition is simply a different way of processing and reproducing information than the norm. Nonetheless, neurodivergent users need assistive technologies more than neurotypical users do, because these tools close the gap quickly and allow them to function at the same level, making technology more inclusive.

Example That Can Be Improved

ClickUp is a project management tool that has plenty of automation baked into it. It allows users to automate or customize their daily routine, which helps everyone on the team to focus on their goals. It also lets users connect various productivity and management apps to make it a seamless experience and a one-stop shop for everything they need. The caveat is that the automation is limited to some actions.

Opportunity: Neurodivergent users sometimes need more cognitive assistance than neurotypical users. Initiating and completing tasks can be difficult, and a nudge could help them get started or finish. The tool could also help them with organization, which would benefit them greatly. Autistic individuals often prefer to complete a task in one go, while people with ADHD tend to mix tasks up, getting a needed break from each before refocusing. An intelligent AI system could help by generating a more personalized daily plan and a to-do list to get things started.

Neurodivergent Friendly Example

Motion focuses on planning and scheduling the user's day to support their productivity goals. When users connect their calendars, the tool can schedule meetings with AI, accounting for heads-down time or focused-attention sessions based on each user's requirements. Users can personalize their entire schedule to their liking, and the tool will proactively schedule incoming meetings or make timing recommendations. This AI assistive technology also helps them plan around deadlines.

Adaptive Onboarding

For: Learning Disabilities and ADHD
Focus: Reduce Frustration

According to Epsilon, 80% of consumers want a personalized experience. All of this personalization serves to make the user's workflow easier, and it starts with the first introduction to the product. Onboarding helps users learn about the product, but learning continues long after the initial presentation.

We cannot expect users to remember everything from onboarding when they need assistance later. Over time, if users have a hard time comprehending or completing a task, they get frustrated; this is particularly true for ADHD users. Users with learning disabilities, meanwhile, may not remember every step because the steps are too complex or too numerous.

Adaptive onboarding allows everyone to re-learn when needed, which benefits users more because help is available at the moment it is needed. This type of onboarding could be AI-driven and far more generative, catering to different learning styles through assistive, audio, or video presentation.

Example That Can Be Improved

Product Fruits has a plethora of offerings, including onboarding. It offers personalization and the ability to tailor onboarding flows that introduce the product to new users. Allowing customization of onboarding gives the product team more control over what needs attention. It also provides the capability to track product usage based on the onboarding.

Opportunity: Offering AI interventions for different personas or segments would give the tool an additional layer of experience tailored to individual needs. Imagine a user with ADHD trying to figure out a feature; they will get frustrated if they cannot work out how to use it. What if the tool intuitively nudged them toward completing the task? Similarly, when a task is complex and requires multiple steps, users with learning disabilities have difficulty following and reproducing it.

Neurodivergent Friendly Example

Onboarding does not always need to happen at the start of the product introduction. Users often end up in situations where they need a specific step to complete a task but have difficulty discovering it. In such cases, they usually seek help by asking colleagues or looking it up on the product's help pages.

Chameleon helps by offering features that let users use AI more effectively. Users can ask for help anytime, and the AI will generate answers to help them.

Considerations

All the issues I mentioned are present in everyone; the difference lies in their frequency and intensity between neurotypical and neurodivergent individuals. Everyday things, such as discussions, conclusions, critical thinking, and comprehension, can look vastly different, as if neurodivergent individuals' brains are wired differently. That makes it all the more important to build tools that solve problems for neurodivergent users, which, in turn, solves them for everyone.

It is easy to argue that every human faces these problems. But we tend to forget how much more intense and critical they are for neurodivergent individuals, who cannot simply shrug them off the way neurotypical people, who adapt more quickly, often can. Similarly, AI has to learn and understand the problems it needs to solve; an algorithm can get confused unless it has multiple examples to learn from.

Large language models (LLMs), such as those behind ChatGPT, are trained on vast amounts of data. They are accurate most of the time; however, they sometimes hallucinate and give an inaccurate answer. That can be a considerable problem when no guidelines exist beyond the LLM itself. Hallucination remains possible in most cases, but supplying company guidelines and information helps produce correct results.

It could also mean users become more dependent on AI, and there is no harm in that. When neurodivergent individuals need assistance, a human cannot be present all the time with the patience required at every moment. Being direct is an advantage of AI, which is especially helpful in professional contexts.

Conclusion

Designers should create efficient workflows for neurodivergent users who have difficulty with working memory, comprehending complex language, learning intricate details, and so on. AI could help by providing cognitive assistance and adaptive technologies that benefit neurodivergent users greatly. Neurodiversity should be considered in product design; it deserves far more attention.

AI has become increasingly intertwined with every aspect of users' lives. Some uses are obvious, like conversational UIs and chatbots, while others are hidden, like the algorithms behind recommendation engines.

Many problems specific to accessibility are being solved, but are they being solved while keeping neurodiverse issues in mind?

Jamie Dimon famously said:

“Problems don’t age well.”

— Jamie Dimon (CEO, JPMorgan Chase)

This means we have to take critical issues into account sooner. Building an inclusive world for those 1.6 billion people is not a need for the future but a necessity of the present. We should strive to create an inclusive world for neurodivergent users, and with AI booming, now is the time: making it inclusive today will be far easier than retrofitting it after it scales into a behemoth set of features touching every aspect of our lives.

‘30% of Activities Performed by Humans Could Be Automated with AI’

Alexander De Ridder, AI visionary and CTO of SmythOS, discusses the transformative power of specialized AI systems and the future of human-AI collaboration.


In the newest interview of our AGI Talks series, Alexander De Ridder shares his insights on the potential impacts of Artificial General Intelligence (AGI) on business, entrepreneurship, and society.

About Alexander De Ridder


With a robust background that spans over 15 years in computer science, entrepreneurship, and marketing, Alexander De Ridder possesses a rare blend of skills that enable him to drive technological innovation with strategic business insight. His journey includes founding and successfully exiting several startups.

Currently, he serves as the Co-Founder and Chief Technology Officer of SmythOS, a platform that seeks to streamline processes and boost efficiency across various industries. SmythOS is the first operating system specifically designed to manage and enhance the interplay between specialized AI agents.

Based in Houston, Alexander is a proactive advocate for leveraging AI to extend human capabilities and address societal challenges. Through SmythOS and his broader endeavors, he aims to equip governments and enterprises with the tools needed to realize their potential, advocating for AI-driven solutions that promote societal well-being and economic prosperity.

AGI Talks: Interview with Alexander De Ridder

In our interview, Alexander provides insights on the impact of AI on the world of business and entrepreneurship:

1. What is your preferred definition of AGI?

Alexander De Ridder: The way you need to look at AGI is simple. Imagine tomorrow there were 30 billion people on the planet. But only 8 billion people needed an income. So, what would happen? You would have a lot more competition, prices would be a lot more affordable, and you have a lot more, you know, services, wealth, everything going around.

AGI, in most contexts, is a term used to describe any form of artificial intelligence that can understand, learn, and apply its intelligence to solve almost any problem the way a human can. This is unlike narrow AI, which is limited to the scope it was built for and cannot handle tasks outside it.

2. and ASI (Artificial Superintelligence)?

ASI is an artificial intelligence that surpasses human intelligence across a variety of cognitive abilities, including creativity, comprehensive wisdom, and problem-solving.

ASI would be able to surpass the intelligence of even the best human minds in almost any area, from scientific creativity to general wisdom, to social or individual understanding.

3. In what ways do you believe AI will most significantly impact society in the next decade?

AI will enable businesses to achieve higher efficiency with fewer employees. This shift will be driven by the continuous advancement of technology, which will allow businesses to automate various tasks, streamline operations, and offer more personalized experiences to customers.

Businesses will build their own customized digital workers. These AI agents will integrate directly with a company's tools and systems. They will automate tedious tasks, collaborate via chat, provide support, generate reports, and much more.

The potential to offload repetitive work and empower employees is immense. Recent research suggests that around 30% of activities currently performed by humans could be automated with AI agents. This will allow people to focus their energy on more meaningful and creative responsibilities.

Agents will perform work 24/7 without getting tired or getting overwhelmed. So, companies will get more done with smaller teams, reducing hiring demands. Individuals will take on only the most impactful high-value work suited to human ingenuity.

4. What do you think is the biggest benefit associated with AI?

AI enhances productivity by automating complex workflows and introducing digital coworkers or specialized AI agents, leading to potential 10x productivity gains.

For example, AI automation will be accessible to organizations of any size or industry. There will be flexible no-code interfaces that allow anyone to build agents tailored to their needs. Whether it's finance, healthcare, education, or beyond, AI will help enterprises globally unlock new levels of productivity.

The future of work, blending collaborative digital and human team members, is nearer than many realize. And multi-agent systems are the key to unlocking this potential and skyrocketing productivity.

5. and the biggest risk of AI?

The integration of AI in the workplace highlights, and in some cases enables, mediocre workers. As AI takes over routine and repetitive tasks, human workers need to adapt and develop new skills to stay relevant.

6. In your opinion, will AI have a net positive impact on society?

I would be very grateful to lead a campaign to improve the general good of the world by making many people aware of the opportunities within Multi-Agent Systems Engineering (MASE) so they can seize them. That will enable the implementation of AI agents for benevolent purposes.

In the future, non-programmers will easily assemble specialized AI agents with the help of basic elements of logic, somewhat similar to children assembling their LEGO blocks. I would advocate for platforms like SmythOS that abstract away AI complexities so domain experts can teach virtual assistants. With reusable components and public model access, people can construct exactly the intelligent help they need.

And collaborative agent teams would unlock exponentially more value, coordinating interdependent goals. A conservation agent could model sustainability plans, collaborating with a drone agent collecting wildlife data and a social media agent spreading public awareness.

With some basic training, anyone could become a MASE engineer: one of the architects of this AI-powered future. Rather than passively consuming technology, people would actively create solutions tailored to local needs.

By proliferating MASE design skills and sharing best agent components, I believe we can supercharge global problem solvers to realize grand visions. The collective potential to reshape society for the better rests in empowering more minds to build AI for good. This is the movement I would dedicate myself to sharing.

7. Where are the limits of human control over AI systems?

As AI proliferates, content supply will expand to incredible heights, and it will become impossible for people to be found by their audience unless they are a very big brand with incredible authority. In the post-AI-agent world, everyone will have some sort of AI assistant or digital co-worker.

8. Do you think AI can ever truly understand human values or possess consciousness?

While AI continually progresses on rational tasks and data-based decision-making, for now it falls short on emotional intelligence, intuition, and the wisdom that comes from being human. We learned the invaluable lesson that the smartest systems aren't the fully automated ones; they're the thoughtfully integrated blend of artificial and human strengths applied at the right times.

In areas like branding, campaign messaging, and customer interactions, we learned to rely more on talent from fields like marketing psychology paired with AI support, not pure unsupervised generative text. This balancing act between automated solutions and human-centric work is key for delivering business results while preserving that human touch that builds bonds, trust, and rapport.

This experience highlighted that today's AI still has significant limitations when it comes to emotional intelligence, cultural awareness, wisdom, and other intrinsically human qualities.

Logical reasoning and statistical patterns are one thing, but true connection involves nuanced insight into complex psychological dynamics. No amount of data or processing power can yet replicate life experiences and the layered understandings they impart.

For now, AI works best as a collaborative enhancement, not a wholesale replacement, in areas fundamental to the human experience. The most effective solutions augment people rather than supplant them, handling rote administrative tasks while empowering human creativity, judgment, and interpersonal skills.

Fields dealing directly in sensitive human matters, like healthcare, education, and governance, need a delicate balance of automation coupled with experienced professionals, especially when ethical considerations around bias are paramount.

Blending AI's speed and scalability with human wisdom and oversight is how we manifest the best possible futures. Neither is sufficient alone. This balance underpins our vision for SmythOS: keeping a person in the loop for meaningful guidance while AI agents tackle tedious minutiae.

The limitations reveal where humans must lead, govern, and collaborate. AI is an incredible asset when thoughtfully directed, but alone lacks the maturity for full responsibility in societys foundational pillars. We have much refinement ahead before artificial intelligence rivals emotional and contextual human intelligence. Discerning appropriate integration is key as technology steadily advances.

9. Do you think your job as an entrepreneur will ever be replaced by AI?

Regarding job displacement, we see AI as empowering staff, not replacing them. The goal is to effectively collaborate with artificial teammates to unlock new levels of innovation and fulfillment. We believe the future is blended teams, with humans directing priorities while AI handles repetitive tasks.

Rather than redundancy, it's an opportunity to elevate people toward more satisfying responsibilities that better leverage their abilities. Time freed from drudgery opens creative avenues previously unattainable when people were bogged down in administrative tasks. Just as past innovations like factories or computers inspired new human-centered progress, AI can propel society forward if harnessed judiciously.

With conscientious governance and empathy, automation can transform businesses without devaluing humanity. By blending inclusive policies and moral AI systems to elevate both artificial and human potential, we aim for SmythOS to responsibly unlock a brighter collaborative future.

10. We will reach AGI by the year?

I think a one-year window is too short to achieve AGI in general. I think that we (humans) will discover challenges and face disillusionment in some respects, forcing us to re-evaluate our expectations of AI. Maybe AGI is not actually the holy grail; instead, we should focus on AIs that will multiply our capabilities rather than ones that could potentially replace us.

Connecting With Users: Applying Principles Of Communication To UX Research

Communication is in everything we do. We communicate with users through our research, our design, and, ultimately, the products and services we offer. UX practitioners and those working on digital product teams benefit from understanding principles of communication and their application to our craft. Treating our UX processes as a mode of communication between users and the digital environment can help unveil in-depth, actionable insights.

In this article, I’ll focus on UX research. Communication is a core component of UX research, as it serves to bridge the gap between research insights, design strategy, and business outcomes. UX researchers, designers, and those working with UX researchers can apply key aspects of communication theory to help gather valuable insights, enhance user experiences, and create more successful products.

Fundamentals of Communication Theory

Communications as an academic field encompasses various models and principles that highlight the dynamics of communication between individuals and groups. Communication theory examines the transfer of information from one person or group to another. It explores how messages are transmitted, encoded, and decoded, acknowledges the potential for interference (or ‘noise’), and accounts for feedback mechanisms in enhancing the communication process.

In this article, I will focus on the Transactional Model of Communication. There are many other models and theories in the academic literature on communication. I have included references at the end of the article for those interested in learning more.

The Transactional Model of Communication (Figure 1) is a two-way process that emphasizes the simultaneous sending and receiving of messages and feedback. Importantly, it recognizes that communication is shaped by context and is an ongoing, evolving process. I’ll use this model and understanding when applying principles from the model to UX research. You’ll find that much of what is covered in the Transactional Model would also fall under general best practices for UX research, suggesting even if we aren’t communications experts, much of what we should be doing is supported by research in this field.

Understanding the Transactional Model

Let’s take a deeper dive into the six key factors and their applications within the realm of UX research:

  1. Sender: In UX research, the sender is typically the researcher who conducts interviews, facilitates usability tests, or designs surveys. For example, if you’re administering a user interview, you are the sender who initiates the communication process by asking questions.
  2. Receiver: The receiver is the individual who decodes and interprets the messages sent by the sender. In our context, this could be the user you interview or the person taking a survey you have created. They receive and process your questions, providing responses based on their understanding and experiences.
  3. Message: This is the content being communicated from the sender to the receiver. In UX research, the message can take various forms, like a set of survey questions, interview prompts, or tasks in a usability test.
  4. Channel: This is the medium through which the communication flows. For instance, face-to-face interviews, phone interviews, email surveys administered online, and usability tests conducted via screen sharing are all different communication channels. You might use multiple channels simultaneously, for example, communicating over voice while also using a screen share to show design concepts.
  5. Noise: Any factor that may interfere with the communication is regarded as ‘noise.’ In UX research, this could be complex jargon that confuses respondents in a survey, technical issues during a remote usability test, or environmental distractions during an in-person interview.
  6. Feedback: The receiver's response to the message. For example, the answers a user gives during an interview, the data collected from a completed survey, or the physical reactions of a usability-testing participant while completing a task are all forms of feedback.
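The six factors above can double as a planning checklist. As a minimal sketch (the class and field names here are hypothetical, not a standard schema from the communication literature), a research session could be modeled against the factors like this:

```python
from dataclasses import dataclass, field

@dataclass
class SessionPlan:
    """One research session mapped to the six factors of the
    Transactional Model. Field names are illustrative only."""
    sender: str                          # who runs the session
    receiver: str                        # the participant
    message: str                         # questions or tasks
    channel: str                         # e.g., "in-person", "video call"
    noise_risks: list = field(default_factory=list)  # anticipated interference
    feedback_plan: str = ""              # how responses will be captured

    def gaps(self):
        """Return the factors that still lack a plan."""
        missing = []
        if not self.noise_risks:
            missing.append("noise")
        if not self.feedback_plan:
            missing.append("feedback")
        return missing

plan = SessionPlan(
    sender="UX researcher",
    receiver="HVAC field technician",
    message="Contextual-inquiry questions about daily tooling",
    channel="in-person site visit",
)
# plan.gaps() → ["noise", "feedback"]
```

The value of a structure like this is simply that unplanned factors, most often noise and feedback, become visible before the session rather than during it.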

Applying the Transactional Model of Communication to Preparing for UX Research

We can become complacent or feel rushed when creating our research protocols. I think this is natural given the pace of many workplaces and our need to deliver results quickly. However, you can apply the lens of the Transactional Model of Communication to your research preparation without adding much time. Doing so should:

  • Improve Clarity
    The model provides a clear representation of communication, empowering the researcher to plan and conduct studies more effectively.
  • Minimize misunderstanding
    By highlighting potential noise sources, user confusion or misunderstandings can be better anticipated and mitigated.
  • Enhance participant engagement
    With your attentive eye on feedback, participants are likely to feel valued, thus increasing active involvement and the quality of input.

You can address the specific elements of the Transactional Model through the following steps while preparing for research:

Defining the Sender and Receiver

In UX research, the sender can often be the UX researcher conducting the study, while the receiver is usually the research participant. Understanding this dynamic can help researchers craft questions or tasks more empathetically and efficiently. You should try to collect some information on your participant in advance to prepare yourself for building a rapport.

For example, if you are conducting contextual inquiry with the field technicians of an HVAC company, you’ll want to dress appropriately to reflect your understanding of the context in which your participants (receivers) will be conducting their work. Showing up dressed in formal attire might be off-putting and create a negative dynamic between sender and receiver.

Message Creation

The message in UX research typically is the questions asked or tasks assigned during the study. Careful consideration of tenor, terminology, and clarity can aid data accuracy and participant engagement. Whether you are interviewing or creating a survey, you need to double-check that your audience will understand your questions and provide meaningful answers. You can pilot-test your protocol or questionnaire with a few representative individuals to identify areas that might cause confusion.

Using the HVAC example again, you might find that field technicians use certain terminology differently than you expect. Asking them about the “tools” they use to complete their tasks might yield an answer reflecting not the digital tools you’d find on a computer or smartphone, but physical tools like a pipe wrench.

Choosing the Right Channel

The channel selection depends on the method of research. For instance, face-to-face methods might use physical verbal communication, while remote methods might rely on emails, video calls, or instant messaging. The choice of the medium should consider factors like tech accessibility, ease of communication, reliability, and participant familiarity with the channel. For example, you introduce an additional challenge (noise) if you ask someone who has never used an iPhone to test an app on an iPhone.

Minimizing Noise

Noise in UX research comes in many forms, from unclear questions inducing participant confusion to technical issues in remote interviews that cause interruptions. The key is to foresee potential issues and have preemptive solutions ready.

Facilitating Feedback

You should be prepared for how you might collect and act on participant feedback during the research. Encouraging regular feedback from the user during UX research ensures their understanding and that they feel heard. This could range from asking them to ‘think aloud’ as they perform tasks or encouraging them to email queries or concerns after the session. You should document any noise that might impact your findings and account for that in your analysis and reporting.

Track Your Alignment to the Framework

You can use a spreadsheet to track how you align your processes with the Transactional Model before and during research. I’ll provide an example of a spreadsheet I’ve used in the case study section later in this article. Create your spreadsheet while preparing for the research, as some of your preparation should align with the factors of the model.
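As a minimal sketch of such a tracking sheet (the column headings and example notes are illustrative assumptions, not the spreadsheet from the case study), the six factors could be rendered as CSV so that unaddressed factors stand out as blank rows:

```python
import csv
import io

FACTORS = ["Sender", "Receiver", "Message", "Channel", "Noise", "Feedback"]

def make_tracking_sheet(notes):
    """Render a simple alignment-tracking sheet as CSV text.
    `notes` maps each factor to what was done to address it;
    factors without notes are left blank so gaps are visible."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["Factor", "How addressed"])
    for factor in FACTORS:
        writer.writerow([factor, notes.get(factor, "")])
    return buf.getvalue()

sheet = make_tracking_sheet({
    "Message": "Pilot-tested interview questions with two colleagues",
    "Noise": "Asked participants to mute phones; ran a dry run",
})
```

Opening the resulting file in any spreadsheet tool gives you one row per factor, which is enough to review alignment at a glance during preparation.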

You can use these tips for preparation regardless of the specific research method you are undertaking. Let’s now look closer at a few common methods and get specific on how you can align your actions with the Transactional Model.

Applying the Transactional Model to Common UX Research Methods

UX research relies on interaction with users. We can easily incorporate aspects of the Transactional Model of Communication into our most common methods. Utilizing the Transactional Model in conducting interviews, surveys, and usability testing can help provide structure to your process and increase the quality of insights gathered.

Interviews

Interviews are a common method used in qualitative UX research. They provide the perfect method for applying principles from the Transactional Model. In line with the Transactional Model, the researcher (sender) sends questions (messages) in-person or over the phone/computer medium (channel) to the participant (receiver), who provides answers (feedback) while contending with potential distraction or misunderstanding (noise). Reflecting on communication as transactional can help remind us we need to respect the dynamic between ourselves and the person we are interviewing. Rather than approaching an interview as a unidirectional interrogation, researchers need to view it as a conversation.

Applying the Transactional Model to conducting interviews means we should account for a number of facts to allow for high-quality communication. Note how the following overlap with what we typically call best practices.

Asking Open-ended Questions

To truly harness a two-way flow of communication, open-ended questions, rather than close-ended ones, are crucial. For instance, rather than asking, “Do you use our mobile application?” ask, “Can you describe your use of our mobile app?” This encourages the participant to share more expansive and descriptive insights, furthering the dialogue.

Actively Listening

As the success of an interview relies on the participant’s responses, active listening is a crucial skill for UX researchers. The researcher should encourage participants to express their thoughts and feelings freely. Reflective listening techniques, such as paraphrasing or summarizing what the participant has shared, can reinforce to the interviewee that their contributions are being acknowledged and valued. It also provides an opportunity to clarify potential noise or misunderstandings that may arise.

Being Responsive

Building on the simultaneous send-receive nature of the Transactional Model, researchers must remain responsive during interviews. Providing non-verbal cues (like nodding) and verbal affirmations (“I see,” “Interesting”) lets participants know their message is being received and understood, making them feel comfortable and more willing to share.

Minimizing Noise

We should always attempt to account for noise in advance, as well as during our interview sessions. Noise, in the form of misinterpretations or distractions, can disrupt effective communication. Researchers can proactively reduce noise by conducting a dry run in advance of the scheduled interviews. This helps you become more fluent at going through the interview and also helps identify areas that might need improvement or be misunderstood by participants. You also reduce noise by creating a conducive interview environment, minimizing potential distractions, and asking clarifying questions during the interview whenever necessary.

For example, if a participant uses a term the researcher doesn’t understand, the researcher should politely ask for clarification rather than guessing its meaning and potentially misinterpreting the data.

Additional forms of noise can include participant confusion or distraction. You should let participants know to ask if they are unclear on anything you say or do. It’s a good idea to always ask participants to put their smartphones on mute. You should only provide information critical to the process when introducing the interview or tasks. For example, you don’t need to give a full background of the history of the product you are researching if that isn’t required for the participant to complete the interview. However, you should let them know the purpose of the research, gain their consent to participate, and inform them of how long you expect the session to last.

Strategizing the Flow

Researchers should build strategic thinking into their interviews to support the Transaction Model. Starting the interview with less intrusive questions can help establish rapport and make the participant more comfortable, while more challenging or sensitive questions can be left for later when the interviewee feels more at ease.

A well-planned interview encourages a fluid dialogue and exchange of ideas. This is another area where conducting a dry run can help to ensure high-quality research. You and your dry-run participants should recognize areas where questions aren’t flowing in the best order or don’t make sense in the context of the interview, allowing you to correct the flow in advance.

While much of what the Transactional Model suggests for interviews already aligns with common best practices, the model pushes us to more deeply consider factors we can neglect when we become overly comfortable with interviewing: context considerations, power dynamics, and post-interview actions.

Context Considerations

You need to account for both the context of the participant (e.g., their background, demographic, and psychographic information) and the context of the interview itself. You should make subtle yet meaningful modifications depending on the channel through which you are conducting the interview.

For example, you should utilize video and be aware of your facial and physical responses if you are conducting an interview using an online platform, whereas if it’s a phone interview, you will need to rely on verbal affirmations that you are listening and following along, while also being mindful not to interrupt the participant while they are speaking.

Power Dynamics

You need to be aware of how your role, background, and identity might influence the power dynamics of the interview. You can attempt to address power dynamics by sharing research goals transparently and responding to any concerns about bias a participant shares.

We are responsible for creating a safe and inclusive space for our interviews. You do this through the use of inclusive language, listening actively without judgment, and being flexible to accommodate different ways of knowing and expressing experiences. You should also empower participants as collaborators whenever possible. You can offer opportunities for participants to share feedback on the interview process and analysis. Doing this validates participants’ experiences and knowledge and ensures their voices are heard and valued.

Post-Interview Actions

You have a number of options for actions that can close the loop of your interviews with participants in line with the “feedback” the model suggests is a critical part of communication. Some tactics you can consider following your interview include:

  • Debriefing
    Dedicate a few minutes at the end to discuss the participant’s overall experience, impressions, and suggestions for future interviews.
  • Short surveys
    Send a brief survey via email or an online platform to gather feedback on the interview experience.
  • Follow-up calls
If warranted, consider follow-up calls with specific participants to delve deeper into their feedback and gain additional insight.
  • Thank you emails
    Include a “feedback” section in your thank you email, encouraging participants to share their thoughts on the interview.

You also need to do something with the feedback you receive. Researchers and product teams should make time for reflexivity and critical self-awareness.

As practitioners in a human-focused field, we are expected to continuously examine how our assumptions and biases might influence our interviews and findings.

We shouldn’t practice our craft in a silo. Instead, seeking feedback from colleagues and mentors to maintain ethical research practices should be a standard practice for interviews and all UX research methods.

By considering interviews as an ongoing transaction and exchange of ideas rather than a unidirectional Q&A, UX researchers can create a more communicative and engaging environment. You can see how models of communication have informed best practices for interviews. With a better knowledge of the Transactional Model, you can go deeper and check your work against the framework of the model.

Surveys

The Transactional Model of Communication reminds us to acknowledge the feedback loop even in seemingly one-way communication methods like surveys. Instead of merely sending out questions and collecting responses, we need to provide space for respondents to voice their thoughts and opinions freely. When we make participants feel heard, engagement with our surveys should increase, dropouts should decrease, and response quality should improve.

Like other methods, surveys map onto the model’s components: the researcher(s) who create the instructions and questionnaire (sender); the survey itself, including any instructions, disclaimers, and consent forms (message); how the survey is administered, e.g., online, in person, or pen and paper (channel); the participant (receiver); potential misunderstandings or distractions (noise); and responses (feedback).

Designing the Survey

Understanding the Transactional Model will help researchers design more effective surveys. Researchers are encouraged to be aware of both their role as the sender and to anticipate the participant’s perspective as the receiver. Begin surveys with clear instructions, explaining why you’re conducting the survey and how long it’s estimated to take. This establishes a more communicative relationship with respondents right from the start. Test these instructions with multiple people prior to launching the survey.

Crafting Questions

The questions should be crafted to encourage feedback and not just a simple yes or no. You should consider asking scaled questions or items that have been statistically validated to measure certain attributes of users.

For example, if you were looking deeper at a mobile banking application, rather than asking, “Did you find our product easy to use?” you would want to break that out into multiple aspects of the experience and ask about each with a separate question such as “On a scale of 1–7, with 1 being extremely difficult and 7 being extremely easy, how would you rate your experience transferring money from one account to another?”.
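As a quick illustration of what you gain from scaled items, here is a minimal Python sketch that summarizes responses to a single 1–7 question like the one above. The ratings and field names are hypothetical, invented only for this example:

```python
from statistics import mean, median

# Hypothetical 1-7 ratings for "ease of transferring money between accounts"
ratings = [6, 5, 7, 4, 6, 3, 7, 5]

def summarize_likert(scores, scale_max=7):
    """Summarize responses to a single 1-to-scale_max Likert-style item."""
    return {
        "n": len(scores),
        "mean": round(mean(scores), 2),
        "median": median(scores),
        # Share of respondents at the top two points of the scale ("top-2 box")
        "top_2_box": round(sum(s >= scale_max - 1 for s in scores) / len(scores), 2),
    }

summary = summarize_likert(ratings)
print(summary)
```

A yes/no question would only give you a proportion; the scaled item lets you report central tendency and the spread of difficulty across distinct aspects of the experience.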

Minimizing Noise

Reducing ‘noise,’ or misunderstandings, is crucial for increasing the reliability of responses. Your first line of defense is to make sure you are sampling from the appropriate population for your research. Use a screener to filter out non-viable participants before including them in the survey: correctly identify the characteristics of the population you want to sample from, then exclude anyone falling outside those parameters.

Additionally, prioritize recruiting participants through random sampling from the population of potential participants rather than relying on a convenience sample, as this helps ensure you are collecting reliable data.
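To make the contrast with a convenience sample concrete, here is a minimal Python sketch that draws a simple random sample from a screened pool. The participant IDs, pool size, and sample size are all hypothetical:

```python
import random

# Hypothetical pool of screened, viable participants (IDs only)
screened_pool = [f"P{i:03d}" for i in range(1, 201)]  # 200 eligible people

def draw_sample(pool, n, seed=None):
    """Draw a simple random sample of n participants, without replacement."""
    rng = random.Random(seed)  # seed only to make the draw reproducible
    return rng.sample(pool, n)

invitees = draw_sample(screened_pool, n=25, seed=42)
print(len(invitees))
```

Because every member of the screened pool has an equal chance of selection, the sample avoids the self-selection and availability biases that a convenience sample (e.g., whoever responds first) introduces.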

When looking at the survey itself, there are a number of recommendations to reduce noise. You should ensure questions are easily understandable, avoid technical jargon, and sequence questions logically. A question bank should be reviewed and tested before being finalized for distribution.

For example, question statements like “Do you use and like this feature?” can confuse respondents because they are actually two separate questions: do you use the feature, and do you like the feature? You should separate out questions like this into more than one question.

You should use visual aids that are relevant whenever possible to enhance the clarity of the questions. For example, if you are asking questions about an application’s “Dashboard” screen, you might want to provide a screenshot of that page so survey takers have a clear understanding of what you are referencing. You should also avoid the use of jargon if you are surveying a non-technical population and explain any terminology that might be unclear to participants taking the survey.

The Transactional Model suggests active participation in communication is necessary for effective communication. Participants can become distracted or take a survey without intending to provide thoughtful answers. You should consider adding a question somewhere in the middle of the survey to check that participants are paying attention and responding appropriately, particularly for longer surveys.

This is often done using a simple math problem such as “What is the answer to 1+1?” Anyone not responding with “2” may not be paying adequate attention to the responses they are providing; you’d want to look closer at their answers and eliminate them from your analysis if deemed appropriate.

Encouraging Feedback

While descriptive feedback questions are one way of promoting dialogue, you can also include areas where respondents can express any additional thoughts or questions they have outside of the set question list. This is especially useful in online surveys, where researchers can’t immediately address participants’ questions or clarify doubts.

You should be mindful that too many open-ended questions can cause fatigue, so limit their number. I recommend two to three open-ended questions, depending on the length of your overall survey.

Post-Survey Actions

After collecting and analyzing the data, you can send follow-up communications to the respondents. Let them know the changes made based on their feedback, thank them for their participation, or even share a summary of the survey results. This fulfills the Transactional Model’s feedback loop and communicates to the respondent that their input was received, valued, and acted upon.

You can also meet this suggestion by providing an email address participants can use if they want more information post-survey, allowing them to close the loop themselves if they desire.

Applying the transactional model to surveys can breathe new life into the way surveys are conducted in UX research. It encourages active participation from respondents, making the process more interactive and engaging while enhancing the quality of the data collected. You can experiment with applying some or all of the steps listed above. You will likely find you are already doing much of what’s mentioned; however, being explicit can help you make sure you are thoughtfully applying these principles from the field of communication.

Usability Testing

Usability testing is another clear example of a research method highlighting components of the Transactional Model. In the context of usability testing, applying the Transactional Model of Communication opens a pathway to a richer understanding of the user experience by positioning both the user and the researcher as simultaneous senders and receivers of communication.

Here are some ways a researcher can use elements of the Transactional Model during usability testing:

Task Assignment as Message Sending

When a researcher assigns tasks to a user during usability testing, they act as the sender in the communication process. To ensure the user accurately receives the message, these tasks need to be clear and well-articulated. For example, a task like “Register a new account on the app” sends a clear message to the user about what they need to do.

You don’t need to tell them how to do the task, as usually, that’s what we are trying to determine from our testing, but if you are not clear on what you want them to do, your message will not resonate in the way it is intended. This is another area where a dry run in advance of the testing is an optimal solution for making sure tasks are worded clearly.

Observing and Listening as Message Receiving

As the participant interacts with the application, concept, or design, the researcher, as the receiver, picks up on verbal and nonverbal cues. For instance, if a user is clicking around aimlessly or murmuring in confusion, the researcher can take these as feedback about certain elements of the design that are unclear or hard to use. You can also ask users to explain the cues you observe, which both clarifies their meaning and signals that their communication is being received.

Real-time Interaction

The transactional nature of the model recognizes the importance of real-time interaction. For example, if during testing, the user is unsure of what a task means or how to proceed, the researcher can provide clarification without offering solutions or influencing the user’s action. This interaction follows the communication flow prescribed by the transactional model. We lose the ability to do this during unmoderated testing; however, many design elements are forms of communication that can serve to direct users or clarify the purpose of an experience (to be covered more in article two).

Noise

In usability testing, noise could mean unclear tasks, users’ preconceived notions, or even issues like slow software response. Acknowledging noise can help researchers plan and conduct tests better. Again, carrying out a pilot test can help identify any noise in the main test scenarios, allowing for necessary tweaks before actual testing. Other forms of noise can be less obvious but equally intrusive. For example, if you are conducting a test using a MacBook laptop and your participant is used to a PC, there is noise you need to account for, given their unfamiliarity with the laptop you’ve provided.

The fidelity of the design artifact being tested might introduce another form of noise. I’ve always advocated testing at any level of fidelity, but you should note that if you are using “Lorem Ipsum” or black and white designs, this potentially adds noise.

One of my favorite examples of this was a time when I was testing a financial services application, and the designers had put different balances on the screen; however, the total for all balances had not been added up to the correct total. Virtually every person tested noted this discrepancy, although it had nothing to do with the tasks at hand. I had to acknowledge we’d introduced noise to the testing. As at least one participant noted, they wouldn’t trust a tool that wasn’t able to total balances correctly.

Encouraging Feedback

Under the Transactional Model’s guidance, feedback isn’t just final thoughts after testing; it should be facilitated at each step of the process. Encouraging ‘think aloud’ protocols, where the user verbalizes their thoughts, reactions, and feelings during testing, ensures a constant flow of useful feedback.

You are receiving feedback throughout the process of usability testing, and the model provides guidance on how you should use that feedback to create a shared meaning with the participants. You will ultimately summarize this meaning in your report. You’ll later end up uncovering if this shared meaning was correctly interpreted when you design or redesign the product based on your findings.

We’ve now covered how to apply the Transactional Model of Communication to three common UX Research methods. All research with humans involves communication. You can break down other UX methods using the Model’s factors to make sure you engage in high-quality research.

Analyzing and Reporting UX Research Data Through the Lens of the Transactional Model

The Transactional Model of Communication doesn’t only apply to the data collection phase (interviews, surveys, or usability testing) of UX research. Its principles can provide valuable insights during the data analysis process.

The Transactional Model instructs us to view any communication as an interactive, multi-layered dialogue — a concept that is particularly useful when unpacking user responses. Consider the ‘message’ components: In the context of data analysis, the messages are the users’ responses. As researchers, thinking critically about how respondents may have internally processed the survey questions, interview discussion, or usability tasks can yield richer insights into user motivations.

Understanding Context

Just as the Transactional Model emphasizes the simultaneous interchange of communication, UX researchers should consider the user’s context while interpreting data. Decoding the meaning behind a user’s words or actions involves understanding their background, experiences, and the situation when they provide responses.

Deciphering Noise

In the Transactional Model, noise presents a potential barrier to effective communication. Similarly, researchers must be aware of snowballing themes or frequently highlighted issues during analysis. Noise, in this context, could involve patterns of confusion, misunderstandings, or problems consistently highlighted by users. You need to account for this, as in the example I provided where participants repeatedly referred to the incorrect math on static wireframes.

Considering Sender-Receiver Dynamics

Remember that as a UX researcher, your interpretation of user responses will be influenced by your own understanding, biases, and preconceptions, just as the responses were influenced by the user’s perceptions. By acknowledging this, researchers can strive to neutralize any subjective influence and ensure the analysis remains centered on the user’s perspective. You can ask other researchers to double-check your work to attempt to account for bias.

For example, if you come up with a clear theme that users need better guidance in the application you are testing, another researcher from outside of the project should come to a similar conclusion if they view the data; if not, you should have a conversation with them to determine what different perspectives you are each bringing to the data analysis.

Reporting Results

Understanding your audience is crucial for delivering a persuasive UX research presentation. Tailoring your communication to resonate with the specific concerns and interests of your stakeholders can significantly enhance the impact of your findings. Here are some more details:

  • Identify Stakeholder Groups
    Identify the different groups of stakeholders who will be present in your audience. This could include designers, developers, product managers, and executives.
  • Prioritize Information
    Prioritize the information based on what matters most to each stakeholder group. For example, designers might be more interested in usability issues, while executives may prioritize business impact.
  • Adapt Communication Style
    Adjust your communication style to align with the communication preferences of each group. Provide technical details for developers and emphasize user experience benefits for executives.

Acknowledging Feedback

Respecting the Transactional Model’s feedback loop, remember to revisit user insights after implementing design changes. This ensures you stay user-focused, continuously validating or adjusting your interpretations based on users’ evolving feedback. You can do this in a number of ways. You can reconnect with users to show them updated designs and ask questions to see if the issues you attempted to resolve were resolved.

Another way to address this without having to reconnect with the users is to create a spreadsheet or other document to track all the recommendations that were made and reconcile the changes with what is then updated in the design. You should be able to map the changes users requested to updates or additions to the product roadmap for future updates. This acknowledges that users were heard and that an attempt to address their pain points will be documented.
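A minimal sketch of the reconciliation step described above, using hypothetical tracker rows, that surfaces recommendations not yet mapped to a roadmap item:

```python
# Hypothetical tracker rows: each research finding, its recommendation, and,
# if one exists, the roadmap item that addresses it.
recommendations = [
    {"finding": "Payee setup flow is confusing",
     "recommendation": "Add inline help to the payee form",
     "roadmap_item": "v2.3"},
    {"finding": "Jargon on the transfer screen",
     "recommendation": "Replace 'ACH' with plain language",
     "roadmap_item": None},
]

def unreconciled(rows):
    """Return findings that have not yet been mapped to a roadmap item."""
    return [r["finding"] for r in rows if not r.get("roadmap_item")]

print(unreconciled(recommendations))
```

Running a check like this before closing out a study makes the documentation of “users were heard” verifiable rather than aspirational.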

Crucially, the Transactional Model teaches us that communication is rarely simple or one-dimensional. It encourages UX researchers to take a more nuanced, context-aware approach to data analysis, resulting in deeper user understanding and more accurate, user-validated results.

By maintaining an ongoing feedback loop with users and continually refining interpretations, researchers can ensure that their work remains grounded in real user experiences and needs.

Tracking Your Application of the Transactional Model to Your Practice

You might find it useful to track how you align your research planning and execution with the framework of the Transactional Model. I’ve created a spreadsheet outlining the key factors of the model and used it for some of my work. Below is an example derived from a study conducted for a banking client that included interviews and usability testing. I completed this spreadsheet while planning and conducting the interviews. The data shown has been anonymized to illustrate how you might populate a similar spreadsheet with your own information.

You can customize the spreadsheet structure to fit your specific research topic and interview approach. By documenting your application of the transactional model, you can gain valuable insights into the dynamic nature of communication and improve your interview skills for future research.

Stage: Pre-Interview Planning

  • Topic/Question (aligned with research goals)
    Description: Identify the research question and design questions that encourage open-ended responses and co-construction of meaning.
    Example: Testing a mobile banking app’s bill payment feature. How do you set up a new payee? How would you make a payment? What are your overall impressions?
  • Participant Context
    Description: Note relevant demographic and personal information to tailor questions and avoid biased assumptions.
    Example: 35-year-old working professional, frequent user of online banking and the mobile application but unfamiliar with using the app for bill pay.
  • Engagement Strategies
    Description: Outline planned strategies for active listening, open-ended questions, clarification prompts, and building rapport.
    Example: Open-ended follow-up questions (“Can you elaborate on XYZ?” or “Please explain more about what you mean by XYZ.”), active listening cues, positive reinforcement (“Thank you for sharing those details”).
  • Shared Understanding
    Description: List potential challenges to understanding the participant’s perspective and strategies for ensuring shared meaning.
    Example: Initially, the participant expressed some confusion about the financial jargon I used. I clarified and provided simpler [non-jargon] explanations, ensuring we were on the same page.

Stage: During Interview

  • Verbal Cues
    Description: Track the participant’s language choices, including metaphors, pauses, and emotional expressions.
    Example: The participant used a hesitant tone when describing negative experiences with the bill payment feature. When questioned, they stated it was “likely their fault” for not understanding the flow [it isn’t their fault].
  • Nonverbal Cues
    Description: Note the participant’s nonverbal communication, like body language, facial expressions, and eye contact.
    Example: Frowning and crossed arms when discussing specific pain points.
  • Researcher Reflexivity
    Description: Record moments where your own biases or assumptions might influence the interview and potential mitigation strategies.
    Example: Recognized my own familiarity with the app might bias my interpretation of users’ understanding [e.g., going slower than I would have when entering information]. Asked clarifying questions to avoid imposing my assumptions.
  • Power Dynamics
    Description: Identify instances where power differentials emerge and actions taken to address them.
    Example: The participant expressed trust in the research but admitted feeling hesitant to criticize the app directly. I emphasized anonymity and encouraged open feedback.
  • Unplanned Questions
    Description: List unplanned questions prompted by the participant’s responses that deepen understanding.
    Example: What alternative [non-bank app] methods do you use to pay bills? (Prompted by the participant’s frustration with app bill pay.)

Stage: Post-Interview Reflection

  • Meaning Co-construction
    Description: Analyze how both parties contributed to building shared meaning and insights.
    Example: Through dialogue, we collaboratively identified specific design flaws in the bill payment interface and explored additional pain points and areas that worked well.
  • Openness and Flexibility
    Description: Evaluate how well you adapted to unexpected responses and maintained an open conversation.
    Example: Adapted questioning based on the participant’s emotional cues and adjusted language to minimize technical jargon when that issue was raised.
  • Participant Feedback
    Description: Record any feedback received from participants regarding the interview process and areas for improvement.
    Example: “Thank you for the opportunity to be in the study. I’m glad my comments might help improve the app for others. I’d be happy to participate in future studies.”
  • Ethical Considerations
    Description: Reflect on whether the interview aligned with principles of transparency, reciprocity, and acknowledging power dynamics.
    Example: Maintained anonymity throughout the interview and ensured informed consent was obtained. Data will be stored and secured as outlined in the research protocol.
  • Key Themes/Quotes
    Description: Identify emerging themes or save quotes you might refer to later when creating the report.
    Example: Frustration with a confusing interface, lack of intuitive navigation, and a desire for more customization options.
  • Analysis Notes
    Description: Use as many lines as needed to add notes for consideration during analysis.
    Example: Add notes here.

You can use the suggested columns from this table as you see fit, adding or subtracting as needed, particularly if you use a method other than interviews. I usually add the following additional columns for logistical purposes:

  • Date of Interview,
  • Participant ID,
  • Interview Format (e.g., in person, remote, video, phone).
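If you keep the tracking sheet as a CSV file rather than in a spreadsheet application, a small Python sketch can generate the header row. The column set below mirrors the table above plus the logistical columns, and is meant to be adjusted to your own method:

```python
import csv
import io

# Columns from the tracking table above, plus the three logistical columns.
COLUMNS = [
    "Date of Interview", "Participant ID", "Interview Format",
    "Topic/Question", "Participant Context", "Engagement Strategies",
    "Shared Understanding", "Verbal Cues", "Nonverbal Cues",
    "Researcher Reflexivity", "Power Dynamics", "Unplanned Questions",
    "Meaning Co-construction", "Openness and Flexibility",
    "Participant Feedback", "Ethical Considerations",
    "Key Themes/Quotes", "Analysis Notes",
]

def new_tracker(fileobj, columns=COLUMNS):
    """Start an empty tracking sheet by writing a single header row."""
    writer = csv.writer(fileobj)
    writer.writerow(columns)

# Written to an in-memory buffer here; pass an open file in practice.
buffer = io.StringIO()
new_tracker(buffer)
print(buffer.getvalue().strip())
```

One row per interview then keeps the model’s factors visible during planning, execution, and analysis alike.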

Conclusion

By incorporating aspects of communication theory into UX research, UX researchers and those who work with UX researchers can enhance the effectiveness of their communication strategies, gather more accurate insights, and create better user experiences. Communication theory provides a framework for understanding the dynamics of communication, and its application to UX research enables researchers to tailor their approaches to specific audiences, employ effective interviewing techniques, design surveys and questionnaires, establish seamless communication channels during usability testing, and interpret data more effectively.

As the field of UX research continues to evolve, integrating communication theory into research practices will become increasingly essential for bridging the gap between users and design teams, ultimately leading to more successful products that resonate with target audiences.

As a UX professional, it is important to continually explore and integrate new theories and methodologies to enhance your practice. By leveraging communication theory principles, you can better understand user needs, improve the user experience, and drive successful outcomes for digital products and services.

Integrating communication theory into UX research is an ongoing journey of learning and implementing best practices. Embracing this approach empowers researchers to effectively communicate their findings to stakeholders and foster collaborative decision-making, ultimately driving positive user experiences and successful design outcomes.

References and Further Reading

‘Prepare for the Earliest Possible AGI Deployment Scenario’

Despite the uncertain timeline for Artificial General Intelligence (AGI) becoming a reality, we need to assure responsible and ethical development today says Jen Rosiere Reynolds.

header-agi-talks-jrr.webp

As part of our new AGI Talks, experts from different backgrounds share unique insights by answering 10 questions about AI, AGI, and ASI. Kicking off the series, we are privileged to feature Jen Rosiere Reynolds, a digital communication research and Director of Strategy at a Princeton-affiliated institute dedicated to shaping policy making and accelerating research in the digital age.

About Jen Rosiere Reynolds

jrr.webp

Jen Rosiere Reynolds focuses on digital communication technology, specifically the intersection between policy and digital experiences. Currently, she is supporting the development of the Accelerator, a new research institute for evidence-based policymaking in collaboration with Princeton University. Previously, she managed research operations and helped build the Center for Social Media and Politics at NYU. Jen holds a masters degree in government from Johns Hopkins University focusing her research on domestic extremism and hate speech on social media. She has a background in national security and intelligence.

The mission of the Accelerator is to power policy-relevant research by building shared infrastructure. Through a combination of data collection, analysis, tool development, and engagement, the Accelerator aims to support the international community working to understand todays information environment i.e. the space where cognition, technology, and content converge.

AGI Talks with Jen Rosiere Reynolds

We asked Jen 10 questions about the potential risks, benefits, and future of AI:

1. What is your preferred definition of AGI?

Jen Rosiere Reynolds: AGI is a hypothetical future AI system with cognitive and emotional abilities like a human. That would include understanding context-dependent human language and understanding belief systems, succeeding at both goals and adaptability.

2. and ASI?

ASI is a speculative future AI system capable of human-outsmarting creative and complex actions. It would be able to learn any tasks that humans can, but much faster and should be able to improve its own intelligence. With our current techniques, humans would not be able to reliably evaluate or supervise ASIs.

3. In what ways do you believe AI will most significantly impact society in the next decade?

I expect to see further algorithmic development, as well as improvements in storage and computing power, which can expedite AI.

Broadly, there are so many applications of AI in various fields, like health, finance, energy, etc., and these applications are all opportunities for either justice or misuse. Lots of folks are adopting and learning how to use human-in-the-loop technologies that augment human intelligence. But right now, we still don't understand how LLMs or other AI are influencing the information environment at a system level, and that's really concerning to me. It's not just about what happens when you input something into a generative AI system and whether it produces something egregious. It's also about what impact the use of AI may have on our society and world.

I've heard 2024 referred to as the year of elections. We see that in the United States as well as in so many global elections that have already taken place this year and will continue through this summer and fall. We need to be really thoughtful about what effect influence operations have on elections and national security. It's challenging right now to understand the impact that deep fakes or manipulated and fabricated documents and images have on people's decision-making. We saw the CIA, FBI, and NSA confirm Russian interference in the 2016 US Presidential election, and there was a US information operation on Facebook and Twitter that got taken down back in 2022, but what's the impact? The US-led online effort got thousands of followers, but that doesn't mean that thousands of people saw the information, or that their minds or actions changed. I hope very soon we can understand how people typically understand and interact with the information environment, so we can talk about measurements and impact more precisely. In the next decade, I expect we can much more specifically understand how AI and the use of AI affect our world.

4. What do you think is the biggest benefit associated with AI?

Right now, I think that the biggest benefit associated with AI lies in its potential to minimize harm in various scenarios. AI could assist in identifying and prosecuting child sexual exploitation without exposing investigators to the imagery and analyze the data much more efficiently, resulting in faster, more accurate, and less harmful analysis. AI could help with early diagnosis and support the development of new life-saving medicines. AI could also help reduce decision-making bias in criminal justice sentencing and job recruitment. All of these can happen, but there are also decisions to be made, and that's where education and open discussion is important, so that we can prioritize values over harm.

5. and the biggest risk of AI?

Right now, I see two significant risks associated with the development of AI that are the most urgent and impactful. The first is the need to ensure that AI development is responsible and ethical. AI has the potential to be used for harmful purposes, perpetuating hatred, prejudice, and authoritarianism. The second risk is that policymakers struggle to keep up with the rapid pace of AI development. Any regulation could quickly become outdated and ineffective, potentially hindering innovation while also failing to protect individuals and society at large.

6. In your opinion, will AI have a net positive impact on society?

I think that AI has great potential to make a positive impact on society. I see AI as a tool that people develop and use. My concern lies not with the tool itself but with how we, as humans, choose to develop and use these tools. There is a long-running debate in the national security space about what should be developed because of the potential for harmful use and misuse; these discussions should absolutely inform conversations about the development of AI. I am encouraged by the general attention that AI and its potential uses are currently receiving and do believe that broad, inclusive, open debate will lead to positive outcomes.

7. Where are the limits of human control over AI systems?

Focus on the limits of human control over AI systems may be a bit premature and potentially move focus away from more immediate issues. We don't fully understand the impact of AI that is currently deployed, and it's difficult to estimate the limits of human control over what might be developed in the future.

8. Do you think AI can ever truly understand human values or possess consciousness?

I can imagine AI being able to intellectually understand the outward manifestation of values (e.g., how a person acts when they are being patient). When raising the issue of whether technology can truly feel or possess consciousness, we get into debates that are reflected across society and the world and that raise questions like: what is consciousness, and when does personhood begin? We can see these debates around end-of-life care, for example. While I personally don't believe that AI could truly manifest the essence of a human, I know that others would disagree based on their understanding and beliefs about consciousness and personhood.

9. Do you think your job as a researcher will ever be replaced by AI?

Maybe. I think that lots of jobs could potentially be replaced, or at least parts of jobs. I think we see that right now with human-in-the-loop tools: a part of someone's job may become much more efficient or quick. This can be very threatening to people. I think everyone should have the dignity of work and the opportunity to make a living. If there are cases where technology results in job displacement, society should take responsibility, say that yes, we allowed this to happen, and support the affected people.

10. We will reach AGI by the year?

OpenAI announced that they expect the development of AGI within the next decade, though I haven't come across any other researchers who share such an aggressive timeline. I'd recommend preparing as best as possible for the earliest possible AGI deployment scenario, as there are several unknown elements in the equation right now: the future advancement of algorithms and future improvements in storage and compute power.

Mobile Accessibility Barriers For Assistive Technology Users

I often hear that native mobile app accessibility is more challenging than web accessibility. Teams don’t know where to start, where to find guidance on mobile accessibility, or how to prevent mobile-specific accessibility barriers.

As someone who works for a company with an active community of mobile assistive technology users, I get to learn about the challenges from the user’s perspective. In fact, I recently ran a survey with our community about their experiences with mobile accessibility, and I’d like to share what I learned with you.

If you only remember one thing from this article, make it this:

Half of assistive technology users said that accessibility barriers have a significant impact on their day-to-day well-being.

Accessibility goes beyond making products user-friendly. It can impact the quality of life for people with disabilities.

Types Of Mobile Assistive Technology

I typically group assistive technologies into three categories:

  1. Screen readers: software that converts information on a screen to speech or braille.
  2. Screen magnifiers: software or system settings to magnify the screen, increase contrast, and otherwise modify the content to make it easier to see.
  3. Alternative navigation: software and/or hardware that replaces an input device such as a keyboard, mouse, or touchscreen.

Across all categories of assistive technology, 81% of the people I surveyed change the accessibility settings on their smartphone and/or tablet. Examples of accessibility settings include the following:

  • Increasing the font size;
  • Turning on captions;
  • Extending the tap duration;
  • Inverting colours.

There are smartphone settings such as dark mode that benefit people with disabilities even though they aren’t considered accessibility settings.

Now, let’s dive into the specifics of each assistive technology category and learn more about the user preferences that shape their digital experiences.

Screen Reader Users

Both iPhone and Android smartphones come with a screen reader installed. On iPhone, the screen reader is VoiceOver, and on Android, it is TalkBack. Both screen readers allow users to explore by touching and dragging their fingers to hear the content under their fingers read out loud, or to swipe forward and backward through all elements on the screen in a linear fashion. Both screen readers also let users navigate by headings or other types of elements.

The mobile screen reader users I surveyed tend to have several devices that work together to cover all their accessibility needs, and they support businesses that prioritize mobile accessibility.

  • Nearly half of screen reader users also own a smartwatch.
  • Half use an external keyboard with their smartphone, and a third use a braille display.
  • Almost all factor the accessibility of apps and mobile sites into deciding which businesses to support.

That last point is really important! Accessibility truly inspires purchasing decisions and brand loyalty.

Screen Magnification Users

In addition to magnification, Android smartphones also have a variety of vision-related accessibility features that allow users to change screen colours and text sizes. The iPhone Magnifier app lets users apply colour filters, adjust brightness or contrast, and detect people or doors nearby.

My survey showed that screen magnification users had the highest percentage of tablet ownership, with 77% owning both a smartphone and a tablet. Alternative navigation users followed closely, with 62% owning a tablet, but only 42% of the screen reader users I surveyed own a tablet.

Screen magnification users are less likely to investigate the accessibility of paid apps before purchasing (63%) compared to screen reader and alternative navigation users (89% and 91%, respectively). I suspect this is because device magnification, contrast, and colour inversion settings may allow users to work around some design choices that make an app inaccessible.

Alternative Navigation Users

Switch Access (Android) and Switch Control (iOS) let users interact with their devices using one or more switches instead of the touchscreen. There are a variety of things you can use as a switch: an external device, keyboard, sounds, or the smartphone camera or buttons.

Item scan allows users to highlight items one by one and select an item in focus by activating the switch. Point and scan moves a horizontal line down from the top of the screen. When this line is over the desired element, the user selects their switch to stop it. A vertical line then moves from the left of the screen. When this line is also over the element, the user stops it with their switch. The user can then select the element in the cross hairs of the two lines. In addition to these two methods, users can also customize buttons to perform gestures such as swipe down or swipe left.
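The item scan mechanic described above can be sketched as a toy model. This is purely illustrative, not a platform API: focus advances through the on-screen items one scan step at a time, and a switch press selects whichever item currently holds focus.

```python
from itertools import cycle

def item_scan(items, steps_before_press):
    """Toy model of switch-access item scanning (illustrative only)."""
    focus = cycle(items)
    current = next(focus)            # focus starts on the first item
    for _ in range(steps_before_press):
        current = next(focus)        # each scan step moves focus forward
    return current                   # the switch press selects this item

print(item_scan(["Back", "Search", "Submit"], 2))  # Submit
```

Even this toy version hints at why scan timing matters so much in real switch software: a user who needs more time between steps will miss their target if the scan interval is too short.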

Android and iPhone devices can also be controlled by voice, through Voice Access (Android) and Voice Control (iOS). Both allow users to speak commands to interact with their smartphone instead of using the touchscreen. The command “Say names” can expose labels that aren’t obvious. The command “Show numbers” allows users to say “tap two” to select the element labeled with the number 2. “Show grid” is a command often used as a last resort to select an element. This approach overlays a grid across the screen and allows users to select the grid square containing the desired element.

Alternative navigation users were least likely to own a smartwatch (26%) out of all three assistive technology categories, according to my survey. All the alternative navigation users that own a smartwatch, except for one, use it for health tracking. 24% use an external switch device with their smartphone.

Most Common Mobile Accessibility Barriers

Now that you know about some of the assistive technologies available on Android and iPhone devices, we can explore some specific challenges commonly encountered by users when navigating websites and native apps on their smartphones.

I’ll outline an inclusive development process that can help you discover barriers that are specific to your own app. If you need general tips on what to avoid right now, here are common mobile accessibility issues that assistive technology users encounter. To get this list, I asked the community to select up to three of their most challenging accessibility barriers on mobile.

Unlabelled Buttons Or Links

Unlabelled buttons and links are the number one challenge reported by assistive technology users. Screen reader users are impacted the most by unlabelled elements, but so are people who use voice commands to interact with their smartphone.

Small Buttons Or Links

Buttons and links that are too small to tap with a finger or require great precision to select using switch functions are a challenge for anyone with mobility issues. Tiny buttons and links are also hard to see for anyone with low vision.

Gesture Interactions

Gestures like swipe to delete, tap and drag, and anything more complex than a simple tap or double tap can cause problems for many users. Gestures can be difficult to discover, and if you’re not a power mobile user, you may never figure them out. Your best bet is to include a button to perform the same action that a gesture can perform. Custom actions can expose more options, but only to assistive technology users, not to people with disabilities who may not use assistive technology, such as people with cognitive disabilities.

Elements Blocking Parts Of The Screen

Think of a chat button that hovers over and covers parts of the content, or a sticky header or footer that takes up a big portion of the screen when the user zooms in or magnifies their screen. These screen blockers can make it very difficult or impossible for some users to view content.

Missing Error Messages

Keeping a submit button inactive until a form is correctly filled out is often used as an alternative to providing error messages. That approach can be a challenge for assistive technology users in particular, but also anyone with a cognitive disability or who isn’t tech-savvy. Sometimes, error messages exist, but they aren’t announced to screen reader users.

Resizing Text And Pinch And Zoom

When an app doesn’t respect the font size increases set by a user through accessibility settings, people who need larger text must find alternative ways to read content. Some websites disable pinch and zoom — a feature that is not just useful for enlarging text but is often used to see images better.
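Respecting the user's text-size setting usually comes down to multiplying design sizes by a user-controlled scale factor instead of hard-coding them. A minimal sketch of the idea follows; the style names and base sizes are made up for illustration and are not any platform's API.

```python
# Base sizes a design system might define (illustrative values, in points).
BASE_SIZES = {"caption": 12, "body": 16, "headline": 24}

def scaled_sizes(user_scale: float) -> dict:
    """Scale every text style by the user's preferred text-size factor."""
    return {style: round(size * user_scale, 1) for style, size in BASE_SIZES.items()}

# A user who sets text to 150% gets proportionally larger text everywhere.
print(scaled_sizes(1.5))  # {'caption': 18.0, 'body': 24.0, 'headline': 36.0}
```

On real platforms the scale factor comes from the system (for example, scalable text units on Android or Dynamic Type on iOS); the point is that every text style responds to it, so no content is locked at a fixed size.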

Other Mobile Accessibility Barriers

The accessibility barriers that weren’t mentioned as often but still represent significant challenges for assistive technology users include:

  • Low contrast
    If the contrast between text and background is low, it makes it harder for folks with low vision to read. Customizing contrast settings can make content more legible for a broader range of people.
  • No dark mode
    For some people, black text on a white background can be painful to the eyes or trigger migraines.
  • Fixed orientation
    Not being able to rotate from portrait to landscape can impact people who have their device in a fixed position on a wheelchair or people with low vision who use landscape mode to make text and images appear larger.
  • Missing captions
    No captions on videos were also cited as a barrier. This is one that I relate to personally, as I rely on captions myself because of my hearing disability.

I knew I couldn’t capture all of the mobile accessibility barriers in my list of choices, so I gave the survey respondents a free text field to enter their own. Here’s what they said:

  • Screen reader users encounter unlabelled images or labels that don’t make sense. AI-based image recognition technology can help but often can’t provide the same context that a designer would. Screen reader users also run into apps that unexpectedly move their screen reader’s focus, changing their location on the screen and causing confusion.
  • Voice Control users find apps and sites that aren’t responsive to their voice commands. They have to try alternate commands to activate interactive elements, sometimes slowing them down significantly.
  • Complex navigation, such as large, dynamic lists or menus that expand and collapse automatically, can be challenging to use with assistive technologies. There often aren’t workarounds for interacting with navigation, so this can influence whether a user will abandon an app or website.

Inclusive Design Approaches For Mobile

It’s important to avoid getting overwhelmed and not doing anything at all because mobile accessibility seems hard. Instead, focus on fixing the most critical issues first, then release, celebrate, and repeat the process.

Ideally, you’ll want to change your processes to avoid creating more accessibility issues in the future. Here’s a high-level process for inclusive app development:

  • Do research with users to understand how their assistive technology works and what challenges they have with your existing app.
  • Create designs for accessibility features such as font scaling and state and focus indicators.
  • Revise designs and get feedback from users that can be applied in development.
  • Annotate design files for accessibility based on user feedback and best practices.
  • Create a new build and use automated testing tools to find barriers.
  • Do manual QA testing on the new build using your phone’s accessibility settings.
  • Release a private build and test with users again before the production release.

Conclusion

Fixing and, more importantly, avoiding mobile accessibility barriers can be easier if you understand how assistive technologies work and the common challenges users encounter on mobile devices. Remember the key takeaway from the beginning of this article: half of the people surveyed felt accessibility barriers had a significant impact on their well-being. With that in mind, I encourage you not to let a lack of understanding of technical accessibility compliance hold you back from building inclusive apps and mobile-friendly websites.

When you look at accessibility from the lens of usability for everyone and learn from assistive technology users, you take a step towards empowering everyone to independently interact with your products and services, playing your part in building a more equitable Internet.

Further Reading On SmashingMag

How Accessibility Standards Can Empower Better Chart Visual Design

Data visualizations are graphics that leverage our visual system and innate capabilities to gather, accumulate, and process information in our environment, as shown in the animation in Figure 1.0.

Figure 1.0. An animation demonstrating our preattentive processing capability. Based on a lecture by Dr. Stephen Franconeri. (Large preview)

As a result, we’re able to quickly spot trends, patterns, and outliers in all the images we see. Can you spot the visual patterns in Figure 1.1?

In this example, there are patterns defined by the size of the shapes, the use of fills and borders, and the use of different types of shapes. These characteristics, or visual encodings, are the building blocks of visualizations. Good visualizations provide a glanceable view of a large data set we otherwise wouldn’t be able to comprehend.

Accessibility Challenges With Data Visualizations

Visualizations typically serve a wide array of use cases and can be quite complex. A lot of care goes into choosing the right encodings to represent each metric. Designers and engineers will use colors to draw attention to more important metrics or information and highlight outliers. Oftentimes, as these design decisions are made, considerations for people with vision disabilities are missed.

Vision disabilities affect hundreds of millions of people worldwide. For example, about 300 million people have color-deficient vision, and it’s a condition that affects 1 in 12 men.1

1 Colour Blind Awareness (2023)

Most people with these conditions don’t use assistive technology when viewing the data. Because of this, the visual design of the chart needs to meet them where they are.

Figure 1.2 is an example of a donut chart. At first glance, it might seem like the categorical color palette matches the theme of digital wellbeing. It’s calm, it’s cool, and it may even invoke a feeling of wellbeing.

Figure 1.3 highlights how this same chart will appear to someone with a protanopia condition. You’ll notice that it is harder to read because the Other and YouTube categories at the top of the donut are indistinguishable from one another.

For someone with achromatopsia, the chart will appear as it does in Figure 1.4.

In this case, I’d argue that the chart isn’t really telling us anything. It’s nearly impossible to read, and swapping it out for a data table would arguably be more useful. At this point, you might be wondering how to fix this. Where should you start?

Start With Web Standards

Web standards can help us improve our design. In this case, Web Content Accessibility Guidelines (WCAG) will provide the most comprehensive set of requirements to start with. Guidelines call for two considerations. First, all colors must achieve the proper contrast ratio with their neighboring elements. Second, visualizations need to use something other than color to convey meaning. This can be accomplished by including a second encoding or adding text, images, icons, or patterns. While this article focuses on achieving WCAG 2.1 standards, the same concepts can be used to achieve WCAG 2.2 standards.

Web Standards Challenges

Meeting the web standards is trickier than it might first seem. Let’s dive into a few examples showing how difficult it is to ensure data will be understood at a glance while meeting the standards.

Challenge 1: Color Contrast

According to the WCAG 2.1 (level AA) standards, graphics such as chart elements (lines, bars, areas, nodes, edges, links, and so on) should all achieve a minimum 3:1 contrast ratio with their neighboring elements. Neighboring elements may include other chart elements, interaction states, and the chart’s background. Incidentally, if you’re not sure your colors achieve the required minimum ratio, you can verify your palette with a contrast checker. Additionally, all text elements should achieve a minimum 4.5:1 contrast ratio with their background. Figure 1.5 depicts a sample categorical color palette that follows the recommended standards.
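These ratios can be checked in code. Below is a minimal sketch of the WCAG 2.1 relative-luminance and contrast-ratio formulas; the hex colors are arbitrary examples, not a recommended palette.

```python
def channel(c8: int) -> float:
    """Linearize an 8-bit sRGB channel, per WCAG 2.1."""
    c = c8 / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(hex_color: str) -> float:
    """Relative luminance of a hex color like '#336699'."""
    h = hex_color.lstrip('#')
    r, g, b = (channel(int(h[i:i + 2], 16)) for i in (0, 2, 4))
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: str, bg: str) -> float:
    """WCAG contrast ratio: (L_lighter + 0.05) / (L_darker + 0.05)."""
    l1, l2 = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black on white is the maximum possible ratio, 21:1.
print(round(contrast_ratio('#000000', '#ffffff'), 1))  # 21.0
# Chart elements need >= 3:1 against neighbors; text needs >= 4.5:1.
print(contrast_ratio('#767676', '#ffffff') >= 4.5)  # True (about 4.54:1)
```

Online checkers implement exactly this math, so a quick script like this is handy for auditing an entire chart palette at once rather than checking one pair at a time.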

This is quite a bold palette. When applying a compliant palette to a chart, it might look like the example in Figure 1.6.

While this example meets the color contrast requirements, there’s a tradeoff. The chart’s focal point is now lost. The red segments at the bottom of each stacked bar represent the most important metrics illustrated in this chart. They represent errors or a count of items that need your attention. Since the chart features bold colors, all of which are equally competing for our attention, it’s now more difficult to see the items that matter most.

Challenge 2: Dual Encodings, Or Conveying Meaning Without Color

To minimize reliance on color to convey meaning, WCAG 2.1 (level A) standards also call for the use of something other than color to convey meaning. This may be a pattern, texture, icon, text overlay, or an entirely different visual encoding.

It’s easy to throw a pattern on top of a categorical fill color and call it a day, as illustrated in Figure 1.7. But is the chart still readable? Is it glanceable? In this case, the segments appear to run into one another. In his book, The Visual Display of Quantitative Information, Edward Tufte describes the importance of minimizing chartjunk, or unnecessary visual design elements that limit one’s ability to read the chart. This raises the question: do the WCAG standards encourage us to add unnecessary chartjunk to the visualization?

Following the standards verbatim can lead us down the path of creating a really noisy visualization.

Let The Standards Empower Rather Than Constrain Design

Over the past several years, my working group at Google has learned that it’s easier to meet the WCAG visual design requirements when they’re considered at the beginning of the design process instead of trying to update existing charts to meet the standards. The latter approach leads to charts with unnecessary chartjunk, just like the one previously depicted in Figure 1.7, and reduced usability. Considering accessibility first will enable you to create a visualization that’s not only accessible but useful. We’re calling this our accessibility-first approach to chart design. Now, let’s see some examples.

Solving For Color Contrast

Let’s revisit the color contrast requirement via the example in Figure 1.8. In this case, the most important metric is represented by the red segments appearing at the bottom of each bar in the series. The red color represents a count of items in a failing state. Since both colors in this palette compete for our attention, it’s difficult to focus on the metric that matters most. The chart is no longer glanceable.

Focus On Essential Elements Only

By stretching the standards a bit, we can strike a much better balance between a11y and glanceability. Only the visual elements essential for interpreting the visualization need to achieve the color contrast requirement. In the case of Figure 1.8, we can use borders that achieve the required contrast ratio while using lighter fills that preserve the point of focus. In Figure 1.9, you’ll notice your attention now shifts down to the metrics that matter most.

Figure 1.9. ✅ DO: Consider using a combination of outlines and fills to meet contrast requirements while maintaining a focal point. (Large preview)

Dark Themes For The Win

Most designers I know love a good dark theme like the one used in Figure 2.0. It looks nice, and dark themes often result in visually stunning charts.

More importantly, a dark theme offers an accessibility advantage. When building on top of a dark background, we can use a wider array of color shades that will still achieve the minimum required contrast ratio.

According to an audit conducted by Google’s Data Accessibility Working Group, the 61 shades of the Google Material palette from 2018 achieved the minimum 3:1 contrast ratio when placed on a dark background. This is depicted in Figure 2.1. Only 40 shades of Google Material colors achieved the same contrast ratio when placed on a white background. The 50% increase in available shades when moving from a light background to a dark background makes a huge difference. Having access to more shades enables us to draw focus to items that matter most.
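The direction of that audit result is easy to reproduce with a toy grayscale palette. This sketch is not the Material palette from the audit; it simply shows that a dark background leaves more shades above the 3:1 threshold than a white one does.

```python
def lum(gray: int) -> float:
    """Relative luminance of a grayscale value, per WCAG 2.1.
    For gray, all channels are equal and the RGB coefficients sum to 1."""
    c = gray / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def ratio(l1: float, l2: float) -> float:
    """WCAG contrast ratio between two relative luminances."""
    hi, lo = max(l1, l2), min(l1, l2)
    return (hi + 0.05) / (lo + 0.05)

shades = range(256)  # every grayscale shade as a toy "palette"
on_white = sum(ratio(lum(g), lum(255)) >= 3 for g in shades)
on_black = sum(ratio(lum(g), lum(0)) >= 3 for g in shades)
print(on_black > on_white)  # True: more shades clear 3:1 on black
```

The asymmetry comes from the `+ 0.05` term in the ratio: it penalizes the dark end of the luminance scale less than the light end, so a dark background admits a wider band of passing foreground shades.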

With this in mind, let’s revisit the earlier donut chart example in Figure 2.2. For now, let’s keep the white background, as it’s a core part of Google’s brand.

Figure 2.2. ✅ DO: Use a combination of fills and borders that achieve the minimum contrast ratios to improve the readability of your chart. (Large preview)

While this is a great first step, there’s still more work to do. Let’s take a closer look.

Solving For Dual Encodings And Minimizing Chartjunk

As shown in Figure 2.3, color is our only way of connecting segments in the donut to the corresponding categories in the legend. Despite our best efforts to follow color contrast standards, the chart can still be difficult to read for people with certain vision disabilities. We need a dual encoding, or something other than color, to convey meaning.

How might we do this without adding noise or reducing the chart’s readability or glanceability? Let’s start with the text.

Integrating Text And Icons

Adding text to a visualization is a great way to solve the dual encoding problem. Let’s use our donut chart as an example. If we move the legend labels into the graph, as illustrated in Figure 2.4, we can visually connect them to their corresponding segments. As a result, there is no longer a need for a legend, and the labels become the second encoding.

Let’s look at a few other ways to provide a dual encoding while maximizing readability. This will prevent us from running in the direction of applying unnecessary chartjunk like the example previously highlighted in Figure 1.7.

Depending on the situation, the shape of the data, or the available screen real estate, we may not have the luxury of overlaying text on top of a visualization. In cases like the one in Figure 2.5, it’s still okay to use iconography. For example, if we’re dealing with a very limited number of categories, the added iconography can still act as a dual encoding.

Some charts can have upwards of hundreds of categories, which makes it difficult to add iconography or text. In these cases, we must revisit the purpose of the chart and decide if we need to differentiate categories. Perhaps color, along with a dual encoding, can be used to highlight other aspects of the data. The example in Figure 2.6 shows a line chart with hundreds of categories.

We did a few things with color to convey meaning here:

  1. Bright colors are used to depict outliers within the data set.
  2. A neutral gray color is applied to all nominal categories.

In this scenario, we can once again use a very limited set of shapes for differentiating specific categories.

The Benefits Of Small Multiples And Sparklines

There are still times when it’s important to differentiate between all categories depicted in a visualization. Take, for example, the tangled mess of a chart depicted in Figure 2.7.

In this case, a more accessible solution would include breaking the chart into individual mini charts, or sparklines, as depicted in Figure 2.8. This solution is arguably better for everyone because it makes it easier to see the individual trend for each category. It’s more accessible because we’ve completely removed the reliance on color and appended text to each of the mini charts, which is better for the screen reader experience.

Reserve Fills For Items That Need Your Attention

Earlier, we examined using a combination of fills and outlines to achieve color contrast requirements. Red and green are commonly used to convey status. For someone who is red/green colorblind, this can be very problematic. As an alternative, the status icons in Figure 2.9 reserve fills for the items that need your attention. We co-designed this solution with some help from customers who are colorblind. It’s arguably more scannable for people who are fully sighted, too.

Embracing Relevant Metaphors

In 2022, we launched a redesigned Fitbit mobile app for the masses. One of my favorite visualizations from this launch is a chart showing your heart rate throughout the day. As depicted in Figure 3.0, this chart shows when your heart rate crosses into different zones. Dotted lines were used to depict each of these zone thresholds. We used the spacing between the dots as our dual encoding, which invokes a feeling of a “visual” heartbeat. Threshold lines with closely spaced dots imply a higher heart rate.

Continuing the theme of using fun, relevant metaphors, we even based our threshold spacing on the Fibonacci Sequence. This enabled us to represent each threshold with a noticeably different visual treatment. For this example, we knew we were on the right track as these accessibility considerations tested well with people who have color-deficient vision.
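As a rough sketch of the spacing idea (the zone names and the exact gap mapping are illustrative, not the actual Fitbit implementation), consecutive Fibonacci numbers give each threshold a noticeably different dot gap, with the tightest spacing assigned to the highest-intensity zone:

```python
def fib(n: int) -> list:
    """First n Fibonacci numbers, starting 1, 2, 3, 5, ..."""
    seq = [1, 2]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq[:n]

# Hypothetical heart-rate zones, highest intensity first.
zones = ["peak", "cardio", "fat burn", "light"]
# Tighter dot gaps for higher zones evoke a faster "visual" heartbeat.
gaps = dict(zip(zones, fib(len(zones))))
print(gaps)  # {'peak': 1, 'cardio': 2, 'fat burn': 3, 'light': 5}
```

Because each Fibonacci step grows by roughly the golden ratio, adjacent thresholds stay visually distinct even when lines sit close together, which is harder to guarantee with evenly stepped gaps.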

Accessible Interaction States

Color contrast and encodings also need to be considered when showing interactions like mouse hover, selection, and keyboard focus, like the examples in Figure 3.1. The same rules apply here. In this example, the hover, focus, and clicked state of each bar is delineated by elements that appear above and below the bar. As a result, these elements only need to achieve a 3:1 contrast ratio with the white background and not the bars themselves. Not only did this pattern test well in multiple usability studies, but it was also designed so that the states could overlap. For example, the hover state and selected state can appear simultaneously and still meet accessibility requirements.

Finding Your Inspiration

For some more challenging projects, we’ve taken inspiration from unexpected areas.

For example, we looked to nature (Figure 3.2) to help us consider methods for visualizing the effects of cloud moisture on an LTE network, as sketched in Figure 3.3.

We’ve taken inspiration from halftone printing processes (Figure 3.4) to think about how we might reimagine a heatmap with a dual encoding, as depicted in Figure 3.5.

We’ve also taken inspiration from architecture and how people move through buildings (Figure 3.6) to consider methods for showing the scope and flow of data into a donut chart as depicted in Figure 3.7.

Figure 3.7. Applying inspiration from architecture and a building’s flow. (Large preview)

In this case, the animated inner ring highlights the scope of the donut chart when it’s empty and indicates that it will fill up to 100%. Animation is a great technique, but it presents other accessibility challenges and should either time out or have a stop control.

In some cases, we were even inspired to explore new versions of existing visualization types, like the one depicted in Figure 3.8. This case study highlights a step-by-step guide to how we landed on this example.

Getting People On Board With Accessibility

One key lesson is that it’s important to get colleagues on board with accessibility as soon as possible. Your compliant designs may not look quite as pretty as your non-compliant designs and may be open to criticism.

So, how can you get your colleagues on board? For starters, evangelism is key. Provide examples like the ones included here, which can help your colleagues build empathy for people with vision disabilities. Find moments to share the work with your company’s leadership team, spreading awareness. Team meetings, design critiques, AMA sessions, organization forums, and all-hands are a good start. Oftentimes, colleagues may not fully understand how accessibility requirements apply to charting or how their visualizations are used by people with disabilities.

While share-outs are a great start, that communication is one-way. We found that it’s easier to build momentum when you invite others to participate in the design process. Invite them into brainstorming meetings, design reviews, codesign sessions, and the problem space to help them appreciate how difficult these challenges are. Enlist their help, too.

By engaging with colleagues, we were able to pinpoint our champions within the group: the people who were so passionate about the topic that they were willing to spend extra time building demos, prototypes, design specs, and research repositories. For example, at Google, we were able to publish our Top Tips for Data Accessibility on the Material Design blog.

Aside from good citizenship and building a grassroots start, there are ways to get the business on board. Pointing to regulations like Section 508 in America and the European Accessibility Act is another good way to encourage your business to dive deeper into your product’s accessibility. It’s also an effective mechanism for getting funding and ensuring accessibility is on your product’s roadmap. Once you’ve made the business case and you’ve identified the accessibility champions on your team, it’s time to start designing.

Conclusion

Accessibility is more than compliance. Accessibility considerations can and will benefit everyone, so it’s important not to shove them into a special menu or mode or forget about them until the end of the design process. When you consider accessibility from the start, the WCAG standards also suddenly seem a lot less constraining than when you try to retrofit existing charts for accessibility.

The examples here were built over the course of 3 years, and they’re based on valuable lessons learned along the way. My hope is that you can use the tested designs in this article to get a head start. And by taking an accessibility-first approach, you’ll end up with overall better data visualizations — ones that fully take into account how all people gather, accumulate, and process information.

Resources

To get started thinking about data accessibility, check out some of these resources:

  • Getting started
  • ACM
  • Contrast checking tool
  • WCAG requirements
  • Material design best practices and specs

We’re incredibly proud of our colleagues who contributed to the research and examples featured in this article. This includes Andrew Carter, Ben Wong, Chris Calo, Gerard Rocha, Ian Hill, Jenifer Kozenski Devins, Jennifer Reilly, Kai Chang, Lisa Kaggen, Mags Sosa, Nicholas Cottrell, Rebecca Plotnick, Roshini Kumar, Sierra Seeborn, and Tyler Williamson. Without everyone’s contributions, we wouldn’t have been able to advance our knowledge of accessible chart visual design.

When Words Cannot Describe: Designing For AI Beyond Conversational Interfaces

Few technological innovations can completely change the way we interact with computers. Lucky for us, it seems we’ve won front-row seats to the unfolding of the next paradigm shift.

These shifts tend to unlock a new abstraction layer to hide the working details of a subsystem. Generalizing details allows our complex systems to appear simpler & more intuitive. This streamlines both programming computers & designing the interfaces we use to interact with them.

The Command Line Interface, for instance, created an abstraction layer to enable interaction through a stored program. This hid the subsystem details once exposed in earlier computers that were only programmable by inputting 1s & 0s through switches.

Graphical User Interfaces (GUI) further abstracted this notion by allowing us to manipulate computers through visual metaphors. These abstractions made computers accessible to a mainstream of non-technical users.

Despite these advances, we still haven’t found a perfectly intuitive interface — the troves of support articles across the web make that evident. Yet recent advances in AI have convinced many technologists that the next evolutionary cycle of computing is upon us.

Layers of interface abstraction, bottom to top: Command Line Interfaces, Graphical User Interfaces, & AI-powered Conversational Interfaces. (Source: Maximillian Piras) (Large preview)

The Next Layer Of Interface Abstraction

A branch of machine learning called generative AI drives the bulk of recent innovation. It leverages pattern recognition in datasets to establish probabilistic distributions that enable novel constructions of text, media, & code. Bill Gates believes it’s “the most important advance in technology since the graphical user interface” because it can make controlling computers even easier. A newfound ability to interpret unstructured data, such as natural language, unlocks new inputs & outputs to enable novel form factors.

Now our universe of information can be instantly invoked through an interface as intuitive as talking to another human. These are the computers we’ve dreamed of in science fiction, akin to systems like Data from Star Trek. Perhaps computers up to this point were only prototypes & we’re now getting to the actual product launch. If building the internet was laying down the tracks, AIs could be the trains to transport all of our information at breakneck speed, & we’re about to see what happens when they barrel into town.

“Soon the pre-AI period will seem as distant as the days when using a computer meant typing at a C:> prompt rather than tapping on a screen.”

— Bill Gates in “The Age of AI Has Begun”

If everything is about to change, so must the mental models of software designers. Just as Luke Wroblewski once popularized mobile-first design, the next zeitgeist is likely AI-first. Only through understanding AI’s constraints & capabilities can we craft delight. Its influence on the discourse of interface evolution has already begun.

Large Language Models (LLMs), for instance, are a type of AI utilized in many new applications & their text-based nature leads many to believe a conversational interface, such as a chatbot, is a fitting form for the future. The notion that AI is something you talk to has been permeating across the industry for years. Robb Wilson, the co-owner of UX Magazine, calls conversation “the infinitely scalable interface” in his book The Age of Invisible Machines (2022). Noah Levin, Figma’s VP of Product Design, contends that “it’s a very intuitive thing to learn how to talk to something.” Even a herald of GUIs such as Bill Gates posits that “our main way of controlling a computer will no longer be pointing and clicking.”

Microsoft Copilot is a new conversational AI feature being integrated across their office suite. (Source: Microsoft) (Large preview)

The hope is that conversational computers will flatten learning curves. Jesse Lyu, the founder of Rabbit, asserts that a natural language approach will be “so intuitive that you don’t even need to learn how to use it.”

After all, it’s not as if Data from Star Trek came with an instruction manual or onboarding tutorial. From this perspective, the evolutionary tale of conversational interfaces superseding GUIs seems logical & echoes the earlier shift away from command lines. But others have opposing opinions, with some, like Maggie Appleton, going as far as to call conversational interfaces like chatbots “the lazy solution.”

This might seem like a schism at first, but it’s more so a symptom of a simplistic framing of interface evolution. Command lines are far from extinct; technical users still prefer them for their greater flexibility & efficiency. For use cases like software development or automation scripting, the added abstraction layer in graphical no-code tools can act as a barrier rather than a bridge.

GUIs were revolutionary but not a panacea. Yet there is ample research to suggest conversational interfaces won’t be one, either. For certain interactions, they can decrease usability, increase cost, & introduce security risk relative to GUIs.

So, what is the right interface for artificially intelligent applications? This article aims to inform that design decision by contrasting the capabilities & constraints of conversation as an interface.

Connecting The Pixels

We’ll begin with some historical context, as the key to knowing the future often starts with looking at the past. Conversational interfaces feel new, but we’ve been able to chat with computers for decades.

Joseph Weizenbaum invented the first chatbot, ELIZA, during an MIT experiment in 1966. This laid the foundation for the following generations of language models to come, from voice assistants like Alexa to those annoying phone tree menus. Yet the majority of chatbots were seldom put to use beyond basic tasks like setting timers.

It seemed most consumers weren’t that excited to converse with computers after all. But something changed last year. Somehow we went from CNET reporting that “72% of people found chatbots to be a waste of time” to ChatGPT gaining 100 million weekly active users.

What took chatbots from arid to astonishing? Most assign credit to OpenAI’s 2018 invention (PDF) of the Generative Pre-trained Transformer (GPT). These are a new type of LLM with significant improvements in natural language understanding. Yet, at the core of a GPT is the earlier innovation of the transformer architecture introduced in 2017 (PDF). This architecture enabled the parallel processing required to capture long-term context around natural language inputs. Diving deeper, this architecture is only possible thanks to the attention mechanism introduced in 2014 (PDF). This enabled the selective weighting of an input’s different parts.

Through this assemblage of complementary innovations, conversational interfaces now seem to be capable of competing with GUIs on a wider range of tasks. It took a surprisingly similar path to unlock GUIs as a viable alternative to command lines. Of course, it required hardware like a mouse to capture user signals beyond keystrokes & screens of adequate resolution. However, researchers found the missing software ingredient years later with the invention of bitmaps.

Bitmaps allowed for complex pixel patterns that earlier vector displays struggled with. Ivan Sutherland’s Sketchpad, for instance, was the inaugural GUI but couldn’t support concepts like overlapping windows. IEEE Spectrum’s Of Mice and Menus (1989) details the progress that led to the bitmap’s invention by Alan Kay’s group at Xerox PARC. This new technology enabled the revolutionary WIMP (windows, icons, menus, and pointers) paradigm that helped onboard an entire generation to personal computers through intuitive visual metaphors.

Computing no longer required a preconceived set of steps at the outset. It may seem trivial in hindsight, but the presenters were already alluding to an artificially intelligent system during Sketchpad’s MIT demo in 1963. This was an inflection point transforming an elaborate calculating machine into an exploratory tool. Designers could now craft interfaces for experiences where a need to discover eclipsed the need for flexibility & efficiency offered by command lines.

Parallel Paradigms

Novel adjustments to existing technology made each new interface viable for mainstream usage — the cherry on top of a sundae, if you will. In both cases, the foundational systems were already available, but a different data processing decision made the output meaningful enough to attract a mainstream audience beyond technologists.

With bitmaps, GUIs can organize pixels into a grid sequence to create complex skeuomorphic structures. With GPTs, conversational interfaces can organize unstructured datasets to create responses with human-like (or greater) intelligence.

The prototypical interfaces of both paradigms were invented in the 1960s, then saw a massive delta in their development timelines — a case study unto itself. Now we find ourselves at another inflection point: in addition to calculating machines & exploratory tools, computers can act as life-like entities.

But which of our needs call for conversational interfaces over graphical ones? We see a theoretical solution to our need for companionship in the movie Her, where the protagonist falls in love with his digital assistant. But what is the benefit to those of us who are content with our organic relationships? We can look forward to validating the assumption that conversation is a more intuitive interface. It seems plausible because a few core components of the WIMP paradigm have well-documented usability issues.

Nielsen Norman Group reports that cultural differences make universal recognition of icons rare — menus trend towards an unusable mess with the inevitable addition of complexity over time. Conversational interfaces appear more usable because you can just tell the system when you’re confused! But as we’ll see in the next sections, they have their fair share of usability issues as well.

By replacing menus with input fields, we must wonder if we’re trading one set of usability problems for another.

The Cost of Conversation

Why are conversational interfaces so popular in science fiction movies? In a Rhizome essay, Martine Syms theorizes that they make “for more cinematic interaction and a leaner production.” This same cost/benefit applies to app development as well. Text completion delivered via written or spoken word is the core capability of an LLM. This makes conversation the simplest package for this capability from a design & engineering perspective.

Linus Lee, a prominent AI Research Engineer, characterizes it as “exposing the algorithm’s raw interface.” Since the interaction pattern & components are already largely defined, there isn’t much more to invent — everything can get thrown into a chat window.

“If you’re an engineer or designer tasked with harnessing the power of these models into a software interface, the easiest and most natural way to “wrap” this capability into a UI would be a conversational interface”

— Linus Lee in Imagining Better Interfaces to Language Models

This is further validated by The Atlantic’s reporting on ChatGPT’s launch as a “low-key research preview.” OpenAI’s hesitance to frame it as a product suggests a lack of confidence in the user experience. The internal expectation was so low that employees’ highest guess on first-week adoption was 100,000 users (90% shy of the actual number).

Conversational interfaces are cheap to build, so they’re a logical starting point, but you get what you pay for. If the interface doesn’t fit the use case, downstream UX debt can outweigh any upfront savings.

Forgotten Usability Principles

Steve Jobs once said, “People don’t know what they want until you show it to them.” Applying this thinking to interfaces echoes a usability evaluation called discoverability. Nielsen Norman Group defines it as a user’s ability to “encounter new content or functionality that they were not aware of.”

A well-designed interface should help users discover what features exist. The interfaces of many popular generative AI applications today revolve around an input field in which a user can type in anything to prompt the system. The problem is that it’s often unclear what a user should type in to get ideal output. Ironically, a theoretical solution to writer’s block may have a blank page problem itself.

“I think AI has a problem with these missing user interfaces, where, for the most part, they just give you a blank box to type in, and then it’s up to you to figure out what it might be able to do.”

— Casey Newton on Hard Fork Podcast

Conversational interfaces excel at mimicking human-to-human interaction but can fall short elsewhere. A popular image generator named Midjourney, for instance, only supported text input at first but is now moving towards a GUI for “greater ease of use.”

This is a good reminder that as we venture into this new frontier, we cannot forget classic human-centered principles like those in Don Norman’s seminal book The Design of Everyday Things (1988). Graphical components still seem better aligned with his advice of providing explicit affordances & signifiers to increase discoverability.

There is also Jakob Nielsen’s list of 10 usability heuristics; many of today’s conversational interfaces seem to ignore every one of them. Consider the first usability heuristic explaining how visibility of system status educates users about the consequences of their actions. It uses a metaphorical map’s “You Are Here” pin to explain how proper orientation informs our next steps.

Navigation is more relevant to conversational interfaces like chatbots than it might seem, even though all interactions take place in the same chat window. The backend of products like ChatGPT will navigate across a neural network to craft each response by focusing attention on a different part of their training datasets.

Putting a pin on the proverbial map of their parametric knowledge isn’t trivial. LLMs are so opaque that even OpenAI admits they “do not understand how they work.” Yet, it is possible to tailor inputs in a way that loosely guides a model to craft a response from different areas of its knowledge.

One popular technique for guiding attention is role-playing. You can ask an LLM to assume a role, such as by inputting “imagine you’re a historian,” to effectively switch its mode. The Prompt Engineering Institute explains that when “training on a large corpus of text data from diverse domains, the model forms a complex understanding of various roles and the language associated with them.” Assuming a role invokes associated aspects in an AI’s training data, such as tone, skills, & rationality.

For instance, a historian role responds with factual details whereas a storyteller role responds with narrative descriptions. Roles can also improve task efficiency through tooling, such as by assigning a data scientist role to generate responses with Python code.

Roles also reinforce social norms, as Jason Yuan remarks on how “your banking AI agent probably shouldn’t be able to have a deep philosophical chat with you.” Yet conversational interfaces will bury this type of system status in their message history, forcing us to keep it in our working memory.

A theoretical AI chatbot that uses a segmented controller to let users specify a role in one click — each button automatically adjusts the LLM’s system prompt. (Source: Maximillian Piras) (Large preview)

The lack of persistent signifiers for context, like roleplay, can lead to usability issues. For clarity, we must constantly ask the AI’s status, similar to typing ls & cd commands into a terminal. Experts can manage it, but the added cognitive load is likely to weigh on novices. The problem goes beyond human memory; systems suffer from a similar cognitive overload. Due to data limits in their context windows, a user must eventually reinstate any roleplay below the system level. If this type of information persisted in the interface, it would be clear to users & could be automatically reiterated to the AI in each prompt.
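One way to persist that context is to pin the active role at the system level of the prompt on every turn. Here's a minimal Python sketch using the messages-list shape common to chat-style LLM APIs; the role presets and their wording are invented for illustration:

```python
# Hypothetical role presets a UI control (e.g., a segmented button)
# could switch between with a single click.
ROLE_PROMPTS = {
    "historian": "You are a historian. Answer with dates, sources, and factual detail.",
    "storyteller": "You are a storyteller. Answer with vivid narrative descriptions.",
    "data_scientist": "You are a data scientist. Answer with runnable Python code.",
}

def build_messages(role, user_input, history=None):
    """Assemble a chat-style message list with the role pinned at the
    system level, so it persists across turns instead of living only in
    the user's working memory (or falling out of the context window)."""
    messages = [{"role": "system", "content": ROLE_PROMPTS[role]}]
    messages.extend(history or [])
    messages.append({"role": "user", "content": user_input})
    return messages

msgs = build_messages("historian", "What caused the fall of Rome?")
```

Because the interface, not the user, reiterates the role on each request, the system status stays visible in the UI and consistent in the prompt.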

Character.ai achieves this by using historical figures as familiar focal points. Cultural cues lead us to ask different types of questions to “Al Pacino” than we would “Socrates.” A “character” becomes a heuristic to set user expectations & automatically adjust system settings. It’s like posting up a restaurant menu; visitors no longer need to ask what there is to eat & they can just order instead.

“Humans have limited short-term memories. Interfaces that promote recognition reduce the amount of cognitive effort required from users.”

— Jakob Nielsen in “10 Usability Heuristics for User Interface Design”

Another forgotten usability lesson is that some tasks are easier to do than to explain, especially through the direct manipulation style of interaction popularized in GUIs.

Photoshop’s new generative AI features reinforce this notion by integrating with their graphical interface. While Generative Fill includes an input field, it also relies on skeuomorphic controls like their classic lasso tool. Describing which part of an image to manipulate is much more cumbersome than clicking it.

Interactions should remain outside of an input field when words are less efficient. Sliders seem like a better fit for sizing, as saying “make it bigger” leaves too much room for subjectivity. Settings like colors & aspect ratios are easier to select than describe. Standardized controls can also let systems better organize prompts behind the scenes. If a model accepts specific values for a parameter, for instance, the interface can provide a natural mapping for how it should be input.
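The "natural mapping" idea can be sketched as follows. This is a hypothetical parameter scheme, with invented control names and prompt syntax rather than any specific model's API: the GUI constrains and validates values so the system, not the user, organizes the prompt behind the scenes.

```python
# Hypothetical image-generation controls: each GUI element maps to a
# constrained parameter, so users never have to describe values in words.
CONTROLS = {
    "aspect_ratio": {"widget": "select", "options": ["1:1", "4:3", "16:9"]},
    "size": {"widget": "slider", "min": 256, "max": 2048, "step": 64},
}

def serialize_prompt(subject, settings):
    """Validate control values and fold them into a structured prompt."""
    for key, value in settings.items():
        spec = CONTROLS[key]
        if spec["widget"] == "select" and value not in spec["options"]:
            raise ValueError(f"{key}: {value!r} is not a supported option")
        if spec["widget"] == "slider" and not spec["min"] <= value <= spec["max"]:
            raise ValueError(f"{key}: {value} is out of range")
    params = " ".join(f"{k}={v}" for k, v in sorted(settings.items()))
    return f"{subject} [{params}]"

print(serialize_prompt("a lighthouse at dusk",
                       {"aspect_ratio": "16:9", "size": 1024}))
```

A select menu can only emit supported ratios and a slider can only emit in-range sizes, so invalid prompts become impossible rather than merely discouraged.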

Most of these usability principles are over three decades old now, which may lead some to wonder if they’re still relevant. Jakob Nielsen recently remarked on the longevity of their relevance, suggesting that “when something has remained true for 26 years, it will likely apply to future generations of user interfaces as well.” However, honoring these usability principles doesn’t require adhering to classic components. Apps like Krea are already exploring new GUIs to manipulate generative AI.

Prompt Engineering Is Engineering

The biggest usability problem with today’s conversational interfaces is that they offload technical work to non-technical users. In addition to low discoverability, another similarity they share with command lines is that ideal output is only attainable through learned commands. We refer to the practice of tailoring inputs to best communicate with generative AI systems as “prompt engineering”. The name itself suggests it’s an expert activity, along with the fact that becoming proficient in it can lead to a $200k salary.

Programming with natural language is a fascinating advancement but seems misplaced as a requirement in consumer applications. Just because anyone can now speak the same language as a computer doesn’t mean they know what to say or the best way to say it — we need to guide them. While all new technologies have learning curves, this one feels steep enough to hinder further adoption & long-term retention.

Prompt engineering as a prerequisite for high-quality output seems to have taken on the mystique of a dark art. Many marketing materials for AI features reinforce this through terms like “magic.” If we assume there is a positive feedback loop at play, this opaqueness must be inspiring consumer intrigue.

But positioning products in the realm of spellbooks & shamans also suggests an indecipherable experience — is this a good long-term strategy? If we assume Steve Krug’s influential lessons from Don’t Make Me Think (2000) still apply, then most people won’t bother to study proper prompting & instead will muddle through.

But the problem with trial & error in generative AI is that there aren’t any error states; you’ll always get a response. For instance, if you ask an LLM to do the math, it will provide you with confident answers that may be completely wrong. So it becomes harder to learn from errors when we are unaware if a response is a hallucination. As OpenAI’s Andrej Karpathy suggests, hallucinations are not necessarily a bug because LLMs are “dream machines,” so it all depends on how interfaces set user expectations.

“But as with people, finding the most meaningful answer from AI involves asking the right questions. AI is neither psychic nor telepathic.”

— Stephen J. Bigelow in 5 Skills Needed to Become a Prompt Engineer

Using magical language risks leading novices to the magical thinking that AI is omniscient. It may not be obvious that its knowledge is limited to the training data.

Once the magic dust fades away, software designers will realize that these decisions are the user experience!

Crafting delight comes from selecting the right prompting techniques, knowledge sourcing, & model selection for the job to be done. We should be exploring how to offload this work from our users.

  • Empty states could explain the limits of an AI’s knowledge & allow users to fill gaps as needed.
  • Onboarding flows could learn user goals to recommend relevant models tuned with the right reasoning.
  • An equivalent to fuzzy search could markup user inputs to educate them on useful adjustments.

We’ve begun to see a hint of this with OpenAI’s image generator rewriting a user’s input behind the scenes to optimize for better image output.

Lamborghini Pizza Delivery

Aside from the cognitive cost of usability issues, there is a monetary cost to consider as well. Every interaction with a conversational interface invokes an AI to reason through a response. This requires a lot more computing power than clicking a button within a GUI. At the current cost of computing, this expense can be prohibitive. There are some tasks where the value from added intelligence may not be worth the price.

For example, the Wall Street Journal suggests using an LLM for tasks like email summarization is “like getting a Lamborghini to deliver a pizza.” Higher costs are, in part, due to the inability of AI systems to leverage economies of scale in the way standard software does. Each interaction requires intense calculation, so costs scale linearly with usage. Without a zero-marginal cost of reproduction, the common software subscription model becomes less tenable.
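The linear scaling can be made concrete with back-of-the-envelope arithmetic. All prices below are placeholders for illustration, not real vendor rates:

```python
# Illustrative numbers only: a per-token LLM price vs. the near-zero
# marginal cost of serving a conventional GUI action.
LLM_COST_PER_1K_TOKENS = 0.01   # hypothetical dollars per 1,000 tokens
GUI_COST_PER_ACTION = 0.00001   # hypothetical server cost of a button click

def monthly_cost(interactions, tokens_per_interaction=1500):
    """Compare costs for a month of usage: the LLM bill scales linearly
    with tokens, while the GUI cost stays negligible."""
    llm = interactions * tokens_per_interaction / 1000 * LLM_COST_PER_1K_TOKENS
    gui = interactions * GUI_COST_PER_ACTION
    return llm, gui

llm, gui = monthly_cost(1_000_000)
print(f"LLM: ${llm:,.0f}  GUI: ${gui:,.2f}")
```

Under these toy assumptions, a million conversational interactions costs thousands of dollars while the same volume of button clicks costs a few; doubling usage doubles the LLM bill, with no economy of scale to soften it.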

Will consumers pay higher prices for conversational interfaces or prefer AI capabilities wrapped in cost-effective GUI? Ironically, this predicament is reminiscent of the early struggles GUIs faced. The processor logic & memory speed needed to power the underlying bitmaps only became tenable when the price of RAM chips dropped years later. Let’s hope history repeats itself.

Another cost to consider is the security risk: what if your Lamborghini gets stolen during the pizza delivery? If you let people ask AI anything, some of those questions will be manipulative. Prompt injections are attempts to infiltrate systems through natural language. The right sequence of words can turn an input field into an attack vector, allowing malicious actors to access private information & integrations.

So be cautious when positioning AI as a member of the team since employees are already regarded as the weakest link in cyber security defense. The wrong business logic could accidentally optimize for the number of phishing emails your organization falls victim to.

Good design can mitigate these costs by identifying where AI is most meaningful to users. Emphasize human-like conversational interactions at these moments but use more cost-effective elements elsewhere. Protect against prompt injections by partitioning sensitive data so it’s only accessible by secure systems. We know LLMs aren’t great at math anyway, so free them up for creative collaboration instead of managing boring billing details.

Generations Are Predictions

In my previous Smashing article, I explained the concept of algorithm-friendly interfaces. They view every interaction as an opportunity to improve understanding through bidirectional feedback. They provide system feedback to users while reporting performance feedback to the system. Their success is a function of maximizing data collection touchpoints to optimize predictions. Accuracy gains in predictive output tend to result in better user retention. So good data compounds in value by reinforcing itself through network effects.

While my previous focus was on content recommendation algorithms, could we apply this to generative AI? While the output is very different, they’re both predictive models. We can customize these predictions with specific data like the characteristics, preferences, & behavior of an individual user.

So, just as Spotify learns your musical taste to recommend new songs, we could theoretically personalize generative AI. Midjourney could recommend image generation parameters based on past usage or preferences. ChatGPT could invoke the right roles at the right time (hopefully with system status visibility).

This territory is still somewhat uncharted, so it’s unclear how algorithm-friendly conversational interfaces are. The same discoverability issues affecting their usability may also affect their ability to analyze engagement signals. An inability to separate signal from noise will weaken personalization efforts. Consider a simple interaction like tapping a “like” button; it sends a very clean signal to the backend.

What is the conversational equivalent of this? Inputting the word “like” doesn’t seem like as reliable a signal because it may be mentioned in a simile or mindless affectation. Based on the insights from my previous article, the value of successful personalization suggests that any regression will be acutely felt in your company’s pocketbook.
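The noise problem is easy to demonstrate. A naive sketch (the event name below is invented for illustration) shows why mining free text for “like” yields false positives in a way a button press never can:

```python
def naive_like_signal(message):
    """A naive attempt to mine a 'like' signal from free text."""
    return "like" in message.lower()

# A simile triggers a false positive, polluting the engagement data:
assert naive_like_signal("It sounds like a trumpet")   # noise, not sentiment
assert naive_like_signal("I really like this song")    # genuine signal
assert not naive_like_signal("Play something upbeat")

def button_like_signal(event):
    """A GUI button, by contrast, emits one unambiguous event."""
    return event == "like_button_tapped"
```

Every tap of the button is a clean positive label, while the text heuristic cannot tell sentiment from simile without further (and costlier) interpretation.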

Perhaps a solution is using another LLM as a reasoning engine to format unstructured inputs automatically into clear engagement signals. But until their data collection efficiency is clear, designers should ask if the benefits of a conversational interface outweigh the risk of worse personalization.

Towards The Next Layer Of Abstraction

As this new paradigm shift in computing evolves, I hope this is a helpful primer for thinking about the next interface abstractions. Conversational interfaces will surely be a mainstay in the next era of AI-first design. Adding voice capabilities will allow computers to augment our abilities without arching our spines through unhealthy amounts of screen time. Yet conversation alone won’t suffice, as we also must design for needs that words cannot describe.

So, if no interface is a panacea, let’s avoid simplistic evolutionary tales & instead aspire towards the principles of great experiences. We want an interface that is integrated, contextual, & multimodal. It knows sometimes we can only describe our intent with gestures or diagrams. It respects when we’re too busy for a conversation but need to ask a quick question. When we do want to chat, it can see what we see, so we aren’t burdened with writing lengthy descriptions. When words fail us, it still gets the gist.

Avoiding Tunnel Visions Of The Future

This moment reminds me of a cautionary tale from the days of mobile-first design. A couple of years after the iPhone’s debut, touchscreens became a popular motif in collective visions of the future. But Bret Victor, the revered Human-Interface Inventor (his title at Apple), saw touchscreens more as a tunnel vision of the future.

In his brief rant on peripheral possibilities, he remarks how they ironically ignore touch altogether. Most of the interactions mainly engage our sense of sight instead of the rich capabilities our hands have for haptic feedback. How can we ensure that AI-first design amplifies all our capabilities?

“A tool addresses human needs by amplifying human capabilities.”

— Bret Victor in “A Brief Rant on the Future of Interaction Design”

I wish I could leave you with a clever-sounding formula for when to use conversational interfaces. Perhaps some observable law stating that the mathematical relationship expressed by D ∝ 1/G elucidates that ‘D’, representing describability, exhibits an inverse correlation with ‘G’, denoting graphical utility — therefore, as the complexity it takes to describe something increases, a conversational interface’s usability diminishes. While this observation may be true, it’s not very useful.

Honestly, my uncertainty at this moment humbles me too much to prognosticate on new design principles. What I can do instead is take a lesson from the recently departed Charlie Munger & invert the problem.

Designing Backwards

If we try to design the next abstraction layer looking forward, we seem to end up with something like a chatbot. We now know why this is an incomplete solution on its own. What if we look at the problem backward to identify the undesirable outcomes that we want to avoid? Avoiding stupidity is easier than seeking brilliance, after all.

An obvious mistake to steer clear of is forcing users to engage in conversations without considering time constraints. When the time is right to chat, it should be in a manner that doesn’t replace existing usability problems with equally frustrating new ones. For basic tasks of equivalent importance to delivering pizza, we should find practical solutions not of equivalent extravagance to driving a Lamborghini. Furthermore, we ought not to impose prompt engineering expertise as a requirement for non-expert users. Lastly, as systems become more human-like, they should not inherit our gullibility, lest our efforts inadvertently optimize for exponentially easier access to our private data.

A more intelligent interface won’t make those stupid mistakes.

Thanks to Michael Sands, Evan Miller, & Colin Cowley for providing feedback on early drafts of this article.

The Feature Trap: Why Feature Centricity Is Harming Your Product

Most product teams think in terms of features. Features are easy to brainstorm and write requirement docs for, and they fit nicely into our backlogs and ticketing systems. In short, thinking in terms of features makes it easy to manage the complex task of product delivery.

However, we know that the best products are more than the sum of their parts, and sometimes, the space between the features is as important as the features themselves. So, what can we do to improve the process?

The vast majority of product teams are organized around delivering features — new pieces of functionality that extend the capabilities of the product. These features will often arise from conversations the company is having with prospective buyers:

  • “What features are important to you?”
  • “What features are missing from your current solution?”
  • “What features would we need to add in order to make you consider switching from your existing provider to us?” and so on.

The company will then compile a list of the most popular feature requests and will ask the product team to deliver them.

For most companies, this is what customer centricity looks like: asking customers to tell them what they want, and then building those features into the product in the hope they’ll buy. This is based on the fundamental belief that people buy products primarily for their features, so we assemble our roadmaps accordingly.

We see this sort of thinking with physical products all the time. For instance, take a look at the following Amazon listing for one of the top-rated TV sets from last year. It’s like they hurled up the entire product roadmap directly onto the listing!

Now, of course, if you’re a hardcore gamer with very specific requirements, you might absolutely be looking for a TV with “VRR, ALLM, and eARC as specified in HDMI2.1, plus G-Sync, FreeSync, Game Optimizer, and HGiG.” But for me? I don’t have a clue what any of those things mean, and I don’t really care. Instead, I’ll go to a review site where they explain what the product actually feels like to use in everyday life. The reviewers will explain how good the unboxing experience is. How sturdy the build is. How easy it is to set up. They’ll explain that the OS is really well put together and easy to navigate, the picture quality is probably the best on the market, and the sound, while benefiting from the addition of a quality sound bar, is very clear and understandable. In short, they’ll be describing the user experience.

The ironic thing is that when I talk to most founders, product managers, and engineers about how they choose a TV, they’ll say exactly the same thing. And yet, for some reason, we struggle to take that personal experience and apply it to our own users!

Tip: As a fun little trick, next time you find yourself arguing about features over experience, ask people to get out their phones. I bet that the vast majority of folks in the room will have an iPhone, despite Samsung and Google phones generally having better cameras, more storage, better screens, and so on. The reason why iPhones have risen in dominance (if we ignore the obvious platform lock-in) is because, despite perhaps not having the best feature set on the market, they feel so nice to use.

Seeing Things From The Users’ Perspective

While feature-centric thinking is completely understandable, it misses a whole class of problems. The features in and of themselves might look good on paper and work great in practice, but do they mesh together to form a convincing whole? Or is the full experience a bit of a mess?

All the annoying bumps, barriers, and inconsistencies that start accruing around each new feature, if left unsolved, can limit the amount of value users can extract from the product. And if you don’t effectively identify and remove these barriers in a deliberate and structured way, any additional functionality will simply add to the problem.

If users are already struggling to extract value from existing features, how do you expect them to extract any additional value you might be adding to the product?

“As a product manager, it’s natural to want to offer as many features as possible to your customers. After all, you want to provide value, right? But what happens when you offer too many features? Your product becomes bloated, convoluted, and difficult to use.”
— “Are Too Many Features Hurting Your Product?”

These barriers and inconsistencies are usually the result of people not thinking through the user experience. And I don’t mean user experience in some abstract way. I mean literally walking through the product step-by-step as though you’d never seen it before — sometimes described as having a “beginner’s mind” — and considering the following questions:

  • Is it clear what value this product delivers and how I can get that value?
  • If I were a new user, would the way the product is named and structured make sense to me?
  • Can I easily build up a mental model of where everything is and how the product works?
  • Do I know what to do next?
  • How is this going to fit into my existing workflow?
  • What’s getting in my way and slowing me down?

While approaching things with a beginner’s mind sounds easy, it’s actually a surprisingly hard mindset for people to adopt — letting go of everything they know (or think they know) about their product, market, and users. Instead, their position as a superuser tends to cloud their judgment: believing that because something is obvious to them (something that they have created and have been working on for the past two years), it will be obvious to a new user who has spent less than five minutes with the product. This is where usability testing (a UX research method that evaluates whether users are able to use a digital product efficiently and effectively) should normally “enter the stage.”

The issue with trying to approach things with a beginner’s mind is also often exacerbated by “motivated reasoning,” the idea that we view things through the lens of what we want to be true, rather than what is true. To this end, you’re much more likely to discount feedback from other people if that feedback is going to result in some negative outcome, like having to spend extra time and money redesigning a user flow when you’d rather be shipping that cool new feature you came up with last week.

I see this play out in usability testing sessions all the time. The first subject comes in and struggles to grasp a core concept, and the team rolls their eyes at the incompetence of the user. The next person comes in and has the same experience, causing the team to ask where you found all these stupid users. However, as the third, fourth, and fifth person comes through and experiences the same challenge, “lightbulbs” slowly start forming over the team members’ heads:

“Maybe this isn’t the users’ fault after all? Maybe we’ve assumed a level of knowledge or motivation that isn’t there; maybe it’s the language we’ve used to describe the feature, or maybe there’s something in the way the interface has been designed that is causing this confusion?”

These kinds of insights can cause teams to fundamentally pivot their thinking. But this can also create a huge amount of discomfort and cognitive dissonance — realizing that your view of the world might not be entirely accurate. As such, there’s a strong motivation for people to avoid these sorts of realizations, which is why we often put so little effort (unfortunately) into understanding how our users perceive and use the things we create.

Developing a beginner’s mind takes time and practice. It’s something that most people can cultivate, and it’s actually something I find designers are especially good at — stepping into other people’s shoes, unclouded by their own beliefs and biases. This is what designers mean when they talk about using empathy.

Towards A Two-Tier Process (Conclusion)

We obviously still need to have “feature teams.” Folks who can understand and deliver the new capabilities our users request (and our business partners demand). While I’d like to see more thought and validation when it comes to feature selection and creation, it’s often quicker to add new features to see if they get used than to try and use research to give a definitive answer.

As an example, I’m working with one founder at the moment who has been going around the houses with his product team for months about whether a feature would work. He eventually convinced them to give it a try — it took four days to push out the change, and they got the feedback they needed almost instantly.

However, as well as having teams focused on delivering new user value, we also need teams who are focused on helping unlock and maximize existing user value. These teams need to concentrate on outcomes over outputs: less “deliver X capability in Y sprints” and more “deliver X improvement by Y date.” To do this, these teams need a high level of agency, which means taking them out of the typical feature factory mindset.

The teams focusing on helping unlock and maximize existing user value need to be a little more cross-disciplinary than your traditional feature team. They’re essentially developing interventions rather than new capabilities — coming up with a hypothesis and running experiments rather than adding bells and whistles. “How can we improve the onboarding experience to increase activation and reduce churn?” Or, “How can we improve messaging throughout the product so people have a better understanding of how it works and increase our North Star metric as a result?”

There’s nothing radical about focusing on outcomes over outputs. In fact, this way of thinking is at the heart of both the Lean Startup movement and Product-Led Growth. The problem is that while this is seen as received wisdom, very few companies actually put it into practice (although if you ask them, most founders believe that this is exactly what they do).

Put simply, you can’t expect teams to work independently to deliver “outcomes” if you fill their calendars with output work.

So this two-tier system is really a hack, allowing you to keep sales, marketing, and your CEO (and your CEO’s partner) happy by delivering a constant stream of new features while spinning up a separate team who can remove themselves from the drum-beat of feature delivery and focus on the outcomes instead.

Further Reading

  • “Why Too Many Features Can Ruin a Digital Product Before It Begins” (Komodo Digital)
    Digital products are living, ever-evolving things. So, why do so many companies force feature after feature into projects without any real justification? Let’s talk about feature addiction and how to avoid it.
  • “Are Too Many Features Hurting Your Product?” (FAQPrime)
    As a product manager, it’s natural to want to offer as many features as possible to your customers. After all, you want to provide value, right? But what happens when you offer too many features? Your product becomes bloated, convoluted, and difficult to use. Let’s take a closer look at what feature bloat is, why it’s a problem, and how you can avoid it.
  • “Twelve Signs You’re Working in a Feature Factory,” John Cutler
    The author started using the term Feature Factory when a software developer friend complained that he was “just sitting in the factory, cranking out features, and sending them down the line.” This article was written in 2016 and still holds its ground today. A newer version, “Twelve Signs You’re Working in a Feature Factory — Three Years Later,” appeared in 2019.
  • “What Is The Agile Methodology?” (Atlassian)
    The Agile methodology is a project management approach that involves breaking the project into phases and emphasizes continuous collaboration and improvement. Teams follow a cycle of planning, executing, and evaluating.
  • “Problem Statement vs Hypothesis — Which Is More Important?” Sadie Neve
    When it comes to experimentation and conversion rate optimization (CRO), we often see people relying too much on their instincts. But in reality, nothing in experimentation is certain until tested. This means experimentation should be approached like a scientific experiment that follows three core steps: identify a problem, form a hypothesis, and test that hypothesis.
  • “The Build Trap,” Melissa Perri (Produx Labs)
    The “move fast and break things” mantra seems to have taken the startup world by storm since Facebook made it their motto a few years ago. But there is a serious flaw with this phrase, and it’s that most companies see this as an excuse to stop analyzing what they intend to build and why they should build it — those companies get stuck in what I call “The Build Trap.”
  • “What Is Product-led Growth?” (PLG Collective)
    We are in the middle of a massive shift in the way people use and buy software. It’s been well over a decade since Salesforce brought software to the cloud. Apple put digital experiences in people’s pockets back in 2009 with the first iPhone. And in the years since, the market has been flooded with consumer and B2B products that promise to meet just about every need under the sun.
  • “The Lean Startup,” Eric Ries
    The Lean Startup isn’t just about how to create a more successful entrepreneurial business. It’s about what we can learn from those businesses to improve virtually everything we do.
  • “Usability Testing — The Complete Guide,” Daria Krasovskaya and Marek Strba
    Usability testing is the ultimate method of uncovering any type of issue related to a system’s ease of use, and it truly is a must for any modern website or app owner.
  • “The Value of Great UX,” Jared Spool
    How can we show that a great user experience produces immense value for the organization? We can think of experience as a spectrum, from extreme frustration to delight. In his article, Jared will walk you through how our work as designers is able to transform our users’ experiences from being frustrated to being delighted.
  • “Improving The Double Diamond Design Process,” Andy Budd (Smashing Magazine)
    The so-called “Double Diamond” is a great way of visualizing an ideal design process, but it’s just not the way most companies deliver new projects or services. The article proposes a new “Double Diamond” idea that better aligns with the way work actually gets done and highlights the place where design has the most leverage.
  • “Are We Moving Towards a Post-Agile Age?” Andy Budd
    Agile has been the dominant development methodology in our industry for a while now. While some teams are just getting to grips with Agile, others have extended it to the point that it’s no longer recognizable as Agile; in fact, many of the most progressive design and development teams are Agile only in name. What they are actually practicing is something new, different, and innately more interesting — something I’ve been calling Post-Agile thinking.

How to Make a Transportation and Logistics Website in WordPress

Do you want to make a transportation and logistics WordPress website?

If you run a logistics and transportation business, then you will need an online presence to get your brand known and set yourself apart from the competition. WordPress is one of the easiest and most flexible platforms to build a website for that very purpose.

In this article, we will show you how to make a transportation and logistics website in WordPress.

How to Make a Transportation and Logistics Website in WordPress

What Features Should a Logistics & Transportation Site Have?

Like any other business, transportation and logistics companies need a professional website to reach customers online. Without a site, your business might miss out on opportunities and struggle to communicate effectively with potential clients.

But having a website isn’t just about showing your brand. Big companies like DHL use their websites to help customers track their shipments and answer questions quickly.

That’s why logistics and transportation companies usually have certain unique features on their sites, like shipment tracking.

This function allows customers to monitor where their shipment is located in real-time and identify any potential issues with the delivery.

Other than that, a logistics and transportation website should follow best practices, like responsive website design, fast loading speeds, and strong security to prevent unauthorized access.

With that in mind, let’s look at how you can make a transportation and logistics website using WordPress, the most popular website builder on the market. You can use the quick links below to navigate through the steps:

Step 1: Get a Hosting Plan and Domain Name

The first step is to sign up for a WordPress hosting service. If you are unfamiliar with web hosting, then it’s essentially a service that stores and displays your website files so that they are publicly accessible.

At WPBeginner, we recommend using Bluehost for your WordPress hosting. Besides offering great value for money, they are also fast and easy to use, even for complete beginners.

Bluehost offer for WPBeginner readers

Bluehost also has a huge discount for WPBeginner readers, along with a free domain name and an SSL certificate. You can sign up by clicking on the button below:

Since you will be running a logistics website, we recommend going with the Bluehost Pro plan. It’s designed for high traffic, so your site will stay online at all times, even if multiple users are tracking their shipments.

Simply click on ‘Select’ beneath the plan you want to buy.

Bluehost Pricing Plans

Once you have chosen a plan, you will need to pick a domain name, which is the online address for your website.

In general, it’s best to use a domain that includes your brand name in it, like fedex.com or dhl.com. If you want, you may also add a transportation or logistics-related keyword after it, like murphylogistics.com.

For help with picking the best domain name, see our guide on how to choose a domain name for your WordPress website. You can also try WPBeginner’s free business name generator to play around with some options.

Once you have chosen a domain name, just click ‘Next.’

Choosing a logistics website domain name in Bluehost

After this, you will be asked to enter your account information: business email address, name, country, phone number, and more.

You will also see optional extras that you can buy. We generally don’t recommend buying them straight away, as you can always add them later if your business needs them.

Bluehost's package extras

At this stage, you can insert your payment information to complete the purchase.

Then, you will receive a confirmation email with the login credentials to your Bluehost dashboard, which is the control panel where you will manage your logistics site.

Step 2: Create a New WordPress Website

Note: If you have chosen other hosting services like SiteGround, DreamHost, HostGator, or WP Engine, then read our guide on how to install WordPress for step-by-step instructions.

If you used our Bluehost link before, then Bluehost will automatically install WordPress on your hosting service, so you can skip this section.

That said, if you missed this step or want to set up another WordPress site on the same hosting plan, you can follow these instructions.

First, go to the ‘Websites’ tab in the Bluehost dashboard. Then, click the ‘Add Site’ button.

Adding a new site in Bluehost

The Bluehost website setup wizard will now appear.

To begin, simply select ‘Install WordPress’ and click ‘Continue.’

Choosing WordPress as the CMS to use in Bluehost

You can now insert a title for your website.

After that, just click ‘Continue.’

Inserting a site title in Bluehost

At this stage, you can connect a domain name to your website.

You can add your existing domain or use a temporary subdomain until you are ready to purchase a new domain name.

Connecting a domain name to a website in Bluehost

Now, just wait a few moments for Bluehost to install WordPress.

Once the installation is successful, you will land on the ‘Websites’ tab in Bluehost again and find your new site there. To log in to the WordPress admin panel, just click ‘Edit Site.’

Clicking on the Edit Site button in Bluehost

Alternatively, you can use your WordPress login URL (like example.com/wp-admin/) in your web browser. Make sure to replace the domain name with your own.

At this point, you can continue to the next steps to start creating the transportation and logistics WordPress website.

Step 3: Choose a Transportation and Logistics WordPress Theme

WordPress themes make it easy to create a good-looking website without web design skills. All you have to do is choose a theme you like, install it, and tweak some of the design elements.

When you first install WordPress, you will have one of the default themes installed, which may not be the most attractive. But don’t worry, there are many other logistics and transportation WordPress themes that you can use.

For guidance on theme setup and theme recommendations, you can check out the following articles:

How to Edit Your Logistics and Transportation WordPress Theme

The great thing about WordPress is that it offers several options for customizing your theme, so you can choose the one that best suits your skills and needs.

One is to use the WordPress Full Site Editor (FSE), which is what you will use with a WordPress block theme.

Check out our beginner’s guide to WordPress Full Site Editing for step-by-step guidance.

Using the WordPress Full Site Editor to edit a transportation and logistics website

Another option is to use the Theme Customizer, which is the default option for classic WordPress themes. You can read more about how to edit a theme using the Theme Customizer in our article.

However, our recommendation is to use a page builder plugin like SeedProd.

While WordPress’ built-in editing features are good, their customization options may be a bit basic. Since you are working on a professional website, you want to leave a memorable impression on visitors.

SeedProd offers a flexible drag-and-drop builder with various fonts, color options, widgets, and even animations to personalize your website design. Plus, you get access to 300+ theme templates that are optimized for conversions from the get-go.

The SeedProd page builder plugin for WordPress

To use a SeedProd theme, you will need to install the SeedProd plugin. While a free version of SeedProd is available, we recommend getting a Pro or Elite plan. Both come with the Theme Builder, which allows you to customize every part of the theme.

For instructions on plugin installation, see our guide on how to install a WordPress plugin.

Once the plugin is installed and active, go ahead and activate your license. Simply paste your license key and click ‘Verify key.’

Adding the SeedProd license key to your WordPress website

After that, go to SeedProd » Theme Builder.

Now, just click ‘Theme Template Kits.’

Accessing SeedProd's Theme Template Kits

You will now see dozens of templates on the screen.

For a transportation and logistics website, you can use the Oceanic Cargo Shipping Agency theme. The theme template kit already has an attractive services page, so you can simply adjust the information and images there for your business.

Just hover your cursor over it and click the orange checkmark.

Choosing the Oceanic Cargo Shipping Agency SeedProd theme

You will now be directed to the SeedProd page builder, where you can drag and drop blocks, add new sections, change the background, create animated effects, and so on.

Every area is customizable, so feel free to play around with the editor.

Editing the transportation and logistics SeedProd theme

For more information about using SeedProd, you can check out our guide on how to create a custom theme in WordPress.

Step 4: Create a Homepage With a Services Section

When editing your website design, one of the most important things you should pay attention to is the homepage.

As the first page that visitors will most likely see, the homepage has to create a strong impression and give users enough information about your logistics business.

Typically, new WordPress websites have a homepage that displays their latest blog posts.

Example of a blog homepage

Since you are running a business site, it’s a good idea to separate your blog page from your homepage and create a new custom static front page from scratch. Otherwise, people may think your website is mainly for blogging and not for business.

You also want to add a services section to your homepage to give users an overview of what kind of logistics and transportation services you offer. Here’s a great example by DHL:

DHL's shipping service section

We also recommend linking this section to your services page later on so that you can provide more details about each offer there.

For guidance on creating a good-looking homepage, you can check out our articles on how to create a custom homepage and how to create a services section in WordPress.

Step 5: Set Up Your Important Web Pages

Once you have set up your homepage, it’s time to create other pages on your transportation and logistics WordPress website.

We have an article that details the most important pages your WordPress site should have. But for this type of business, here are some pages that you should pay careful attention to:

  • Services page(s) – This is where you will detail the services you offer. You can include the types of shipping supplies and boxes, types of delivery, and their prices. Feel free to create a dedicated child page for each of your services to provide more details.
  • Contact page – Here, potential customers can get in touch with you, or existing clients can reach out for help. We recommend adding a contact form using WPForms and including relevant contact information like your business address and phone number.
  • Service locations page – Highlight the areas where your transportation and logistics services are available. This will be helpful if you have multiple pickup and dropoff points that customers can go to.
  • Shipment tracking page – This page allows clients to monitor their shipments in real-time. You will want to create a blank page for this now, as we will show you how to add the tracking feature in the next step.
  • Booking page for pickups – This is for clients to schedule a pickup service for their packages. We will also show you how to add the booking form to this page later.
  • Customer portal – Create a secure and user-friendly portal for customers to access their shipment history, payments, invoices, and any other relevant data. Check out our article on how to make a client portal for step-by-step guidance.
  • FAQ page – Answer common questions clients may have so that they can better understand your services and feel confident about doing business with you. You can learn more about this topic in our article about adding an FAQ section in WordPress.

For more information, just see our article on how to create a custom page in WordPress.

Step 6: Install a Cargo Tracking Plugin for Your Logistics Site

We mentioned earlier that you will need a shipment tracking page for customers to monitor their deliveries. After setting up the page for this, you will need to install a cargo tracking plugin to display the user’s shipping information.

WPCargo is one cargo tracking plugin you could use. The free plugin comes with the standard shipment tracking functionality, including auto-tracking IDs, shipment management tools, and tracking forms. This may be enough if your business is new and those are all the features you need at the moment.

There is also a premium plugin that gives you access to a barcode scanner, custom field manager, and more.

To use WPCargo, you need to install and activate the plugin. Then, go to WPCargo » General Settings from your WordPress dashboard.

On this page, you can add information about your services, like the types of shipments, shipment modes, shipment locations, and shipment carriers.

All this information will be useful when you need to add a new shipment from the WordPress admin.

WPCargo's general settings

One of the things you want to do in this tab is scroll down to ‘Track Page Settings.’

Then, select a page to insert the [wpcargo_trackform] shortcode.

Choosing a page for customers to track shipments in WPCargo

This shortcode will display a field where users can enter their shipment tracking number and get a real-time status on where their shipment is.

Here is what it looks like:

WPCargo's tracking shipment page on the frontend
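If you ever need to place the tracking form on a different page yourself, you can paste the same shortcode into the page content (for example, in a Shortcode block in the block editor):

```
[wpcargo_trackform]
```

WordPress replaces the shortcode with the plugin’s tracking form when the page is rendered.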

Other than that, you can go ahead and configure other settings, like customizing the shipment number format and assigning shipment emails.

Once you’ve done that, just scroll down to click ‘Save Changes.’

Clicking save changes in WPCargo

If you switch to the ‘Multiple Package Settings’ tab, then you can choose whether clients can ship multiple packages in one order.

If so, feel free to specify what dimension and weight units to use and what package types they can select.

WPCargo's multiple package settings

Moving on to the Map Settings tab, you can choose to enable a map where users can view their shipment history.

We only recommend activating this setting if you know how to work with Google Maps APIs.

WPCargo's map settings

The Client Email Settings and Admin Email Settings tabs work in much the same way. This is where you can customize the email notifications sent to website administrators and clients.

WPCargo comes with handy shortcode tags that you can include to display the shipment data. You can also choose which shipment statuses will make the plugin send an email notification to the client.

WPCargo's client email settings

Whenever you configure the settings in a WPCargo tab, don’t forget to save your changes.

Expert Tip: At times, emails sent from WordPress don’t get successfully delivered due to your site’s hosting configuration. To prevent this from happening, we recommend using the WP Mail SMTP plugin. For more details, you can see our guide on how to fix the WordPress not sending emails issue.

Adding a New Shipment in WPCargo

If you want to add a new shipment in WPCargo, then you have to do it in the WordPress admin. Only WPCargo admin, employee, and agent user roles have this access, whereas clients on your website don’t.

To add a new shipment, go to WPCargo » Add Shipment. Then, fill out the shipper and receiver’s details.

Adding a new shipment in WPCargo

Scrolling down, you will need to fill out more information about the shipment details.

All the things you configured in the General Settings will show up here as options you can choose.

Filling out WPCargo's shipment details

After that, scroll back up to the ‘Assign shipment to’ section.

Make sure to assign the shipment to a Client, Agent, and Employee responsible for it.

Assigning a WPCargo shipment

Then, move down to the ‘Current Status:’ section and update the shipment’s date, time, location, status, and remarks, if any.

After that, just click ‘Publish.’ Depending on your email settings from earlier, the plugin will notify the client about the shipment’s status.

Updating the shipment status in WPCargo

For an alternative method, you can see our guide on how to offer shipment tracking in WooCommerce.

Step 7: Add a Shipping Calculator Form to Your WordPress Site

Besides a tracking plugin, logistics companies typically have a shipping calculator form on their websites. This feature helps potential customers see the estimated price of their shipments, which can be handy if they want to deliver multiple packages.

If you use WPCargo, the developers offer a premium add-on for creating a shipping calculator. It allows users to see the distance between the origin and destination and the resulting fees based on that distance.

WPCargo vehicle rate add-on

Alternatively, you can use WPForms, which is the easiest WordPress form plugin on the market. It includes 1200+ templates for various forms, including a shipping cost calculator form.

All you need to do is install the plugin, choose this template, and customize the form fields to your liking.

The shipping cost calculator form already includes a calculations add-on that will automatically calculate the shipping price based on the user’s information.

Creating a shipping cost calculator in WPForms

For more information about this topic, check out our guide on how to add a shipping calculator in WordPress.

Step 8: Create a Booking Form for Scheduling Pickups

If you offer package pickup services, then it’s a good idea to create a booking form for customers to easily schedule their pickups on your website.

A booking form on a logistics site usually asks for information like:

  • The sender’s details, such as their contact information and origin address.
  • The receiver’s details, including their contact information and destination address.
  • Package weight and dimensions.
  • The shipping supplies they need, such as what type of box they want to use.
  • The type of delivery, such as express delivery or regular delivery.
  • Pickup date and time.

For that last part, we recommend specifying the availability of your pickup schedule. This way, customers can’t insert a date and time that’s outside of your work hours.

You can display this booking form on a dedicated page for scheduling pickups and the account page of your customer portal.

Creating a pickup booking form in WPForms

Our guide on how to create a booking form in WordPress can walk you through the entire process.

Step 9: Enable Payment Methods in Your Transportation Website

To accept payments for your transportation and logistics services, you will need to enable some payment methods on your website.

Usually, WordPress websites install an eCommerce or shopping cart plugin to accept payments. You can follow our guide on how to make an online store for more details.

If you are looking for a simpler solution, then we recommend using the WP Simple Pay plugin. It’s a Stripe payment plugin that lets you create a payment form without having to add an unnecessary shopping cart feature to your website.

WP Simple Pay

Since WP Simple Pay uses Stripe, you will have multiple payment methods by default, including buy-now-pay-later payment options.

If you use WPForms, then you can also add a payment function to your forms by connecting the plugin with Stripe. Or, you can install payment add-ons like Square, PayPal Commerce, and Authorize.net.

The payment fields in WPForms

For more information about enabling payments, just check out our guides on how to easily accept credit card payments.

If you have B2B clients that use your logistics services regularly, then we also recommend reading our article on accepting recurring payments in WordPress.

Step 10: Create a Request a Quote Form for Your Business

If you offer transportation and logistics services to businesses, then you may use a custom pricing structure that varies by the company’s needs.

In this case, it may not be possible to display a set list of prices on your website. Instead, the client has to consult with you first to get more information about your pricing.

It’s best to create a quote request form. Then, users can insert information about their business, company size, and the kind of transportation and logistics services they need. With this information, you can offer them the right service and pricing structure.

WPForms has a ‘Request a Quote’ form template ready, so you can use that and simply change the form fields according to your needs.

WPForms' Request a Quote template

You can read more in our article about how to create a request a quote form in WordPress.

For a transportation and logistics business, we recommend following these tips to create your form:

  • Make important form fields required to fill out – These include the type of goods, dimensions, weight, origin, and destination. This ensures that the potential client gives you enough information about the shipment so you can provide the right quote for them.
  • Enable autocompletion for address fields – This feature helps users enter their addresses faster and avoid any mistakes when inserting their information.
  • Mention how long you’ll take to respond – This way, the potential customer isn’t left wondering when to expect an email back. For example, you can say that you’ll get back to them within 24 hours.

Step 11: Add Live Chat to Your Company Website

Unfortunately, problems can happen during shipments and cause delays. When this occurs, customers will want to get answers quickly so that they aren’t wondering if their package has gotten lost or compromised.

To deal with this problem, we recommend using live chat support software. This allows users to talk to you or an agent directly on your website rather than having to email you and wait for a reply.

We recommend installing LiveChat, which is well-known in the customer support industry. With this tool, you can easily customize the live chat window in WordPress so that it doesn’t look out of place with your web design.

LiveChat

For more details, you can see our tutorial on how to add live chat in WordPress.

If you use WhatsApp, then you can also add a WhatsApp chatbox to communicate with users directly. We recommend doing this if the platform is popular in your region and among your target demographic.

WhatsApp chatbox on a website

In many cases, users ask the live chat the same questions over and over. To handle these common questions more efficiently, you can try adding an automated chatbot to your website.

With this, instead of connecting the customer with a live agent, they will have to talk with a chatbot first. The chatbot will then show the user some pre-made responses based on what they’re asking.

For more details, check out our article on how to add a chatbot to your website.

Tools to Increase Sales for Your Transportation and Logistics Business

While you have now successfully created a transportation and logistics WordPress website, the journey doesn’t end here. To ensure the success of your business, you will need to continually optimize your site.

Here are some WordPress plugin and tool recommendations you can use to take your website to the next level:

  • All in One SEO (AIOSEO) – This plugin makes it easy to optimize your website for search engines and increase your site’s rankings. This way, you can get steady organic traffic to your business from Google.
  • MonsterInsights – If you want to use Google Analytics, then MonsterInsights can easily integrate your site with the platform. It has a user-friendly reporting dashboard that tells you where your customers are coming from and what they do on your site.
  • Reviews Feed Pro – Boost your social proof by displaying customer testimonials on your website. With Reviews Feed Pro, you can pull testimonials from third-party review sites like Google Reviews and Trustpilot.

We hope this article has helped you learn how to make a transportation and logistics website in WordPress. You may also want to check out our guides on dropshipping made simple and our expert picks for the best WooCommerce dropshipping plugins.

If you liked this article, then please subscribe to our YouTube Channel for WordPress video tutorials. You can also find us on Twitter and Facebook.

The post How to Make a Transportation and Logistics Website in WordPress first appeared on WPBeginner.

Skyrocket Performance Up To 126% With Our New High Frequency Hosting Plans

Discover the incredible performance benefits of our revolutionary High Frequency hosting plans, the newest and fastest architecture in our hosting range.

As if our award-winning and highly rated managed hosting wasn’t already amazing enough…

Now we’ve taken it to the next level by introducing 7 brand new High Frequency (HF) plans that give a whole new meaning to performance.

What makes these plans different? How powerful are they? And why should you consider High Frequency Hosting?

All is answered in this article, as we give you the lowdown on our newest high-octane creations and put them to the ultimate test.


What is High Frequency Hosting?

Over the years, we’ve introduced many new plans to our hosting lineup, but these High Frequency plans are an entirely different beast.

Although they’re built on the same powerful infrastructure as our other plans, they have a completely different hardware setup.

This is what sets these plans apart and allows them to handle more traffic load, perform tasks at a higher speed, and execute operations more effectively (more on that soon).

Here’s a quick look at the next generation technology behind our HF plans:

3GHz+ Intel Xeon CPUs

High Frequency plans are powered by blazing-fast 3GHz+ Intel Xeon processors that are made specifically for sites requiring uncompromising performance.

This cutting-edge processor greatly outperforms standard CPUs, which, for context, typically fall within the 2GHz range.

NVMe SSD storage

Having next-level CPU speed doesn’t mean much if your local disk storage can’t keep up.

This is the advantage of NVMe SSD storage, which, unlike regular SSD storage, has the capacity to match the enhanced CPU throughput.

Combine these two hardware elements, and you have a recipe for high-performance architecture that seamlessly handles increasing requests and maintains reliability as your hosted site’s workload continues to grow.

You and your clients also get to enjoy the following benefits:

  • Better and faster user experiences – Superior CPU and disk performance ensures smooth and blazing-fast user experiences with absolute minimal site downtime and disruption.
  • Scale hosting seamlessly – HF servers are designed to accommodate and effortlessly scale with increasing website demand. Plan upgrades are also easy and affordable.
  • Stand out from the crowd – Give your hosting services a competitive edge and target clients seeking high-performance solutions.

But enough about the tech and what performance benefits High Frequency ‘should’ give you.

Next, we will truly put them to the test, so you can see how powerful they really are for yourself.

Putting High Frequency To The Test

Before launching these new plans to the world we had to be sure they lived up to their name.

So, our expert hosting team arranged a set of tests to see how these plans performed in real-world scenarios compared to our regular plans.

We decided to test two crucial categories when it comes to hosting performance: CPU performance and disk performance.

Here’s how it went down:

CPU Performance: High Frequency vs Regular Hosting

Methodology:

For the first two tests in the CPU performance category, we simulated multiple people visiting our website’s cached and uncached home page using the ‘Maintain client load’ method.

For the third test we simulated users adding a product to their cart on our website multiple times using this same load method.

All three tests involved scaling up page visitors to a targeted max client count, specific to each plan, within five minutes. During this time, each client repeatedly made requests, simulating high-traffic website use.

The best plans handle more requests and have a lower (faster) average response time.


1. Load test for increasing traffic on *cached* home page

CPU Performance: High Frequency vs Regular hosting plans
CPU Performance test (home page – cached): High Frequency vs Regular hosting plans.

Here is the test data:

A table showing the results of our cached home page test

Results: Up to 108% performance increase!

  • 40-108% increase in requests handled (Regular vs. HF plans)
  • Improved average response time across the board, including more than halving the average response time (Bronze Regular vs. Bronze HF)

2. Load test for increasing traffic on *uncached* home page

CPU Performance: High Frequency vs Regular hosting plans (home page - uncached)
CPU Performance test (home page – uncached): High Frequency vs Regular hosting plans.

Here is the test data:

A table showing the results of our uncached home page performance test

Results: Up to 126% performance increase!

  • 61-126% increase in requests handled (Regular vs. HF plans)
  • Improved average response time across the board, including nearly halving the average response time (Bronze Regular vs. Bronze HF)

3. Stress testing PHP with repeated add-to-cart simulations

CPU Performance: High Frequency vs Regular - add to cart
Results from stress testing PHP with repeated add-to-cart simulations.

Here is the test data:

A table showing the results of our add-to-cart performance test

Results: Up to 108% performance increase!

  • 51-108% increase in requests handled (Regular vs. HF plans)
  • Improved average response time across the board, including a 33.19% speed increase (Bronze Regular vs. Bronze HF)

Disk Performance: High Frequency vs Regular Hosting

Methodology:

For this performance test category we used the FIO (Flexible I/O) tool to perform random read and write operations on the disk.

Higher read/write speeds mean faster uploads and downloads. Faster read/write also means better database performance.
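For readers who want to try something similar, a random read/write benchmark can be described in an fio job file. The parameters below are assumptions for illustration, not the exact configuration used in our tests:

```ini
; Illustrative fio job file (run with: fio randrw-test.fio)
; All values here are example settings, not our test configuration.
[randrw-test]
rw=randrw        ; mixed random reads and writes
bs=4k            ; 4 KiB block size, typical for database-style workloads
size=1g          ; total amount of I/O to perform
ioengine=libaio  ; asynchronous I/O engine on Linux
direct=1         ; bypass the page cache to measure the disk itself
```

fio then reports read and write bandwidth and IOPS, which is what the comparison tables below summarize.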

Testing disk performance through random read/write operations

A table showing the results of our read/write testing

Results: Up to 2.79x performance increase!

  • 1.7-2.78x increase in Read speed (Regular vs. HF plans)
  • 1.71-2.79x increase in Write speed (Regular vs. HF Plans)

Deploy High Frequency Hosting in Minutes

Satisfied that High Frequency can deliver the power, speed, and reliability you’re looking for?

Then check out our new range of HF plans, which, despite the groundbreaking features and performance, offer incredible value for buyers and resellers.

We have a total of 7 plans which scale based on your needs, and you can easily move between plans as well.

Another easy way to try High Frequency, or any of our hosting plans, is through our Agency membership, which automatically gives you $144 worth of free yearly hosting credit, and access to exclusive VIP pricing.

Every WPMU DEV product or plan also comes with an automatic 30-day 100% money-back guarantee, so no matter which option you decide to take, it will be completely risk free.

We hope you are as impressed and excited by our new range of plans as we are. It’s a massive step forward for our hosting, with even more exciting improvements coming in 2024 to look forward to!

Building Components For Consumption, Not Complexity (Part 2)

Welcome back to my long read about building better components — components that are more likely to be found, understood, modified, and updated in ways that promote adoption rather than abandonment.

In the previous installment in the series, we took a good look through the process of building flexible and repeatable components, aligning with the FRAILS framework. In this second part, we will be jumping head first into building adoptable, indexable, logical, and specific components. We have many more words ahead of us.

Adoptable

According to Sparkbox’s 2022 design systems survey, the three biggest challenges recently faced by teams were:

  1. Overcoming technical/creative debt,
  2. Parity between design & code,
  3. Adoption.

It’s safe to assume that points 1. and 2. are mostly due to tool limitations, siloed working arrangements, or poor organizational communication. There is no enterprise-ready design tool on the market that currently provides a robust enough code export for teams to automate the handover process. Neither have I ever met an engineering team that would adopt such a feature! Likewise, a tool won’t fix communication barriers or decades worth of forced silos between departments. This will likely change in the coming years, but I think that these points are an understandable constraint.

Point 3. is a concern, though. Is your brilliant design system adoptable? If we’re spending all this time working on design systems, why are people not using them effectively? Thinking through adoption challenges, I believe we can focus on three main points to make this process a lot smoother:

  1. Naming conventions,
  2. Community-building,
  3. (Over)communication.

Naming Conventions

There are too many ways to name components in our design tool, from camelCase to kebab-case, and from Slash/Naming/Conventions to the more descriptive, e.g., “Product Card — Cart”. Each approach has its pros and cons, but what we need to consider with our selection is how easy it is to find the component you need. Obvious, but this is central to any good name.

It’s tempting to map component naming 1:1 between design and code, but I personally don’t know whether this is what our goal should be. Designers and developers work in different ways and with different methods of searching for and implementing components, so we should cater to the audience. This would aid solutions based on intention, not blindly aiming for parity.

Figma can help bridge this gap with the “component description field” providing us a useful space to add additional, searchable names (or aliases, even) to every component. This means that if we call it a headerNavItemActive in code but a “Header link” in design with a toggled component property, the developer-friendly name can be added to the description field for searchable parity.

The same approach can be applied to styles as well.

There is a likelihood that your developers are working from a more tokenized set of semantic styles in code, whereas the design team may need less abstract styles for the ideation process. This delta can be tricky to navigate from a Figma perspective because we may end up in a world where we’re maintaining two or more sources of truth.

The advice here is to split the quick styles for ideation and semantic variables into different sets. The semantic styles can be applied at the component level, whereas the raw styles can be used for developing new ideas.

As an example, Brand/Primary may be used as the border color of an active menu item in your design files because searching “brand” and “primary” may be muscle memory and more familiar than a semantic token name. Within the component, though, we want to be aliasing that token to something more semantic. For example, border-active.

Note: Some teams go to a further component level with their naming conventions. For example, this may become header-nav-item-active. It’s hyper-specific, meaning that any use outside of this “Header link” example may not make sense for collaborators looking through the design file. Component-level tokens are an optional step in design systems. Be cautious, as introducing another layer to your token schema increases the amount of tokens you need to maintain.

This means if we’re working on a new idea — for example, we have a set of tabs in a settings page, and the border color for the active tab at the ideation stage might be using Brand/Primary as the fill — when this component is contributed back to the system, we will apply the correct semantic token for its usage, our border-active.
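As a sketch, that raw-to-semantic aliasing could be expressed in a design tokens file. The names, hex value, and `{group.token}` reference syntax below are illustrative, loosely following conventions from token tooling such as Style Dictionary:

```json
{
  "brand": {
    "primary": { "value": "#0B5FFF" }
  },
  "border": {
    "active": { "value": "{brand.primary}" }
  }
}
```

The point is that `border.active` points at `brand.primary` rather than duplicating the hex value, so updating the brand color updates every semantic token that references it.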

Do note that this advice is probably best suited to large design teams where your contribution process is lengthier and requires the distinct separation of ideation and production or where you work on a more fixed versioning release cycle for your system. For most teams, a single set of semantic variables will be all you need. Variables make this process a lot easier because we can manage the properties of these separate tokens in a central location. But! This isn’t an article about tokens, so let’s move on.

Community-building

A key pillar of a successful design system is advocacy across the PDE (product, design, and engineering) departments. We want people to be excited, not burdened by its rules. In order to get there, we need to build a community of internal design system advocates who champion the work being done and act as extensions of the central team. This may sound like unpaid support work, but I promise you it’s more than that.

Communicating constantly with designers taught me that with the popularity of design systems booming over the past few years, more and more of us are desperate to contribute to them. Have you ever seen a local component in a file that is remarkably similar to one that already exists? Maybe that designer wanted to scratch the itch of building something from the ground up. This is fine! We just need to encourage that more widely through a more open contribution model back to the central system.

How can the (central) systems team empower designers within the wider organization to build on top of the system foundations we create? What does that world look like for your team? This is commonly referred to as the “hub and spoke” model within design systems and can really help to accelerate interest in your system usage goals.

“There are numerous inflection points during the evolution of a design system. Many of those occur for the same fundamental reason — it is impossible to scale a design system team enough to directly support every demand from an enterprise-scale business. The design system team will always be a bottleneck unless a structure can be built that empowers business units and product teams to support themselves. The hub and spoke (sometimes also called ‘core + federated’) model is the solution.”

— Robin Cannon, “The hub and spoke design system model” (IBM)

In simple terms, a community can be anything as small as a shared Slack/Teams channel for the design system all the way up to fortnightly hangouts or learning sessions. What we do here is help to foster an environment where discussion and shared knowledge are at the center of the system rather than being tacked on after the components have been released.

The team at Zalando has developed a brilliant community within the design team for their system. This is in the form of a sophisticated web portal, frequent learning and educational meetings, and encouraging an “open house” mindset. Apart from the custom-built portal, I believe this approach is an easy-to-reach target for most teams, regardless of size. A starting point for this would be something as simple as an open monthly meeting or office hours, run by those managing your system, with invites sent out to all designers and cross-functional partners involved in production: product managers, developers, copywriters, product marketers, and the list goes on.

For those looking for inspiration on how to run semi-regular design systems events, take a look at what the Gov UK team have started over on Eventbrite. They have run a series of events ranging from accessibility deep dives all the way up to full “design system days.”

Leading with transparency is a solid technique for placing the design system as close as possible to those who use it. It can help to shift the mindset from being a siloed part of the design process to feeding all parts of the production pipeline for all key partners, regardless of whether you build it or use it.

Back to advocacy! As we roll out this transparent and communicative approach to the system, we are well-placed to identify key allies across the product, design, and engineering team/teams that can help steward excellence within their own reach. Is there a product manager who loves picking apart the documentation on the system? Let’s help to position them as a trusted resource for documentation best practices! Or a developer that always manages to catch incorrect spacing token usage? How can we enable them to help others develop this critical eye during the linting process?

This is the right place to mention Design Lint, a Figma plugin that I can only highly recommend. Design Lint will loop through layers you’ve selected to help you find possibly missing styles. When you write custom lint rules, you can check for errors like color styles being used in the wrong way, flag components that aren’t published to your library, mark components that don’t have a description, and more.

Each of these advocates for the system, spread across departments within the business, will help to ensure consistency and quality in the work being produced.

(Over)communication

Closely linked to advocacy is the importance of regular, informative, and actionable communication. Examples of the various types of communication we might send are:

  • Changelog/release notes.
  • Upcoming work.
  • System survey results. (Example: “Design Maturity Results, Sep-2023,” UK Department for Education.)
  • Resource sharing. Found something cool? Share it!
  • Hiring updates.
  • Small wins.

That’s a lot! This is a good thing, as it means there is always something to share among the team to keep people close, engaged, and excited about the system. If your partners are struggling to see how important and central a design system is to the success of a product, this list should help push that conversation in the right direction.

I recommend trying to build a pattern of regularity with your communication, first to build the habit of sharing and, second, to introduce formality and weight to the updates. You might also want to decide whether you look forward or backward with the updates, meaning at the start or end of a sprint if you work that way.

Or perhaps you can follow a pattern as the following one:

  • Changelog/release notes are sent on the final day of every sprint.
  • “What’s next?” is shared at the start of a sprint.
  • Cool resources are shared mid-sprint to help inspire the team (and to provide a break between focus work sessions).
  • Small wins are shared quarterly.
  • Survey results are shared at the start of every second quarter.
  • Hiring updates are shared as they come up.

Outside of the system, communication really does make or break the success of a project, so leading from the front ensures we’re doing everything we can.

Indexable

The biggest issue when building or maintaining a system is knowing how your components will be used (or not used). Of course, we will never know until we try it out (btw, this is also the best piece of design advice I’ve ever been given!), but we need to start somewhere.

Design systems should prioritize quality over speed. But product teams often work in “ship at all costs” mode, prioritizing speed over quality.

“What do you do when a product team needs a UI component, pattern, or feature that the design system team cannot provide in time or is not part of their scope?”

— Josh Clark, “Ship Faster by Building Design Systems Slower”

What this means is starting with real-world needs and problems. The likelihood when starting a system is that you will create all the form fields, then some navigational components, and maybe a few notification/alerts/callouts/notification components (more on naming conventions later) and then publish your library, hoping the team will use those components.

The harsh reality is, though, the following:

  • Your team members aren’t aware of which components exist.
  • They don’t know what components are called yet.
  • There is no immediate understanding of how components are translated into code.
  • You’re building components without needing them yet.

As you continue to sprint on your system, you will realize over time that more and more design work (user flows, feature work) is being pushed over to your product managers or developers without adhering to the wonderful design system you’ve been crafting. Why is that? It’s because people can’t discover your components! (Are they easily indexable?)

This is where the importance of education and communication comes into play. Whether it’s from design to development, design to copywriting, product to design, or brand to product, there is always a little bit more communication that can happen to ease these tensions within teams. Design Ops as a profession is growing in popularity amongst larger organizations for this very purpose — to better foster and facilitate communication channels not only amongst disparate design teams but also cross-functionally.

Note: Design Ops refers to the practice of integrating the design team’s workflow into the company’s broader development context. In practical terms, this means the design ops role is responsible for planning and managing the design team’s work and making sure that designers are collaborating effectively with product and engineering teams throughout the development process.

Back to discoverability! That communication layer could be introduced in a few ways, depending on how your team is structured. Using the channel within Slack or Teams (or whichever messaging tool you use) example from before, we can have a centralized communication channel about this very specific job — components.


Within this channel, the person/s responsible for the system is encouraged to frequently post updates with as much context as is humanly possible.

For example:

  • What are you working on now?
  • What updates should we expect within the next day/week/month?
  • Who is working on what components?
  • How can the wider team support or contribute to this work?
  • Are there any blockers?

Starting with these questions and answers in a public forum will encourage wider communication and understanding around the system to ultimately force a wider adoption of what’s being worked on and when.

Secondly, within the tools themselves, we can be over-the-top communicative whilst we create. Making heavy use of the version history feature within Figma, we can add very intentional timestamps on activity, spelling out exactly what is happening, when, and by whom. Going into the weeds here to effectively use that section of the file as mini-documentation can allow your collaborators (even those without a paid license!) to get as close to the work as possible.

Additionally, if you are using a branch-based workflow for component management, we encourage you to use the branch descriptions as a way to achieve a similar result.

Note: If you are investigating a branch workflow within a large design organization, I recommend using branches for smaller fixes or updates, and creating new files for larger “major” releases. This will allow for a future world where one set of designers needs to work on v1 while others use v2.

Naming Conventions

Undoubtedly, the hardest part of design system work is naming things. What I call a dropdown, you may call a select, and someone else may call an option list. This makes it extremely difficult to align an entire team and encourage one way of naming anything.

However, there are techniques we can employ to serve as many users of our system as possible. Whether it’s using Figma features or working more closely with our development team, there is a world in which people can find the components they need, when they need them.

I’m personally a big fan of prioritizing discoverability over complexity at every stage of design, from how we name our components to frames to entire files. What this means is that, more often than not, we’re better off introducing verbosity, rather than trying to make everything as concise as possible.

This is probably best served with an example!

What would you call this component?

  • Dropdown.
  • Popover.
  • Actions.
  • Modal.
  • Something else?

Of course, context is very important when naming anything, which is why the task is so hard. We are currently unaware of how this component will be used, so let’s introduce a little bit of context to the situation.

Has your answer changed? The way I look at this component is that, although the structure is quite generic — rounded card, inner list with icons — the usage is very specific. This is to be used on a search filter to provide the user with a set of actions that they can carry out on the results. You may:

  1. Import a predefined search query.
  2. Export your existing search query.
  3. Share your search query.

For this reason, why would we not call this something like search actions? This is a simplistic example (and doesn’t account for the many other areas of the product that this component could be used), but maybe that’s okay. As we build and mature our system, we will always hit walls where one component needs to — or can be — used in many other places. It’s at this time that we make decisions about scalability, not before we have usage.

Other options for this specific component could be:

  • Action list.
  • Search dropdown.
  • Search / Popover.
  • Filter menu.

Logical

Have you ever been in a situation where you searched for a component in the Figma Assets panel and not been sure of its purpose? Or have you been unsure of the customization possible within its settings? We all have!

I tend to find that this is the result of us (as design systems maintainers) optimizing for creation and not usage. This is so important, so I’ll say it again:

We tend to optimize for the people building the system, not for the people using it.

The consumers/users of a system will always far outweigh the people managing it. They will also be further away from the decisions that went into making the component and the reasons behind why it is built the way it is.

Here are a few hypothetical questions worth thinking through:

  • Why is this component called a navbar, and not a tab-bar?
  • Why does it have four tabs by default and not three, like the production app?
  • There’s only one navbar in the assets list, but we support many products. Where are the others?
  • How do I use the dark mode version of this component?
  • I need a tablet version of the table component. Should I modify this one, or do we have an alternative version ready to be used?

These may seem like familiar questions to you. And if not, congratulations, you’re doing a great job!

Figma makes it easy to build complexity into components, arguably too easy. I’m sure you’ve found yourself in a situation where you’ve created a component set with too many permutations, or ended up in a world where the properties applied to a component turn the component properties panel into what I like to call “prop soup.”

A good design system should be logical (usable). To me, usability means:

  1. Speed of discovery, and
  2. Efficient implementation of components.

The speed of discovery and the efficient implementation of components can — brace yourself! — sometimes mean repetition. That goes very much against our goal of a “don’t repeat yourself” system, and it will horrify those of you who yearn for a world in which consolidation is a core design system principle. But bear with me for a bit longer.

The canvas is a place for ideation and flexibility and a place where we need to encourage the fostering of new ideas fast. What isn’t fast is a confused designer. As design system builders, we then need to work in a world where components are customizable but only after being understood. And what is not easily understandable is a component with an infinite number of customization options and a generic name. What is understandable is a compact, descriptive, and lightweight component.

Let’s take an example. Who doesn’t love… buttons? (I don’t, but this atomic example is the simplest way to communicate our problem.)

Here, we have one component variant button with:

  • Four intentions (primary, secondary, error, warning);
  • Two types (fill, stroke);
  • Three different sizes (large, medium, small);
  • And four states (default, hover, focus, inactive).

Even while listing those out, we can see a problem. The easy way to think this through is by asking yourself, “Is a designer likely to need all of these options when it comes to usage?”

With this example, it might look like the following question: “Will a designer ever need to switch between a primary button and a warning one?” Or are they actually two separate use cases and, therefore, two separate components?

To probably no one’s surprise, my preference is to split that component right down into its intended usage. That would then mean we have one variant for each component type:

  1. Primary,
  2. Secondary,
  3. Error (Destructive),
  4. Warning.

Four components for one button! Yes, that’s right, and there are two huge benefits if you decide to go this way:

  1. The Assets panel becomes easier to navigate, with each primary variant within each set being visually surfaced.
  2. The designer removes one decision from component usage: what type to use.

Let’s help set our (design) teams up for success by removing decisions! “Design” was intentionally placed within brackets there because, as you’re probably rightly thinking, we lose parity with our coded components here. You know what? I think that’s totally fine. Documentation and component handover happen once per component, and we don’t need to sacrifice usability within the design tool to satisfy front-end framework composability. Documentation is still a vital part of a design system, and we can communicate component permutations in a way that meets design and development in the middle.
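That middle ground can be as simple as a lookup table in the handover documentation that maps each split design component to the props of the single coded component. Below is a hypothetical TypeScript sketch; the component name, prop names, and values are illustrative assumptions, not any real library’s API.

```typescript
// One coded <Button> accepts an `intent` prop; the design library
// ships four separate components instead. This map documents how the
// split design variants line up with the single coded component.
type Intent = "primary" | "secondary" | "error" | "warning";

interface ButtonProps {
  intent: Intent;
  type?: "fill" | "stroke";
  size?: "large" | "medium" | "small";
}

// Design component name (as it appears in the Figma Assets panel)
// mapped to the coded component's props.
const designToCode: Record<string, ButtonProps> = {
  "Button / Primary": { intent: "primary" },
  "Button / Secondary": { intent: "secondary" },
  "Button / Error": { intent: "error" },
  "Button / Warning": { intent: "warning" },
};

// Handover helper: given a design component name, emit the coded usage.
function codedUsage(designName: string): string {
  const props = designToCode[designName];
  if (!props) throw new Error(`Unknown design component: ${designName}`);
  return `<Button intent="${props.intent}" />`;
}
```

For example, `codedUsage("Button / Error")` yields `<Button intent="error" />`, so the designer picking the Error button in Figma and the developer writing the coded button stay in sync without forcing either side to mirror the other’s structure.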

Auto Layout

Component usability is also heavily informed by the decision to use auto layout or not. It can be hard to grapple with, but my advice here is to go all in on using auto layout. Not only does it help to remove the need for eyeballing measurements within production designs, but it also helps remove the burden of spacing for non-design partners. If your copywriter needs to edit a line of text within a component, they can feel comfortable doing so with the knowledge that the surrounding content will flow and not “break” the design.

Note: Using padding and gap variables within main components can remove the “Is the spacing correct?” question from component composition.

Auto layout also provides us with some guardrails with regard to spacing and margins. We strive for consistency within systems, and using auto layout everywhere pushes us as far as possible in that direction.

Specific

We touched on this in the “usable” section, but naming conventions are so important for ensuring the discoverability and adoption of components within a system.

The more specific we can make components, the more likely they are to be used in the right place. Again, this may mean introducing inefficiencies within the system, but I strongly believe that efficiency is a long-term play and something we reach gradually over time. This means being incredibly inefficient in the short term and being okay with that!

Specific to me means calling a header a header, a filter a filter, and a search field a search field. Doesn’t it seem obvious? You’re right. It seems obvious, but if my Twitter “name that component” game has taught me anything, it’s that naming components is hard.

Let’s take our search field example.

  • Apple’s Human Interface Guidelines call it a “search field.”
  • Material Design calls it a “search bar.”
  • Microsoft Fluent 2 doesn’t have a search field. Instead, it has a “combobox” component with a typeahead search function.

Sure, the intentions may be different between a combobox and a search field or a search bar, but does your designer or developer know about these subtle nuances? Are they aware of the different use cases when searching for a component to use? Specificity here is the sharpest way for us to remove these questions and ensure efficiency within the system.

As I said before, this may mean that we end up performing inefficient activities within the system. For example, instead of bundling combobox and search into one component set with toggle-able settings, we should split them. This means searching for “search” in Figma would provide us with the only component we need, rather than having to think ahead if our combobox component can be customized to our needs (or not).

Conclusion

It was a long journey! I hope that throughout the past ten thousand words or so, you’ve managed to extract quite a few useful bits of information and advice, and you can now tackle your design systems within Figma in a way that increases the likelihood of adoption. As we know, this is right up there with the priorities of most design systems teams, and I firmly believe that following the principles laid out in this article will help you (as maintainers) sprint towards a path of more funding, more refined components, and happier team members.

And should you need some help or if you have questions, ask me in the comments below, or ping me on Twitter/Posts/Mastodon, and I’ll be more than happy to reply.

Further Reading

  • “Driving change with design systems and process,” by Matt Gottschalk and Aletheia Délivré (Config 2023)
    The conference talk explores in detail how small design teams can use design systems and design operations to give designers the right environment for their work.
  • Gestalt 2023 — Q2 newsletter
    In this article, you will learn about the design systems roadmaps (from the Pinterest team).
  • Awesome Design Tokens
    A project that hosts a large collection of design-token-related articles and links, such as GitHub repositories, articles, tools, Figma and Sketch plugins, and many other resources.
  • “The Ondark Virus” (D’Amato Design blog)
    An important article about naming conventions within design tokens.
  • “API?” (Red Hat Help)
    This article will explain in detail how APIs (Application Programming Interfaces) work, what the SOAP and REST protocols are, and more.
  • “Responsive Web Design,” by Ethan Marcotte (A List Apart)
    This is an old (but gold) article that set the de facto standards in responsive web design (RWD).
  • “Simple design system structure” (FigJam file by Luis Ouriach, CC-BY license)
    For when you need to get started!
  • “Fixed aspect ratio images with variants” (Figma file by Luis Ouriach, CC-BY license)
    Aspect ratios are hard with image fills, so the trick to making them work is to define your breakpoints and create variants for each image. As the image dimensions are fixed, you will have much more flexibility — you can drag the components into your designs and use auto layout.
  • Mitosis
    Write components once, run everywhere; compiles to React, Vue, Qwik, Solid, Angular, Svelte, and others.
  • “Create reusable components with Mitosis and Builder.io,” by Alex Merced
    A tutorial about Mitosis, a powerful tool that can compile code to standard JavaScript in addition to frameworks and libraries like Angular, React, and Vue, allowing you to create reusable components.
  • VueJS — Component Slots (Vue documentation)
    Components can accept properties (which can be JavaScript values of any type), but how about template content?
  • “Magic Numbers in CSS,” by Chris Coyier (CSS-Tricks)
    In CSS, magic numbers refer to values that work under some circumstances but are frail and prone to break when those circumstances change. The article takes a look at some examples so that you know what they are and how to avoid the issues related to their use.
  • Figma component properties (Figma, YouTube)
    In this quick video tip, you’ll learn what component properties are and how to create them.
  • Create and manage component properties (Figma Help)
    New to component properties? Learn how component properties work by exploring the different types, preferred values, and exposed nested instances.
  • Using auto layout (Figma Help)
    Master auto layout by exploring its properties, including resizing, direction, absolute position, and a few others.
  • Add descriptions to styles, components, and variables (Figma Help)
    There are a few ways to incorporate design system documentation in your Figma libraries. You can give styles, components, and variables meaningful names; you can add short descriptions to styles, components, and variables; you can add links to external documentation to components; and you can add descriptions to library updates.
  • “Design system components, recipes, and snowflakes,” by Brad Frost
    Creating things with a component-based mindset right from the start saves countless hours. Everything is (or should be) a component!
  • What is digital asset management? (IBM)
    A digital asset management solution provides a systematic approach to efficiently storing, organizing, managing, retrieving, and distributing an organization’s digital assets.
  • Search fields (Components) (Apple Developer)
    A search field lets people search a collection of content for specific terms they enter.
  • Search — Components Overview (Material Design 3)
    Search lets people enter a keyword or phrase to get relevant information.
  • Combobox — Components (Fluent 2)
    A combobox lets people choose one or more options from a list or enter text in a connected input; entering text will filter options or allow someone to submit a free-form answer.
  • Pharos: JSTOR’s design system serving the intellectually curious (JSTOR)
    Building a design system from the ground up — a detailed account written by the JSTOR team.
  • “Design systems are everybody’s business,” by Alex Nicholls (Director of Design at Workday)
    This is Part 1 in a three-part series that takes a deep dive into Workday’s experience of developing and releasing their design system out into the open. For the next parts, check Part II, “Productizing your design system,” and Part III, “The case for an open design system.”
  • “Design maturity results ’23” (UK Dept. for Education)
    The results of the design maturity survey carried out in the Department for Education (UK), September 2023.
  • “Design Guidance and Standards” (UK Dept. for Education)
    Design principles, guidance, and standards to support people who use the Department for Education services (UK).
  • Sparkbox’s Design Systems Survey, 2022 (5th edition)
    The top three biggest challenges faced by design teams are overcoming technical/creative debt, parity between design and code, and adoption. This article reviews the survey results in detail; 183 respondents maintaining design systems took part.
  • “The hub and spoke design system model,” by Robin Cannon (IBM)
    No design system team can scale enough to support an enterprise-scale business by itself. This article sheds some light on IBM’s hub and spoke model.
  • Building a design system around collaboration, not components (Figma, YouTube)
    It’s easy to focus your design system on the perfect component, missing out on the aspect that’ll ensure your success — collaboration. Louise From and Julia Belling (from Zalando) explain how they created and then effectively scaled their internal design system.
  • Friends of Figma, DesignOps (YouTube interest group)
    This group is about practices and resources that will help your design organization to grow. The core topics are centered around the standardization of design, design growth, design culture, knowledge management, and processes.
  • “Linting meets Design,” by Konstantin Demblin (George Labs)
    The author is convinced that the concept of “design linting” (in Sketch) is groundbreaking for digital design and will remain state-of-the-art for a long time.
  • “How to set up custom design linting in Figma using the Design Lint plugin,” by Daniel Destefanis (Product Design Manager at Discord)
    This is an article about Design Lint — a Figma plugin that loops through layers you’ve selected to help you find missing styles. You can check for errors such as color styles being used in the wrong way, flag components that aren’t published to your library, mark components that don’t have a description, and so on.
  • “Design Systems and Speed,” by Brad Frost
    In this Twitter thread, Brad discusses the seemingly paradoxical relationship between design systems and speed. Design systems make the product work faster. At the same time, do design systems also need to go slower?
  • “Ship Faster by Building Design Systems Slower,” by Josh Clark (Principal, Big Medium)
    Design systems should prioritize quality over speed, but product teams often have “ship at all costs” policies, prioritizing speed over quality. Actually, successful design systems move more slowly than the products they support, and the slower pace doesn’t mean that they have to be the bottleneck in the process.
  • Design Systems, a book by Alla Kholmatova (Smashing Magazine)
    Often, our design systems get out-of-date too quickly or just don’t get enough traction in our companies. What makes a design system effective? What works and what doesn’t work in real-life products? The book is aimed mainly at small to medium-sized product teams trying to integrate modular thinking into their organization’s culture. Visual and interaction designers, UX practitioners, and front-end developers particularly will benefit from the knowledge in this book.
  • “Making Your Collaboration Problems Go Away By Sharing Components,” by Shane Hudson (Smashing Magazine)
    Recently, UXPin has extended its powerful Merge technology by adding npm integration, allowing designers to sync React component libraries without requiring any developer input.
  • “Taking The Stress Out Of Design System Management,” by Masha Shaposhnikova (Smashing Magazine)
    In this article, the author goes over five tips that make it easier to manage a design system while increasing its effectiveness. This guide is aimed at smaller teams.
  • “Around The Artifacts Of Design Systems (Case Study),” by Dan Donald (Smashing Magazine)
    Like many things, a design system isn’t ever a finished thing but a journey. How we go about that journey can affect the things we produce along the way. Before diving in and starting to plan anything out, be clear about where the benefits and the risks might lie.
  • “Design Systems: Useful Examples and Resources,” by Cosima Mielke (Smashing Magazine)
    In complex projects, you’ll sooner or later get to the point where you start to think about setting up a design system. In this article, some interesting design systems and their features will be explored, as well as useful resources for building a successful design system.

Understanding and Using Docker Containers in Web Development: A Guide

Introduction to Docker and Containers

Docker is a developer's equivalent of a magic box. It enables people to build and run applications in an orderly and efficient manner. Docker employs lightweight containers rather than huge virtual machines. These containers act as mini-packages for applications, allowing them to be moved around and run on multiple systems. Docker has greatly simplified the lives of developers!

  • Docker allows developers to package applications with all the necessary parts, such as libraries and other dependencies, and ship them out as one package.
  • Using containers ensures consistency across multiple development, staging, and production environments.
  • Understanding how Docker works is crucial for modern web development workflows.

Docker and Containers Have Changed the Game in Web Development

They have done so by:

  • Providing an isolated environment for applications that minimizes conflicts between different working environments.
  • Enhancing the scalability and efficiency of applications, as containers are more lightweight than traditional virtual machines.
  • Making it easier to update and deploy code changes by streamlining the CI/CD pipeline.

The Main Components of Docker That You Should Be Familiar With

  • Docker Images: These are the blueprints for containers, defining the environment and what it contains.
  • Docker Containers: Instances of Docker images that run the applications.
  • Docker Daemon: The background service running on the host that manages the building, running, and distribution of Docker containers.
  • Docker Client: The tool you use to communicate with the Docker daemon and tell it what to do.
  • Docker Hub (formerly Docker Store): A registry and marketplace for sharing and managing Docker images.

Understanding Docker and container technology is pivotal for any web developer looking to stay current with the trends and best practices in software development.

Setting up Docker for Web Development

Setting up Docker for web development is simple and useful, and it streamlines your day-to-day work. Here's how to go about it:
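As a minimal sketch: install Docker Desktop (or Docker Engine on Linux), add a Dockerfile to your project root, build an image, and run it as a container. The base image, port, and commands below assume a hypothetical Node.js app; swap them for your own stack.

```dockerfile
# Hypothetical Node.js web app; adjust the base image and commands for your stack.
FROM node:20-alpine

WORKDIR /app

# Install dependencies first so this layer is cached between rebuilds.
COPY package*.json ./
RUN npm ci

# Copy the rest of the source and document the app's port.
COPY . .
EXPOSE 3000

CMD ["npm", "start"]
```

From the project root, `docker build -t my-web-app .` builds the image, and `docker run -p 3000:3000 my-web-app` starts a container with the app reachable on localhost:3000.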

Creating And Maintaining A Voice Of Customer Program

For those involved in digital and physical product development or leadership, consider a Voice of Customer (VoC) program. A VoC program systematically gathers and analyzes customer insights, channeling user opinions into actionable intelligence. VoC programs use surveys, analytics, interviews, and more to capture a broad range of customer sentiment. When implemented effectively, a VoC program transforms raw feedback into a roadmap for strategic decisions, product refinement, and service enhancements.

By proactively identifying issues, optimizing offerings for user satisfaction, and tailoring products to real-time demand, VoC programs keep companies ahead. Moreover, in a world of fleeting consumer loyalty, such programs build trust and enhance the overall brand experience. VoC has been a standard CX practice that UX and product teams can utilize to their advantage. We’ll focus on VoC for digital products for this article. However, the methods and lessons learned are equally applicable to those working with physical products.

Successful product teams and User Experience (UX) practitioners understand that customer feedback is invaluable. It guides decisions and fosters innovation for products and services. Whether it’s e-commerce platforms refining user interfaces based on shopper insights or social media giants adjusting algorithms in response to user sentiments, customer feedback is pivotal for digital success. Listening, understanding, and adapting to the customer’s voice are key to sustainable growth.

The Role Of UX Research In Capturing The Voice Of The Customer

UX research serves as the bridge that spans the chasm between a company’s offerings and its customers’ perspectives. UX research plays a pivotal role in capturing the multifaceted VoC. Trained UX researchers transform raw feedback into actionable recommendations, guiding product development and design in a direction that resonates authentically with users.

Ultimately, UX research is the translator that converts the diverse, nuanced VoC into a coherent and actionable strategy for digital companies.

Setting Up A Voice Of Customer Program

Overview Of Steps

We’ve identified six key steps needed to establish a VoC program. At a high level, these steps are the following:

  1. Establishing program objectives and goals.
  2. Identifying the target audience and customer segments.
  3. Selecting the right research methods and tools.
  4. Developing a data collection framework.
  5. Analyzing and interpreting customer feedback.
  6. Communicating insights to stakeholders effectively.

We’ll discuss each of these steps in more detail below.

Establishing Program Objectives And Goals

Before establishing a VoC program, it’s crucial to define clear objectives and goals. Are you aiming to enhance product usability, gather insights for new feature development, or address customer service challenges? By outlining these goals, you create a roadmap that guides the entire program. You will also avoid taking on too much and maintain a focus on what is critical when you state your specific goals and objectives. Specific objectives help shape research questions, select appropriate methodologies, and ensure that the insights collected align with the strategic priorities of the company.

You should involve a diverse group of stakeholders in establishing your goals. You might have members of your product teams and leadership respond to a survey to help quantify what your team and company hope to get out of a VoC. You might also hold workshops to help gain insight into what your stakeholders consider critical for the success of your VoC. Workshops can help you identify how stakeholders might be able to assist in establishing and maintaining the VoC and create greater buy-in for the VoC from your stakeholders. People like to participate when it comes to having a say in how data will be collected and used to inform decisions. If you come up with a long list of goals that seem overwhelming, you can engage key stakeholders in a prioritization exercise to help determine which goals should be the VoC focus.

Identifying The Target Audience And Customer Segments

Once you create clear objectives and goals, defining the target audience and customer segments will be important. For example, you decide your objective is to understand conversion rates between your various customer segments. Your goal is to increase sign-up conversion. You would want to determine if your target audience should be people who have purchased within a certain time frame, people who have never made a purchase, people who have abandoned carts, or a mix of all three.

Analytics can be critical for creating shortcuts at this point. You might start by looking at analytics data collected on the sign-up page: which age groups are not signing up (and why), whether there is evidence that certain segments are more likely to abandon carts, and which segments are less likely to visit your site at all. Then, based on these clear objectives and goals, and on the target audience and customer segments you’ve identified, you can select the right research methods and tools to collect data from the segment(s) you’ve deemed critical to collect feedback from.

Selecting The Right Research Methods And Tools

The success of a VoC program hinges on the selection of appropriate research methods and tools. Depending on your objectives, you might employ a mix of quantitative methods like surveys and analytics to capture broad trends, along with qualitative methods like user interviews and usability testing to unearth nuanced insights. Utilizing digital tools and platforms can streamline data collection, aggregation, and analysis. These tools, ranging from survey platforms to sentiment analysis software, enhance efficiency and provide in-depth insights.

The key is to choose methods and tools that align with the program’s goals and allow for a holistic understanding of the customer’s voice.

Your UX researcher will be critical in helping to identify the correct methods and tools for collecting data.

For example, a company could be interested in measuring satisfaction with its current digital experience. If the company does not currently capture any metrics, a mixed-methods approach could be used: first, understand customers’ current attitudes towards the digital experience at a large scale, and then dive deeper at a smaller scale after analyzing the survey. The quantitative survey could contain traditional metrics, such as Net Promoter Score (NPS), which attempts to measure customer loyalty using a single item, and/or the System Usability Scale (SUS), which attempts to measure system usability using a brief questionnaire. The data collected would then drive the types of questions asked in the qualitative interviews.
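To make those two metrics concrete, here is how they are conventionally scored; the function names below are mine, but the formulas are the standard NPS and SUS calculations.

```typescript
// NPS: respondents rate 0-10; promoters (9-10) minus detractors (0-6),
// each as a percentage of all respondents. The result ranges from -100 to 100.
function netPromoterScore(ratings: number[]): number {
  const promoters = ratings.filter((r) => r >= 9).length;
  const detractors = ratings.filter((r) => r <= 6).length;
  return ((promoters - detractors) / ratings.length) * 100;
}

// SUS: ten items rated 1-5. Odd-numbered items contribute (rating - 1),
// even-numbered items contribute (5 - rating); the sum is scaled by 2.5
// to give a 0-100 score.
function susScore(answers: number[]): number {
  if (answers.length !== 10) throw new Error("SUS needs exactly 10 answers");
  const sum = answers.reduce(
    (acc, rating, i) => acc + (i % 2 === 0 ? rating - 1 : 5 - rating),
    0
  );
  return sum * 2.5;
}
```

For instance, ratings of [10, 9, 8, 7, 6, 0] yield an NPS of 0 (two promoters cancel two detractors), and a SUS questionnaire answered 3 across the board scores 50, the scale’s midpoint.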

To collect the survey information, an online survey tool could be used that can draft and calculate metric questions for you. Many tools have integrated analysis that allows users to do statistical analysis of quantitative data collected and light semantic reviews on qualitative data. You can share the survey data easily with your stakeholder groups and then shape an interview protocol that will allow you to reach out to a smaller group of users to get deeper insight into the findings from the survey.

Table 1: Commonly used UX research methods to consider as part of a VoC program

User interviews
  When to use:
  • Gaining an in-depth understanding of user needs, motivations, and behaviors.
  • Uncovering hidden pain points and frustrations.
  • Generating new ideas and solutions.
  Data collected: Qualitative (e.g., quotes, stories, opinions).

Surveys
  When to use:
  • Gathering quantitative data from a large number of users.
  • Measuring user satisfaction and attitudes.
  • Identifying trends and patterns.
  Data collected: Quantitative (e.g., ratings, rankings, frequencies).

Focus groups
  When to use:
  • Generating a wide range of perspectives on a topic.
  • Exploring controversial or sensitive issues.
  • Gathering feedback on design concepts or prototypes.
  Data collected: Qualitative (e.g., group discussions, consensus statements).

Usability testing
  When to use:
  • Identifying usability problems with a product or service.
  • Evaluating the effectiveness of design solutions.
  • Gathering feedback on user flows and task completion.
  Data collected: Qualitative and quantitative (e.g., task completion rates, error rates, user feedback).

Analytics
  When to use:
  • Tracking user behavior on a website or app.
  • Identifying trends and patterns in user engagement.
  • Measuring the effectiveness of marketing campaigns.
  Data collected: Quantitative (e.g., page views, time on site, conversion rates).

Developing A Data Collection Framework

Collecting feedback requires a structured approach to ensure consistency and reliability. Developing a data collection framework involves creating standardized surveys, questionnaires, and interview protocols that gather relevant information systematically. A well-designed framework ensures you capture essential data points while minimizing biases or leading questions. This framework becomes the backbone of data collection efforts, enabling robust analysis and comparison of feedback across various touchpoints and customer segments.

Your data collection framework should include the following:

  • Objectives and research questions.
  • Data sources, whether it’s surveys, user interviews, website analytics, or any other relevant means.
  • Data collection methods with an emphasis on reliability and validity.
  • A robust data management plan. This includes organizing data in a structured format, setting up appropriate storage systems, and ensuring data security and privacy compliance, especially if dealing with sensitive information.
  • Timing and frequency of data collection, as well as the duration of your study. A well-thought-out schedule ensures you gather data when it’s most relevant and over a suitable time frame.
  • A detailed data analysis plan that outlines how you will process, analyze, and draw insights from the collected data.

Analyzing And Interpreting Customer Feedback

Collecting data is only half the journey; the real value lies in analyzing and interpreting the data collected. This involves processing both quantitative data (such as survey responses) and qualitative data (such as open-ended comments). Data analysis techniques like sentiment analysis, thematic coding, and pattern recognition help distill valuable insights.

These insights unveil customer preferences, emerging trends, and pain points that might require attention. Your UX researcher(s) can take the lead, with assistance from other team members, in helping to analyze your data and interpret your findings. The interpretation phase transforms raw data into actionable recommendations, guiding decision-making for product improvements and strategic initiatives.

Communicating Insights To Stakeholders Effectively

The insights derived from a VoC program hold significance across various levels of the organization. Effectively communicating these insights to stakeholders is critical for driving change and garnering support. Presenting findings through clear, visually engaging reports and presentations helps stakeholders grasp the significance of customer feedback. Additionally, highlighting actionable recommendations and illustrating how they tie back to strategic objectives empowers decision-makers to make informed choices. Regularly updating stakeholders on progress, outcomes, and improvements reinforces the ongoing value of the VoC program and fosters a culture of customer-centricity within the organization.

Key Components Of A Successful Voice Of Customer Program

Building A Culture Of Feedback Within The Organization

A successful VoC program is rooted in an organizational culture that prioritizes feedback at all levels. This culture begins with leadership setting the example by actively seeking and valuing customer opinions. When employees perceive that feedback is not only encouraged but also acted upon, it fosters an environment of collaboration and innovation. This culture should extend across departments, from marketing to development to customer service, ensuring that every team member understands the role they play in delivering exceptional experiences. By integrating customer insights into the company’s DNA, a feedback culture reinforces the notion that everyone has a stake in the customer’s journey.

Start small and incorporate research activities into product development to start harnessing a user-centric approach. Develop reports that showcase the business purpose, findings, and recommendations that can be presented to the product development team and stakeholders, but also to other departments to show the value of VoC research. Lastly, provide opportunities to collaborate with other departments to help them incorporate VoC into their daily activities. As a result, a culture of incorporating a VoC program becomes reinforced.

There are many ways you can go about building this culture. Some specific examples we’ve used include facilitating cross-product or cross-discipline meetings to plan research and review findings, workshops bringing together stakeholders from various lines of business or roles to help shape the research agenda, and perhaps most importantly, identifying and utilizing a champion of insights to promote findings throughout the organization. Ideally, your champion would hold a position that allows them to have exposure horizontally across your business and vertically up to various key stakeholders and members of leadership. Your champion can help identify who should be attending meetings, and they can also be utilized to present findings or have one-off conversations with leadership to promote buy-in for your culture of feedback.

Implementing User-friendly Feedback Mechanisms

For a VoC program to thrive, feedback mechanisms must be accessible, intuitive, and seamless for customers. Whether it’s a user-friendly feedback form embedded within an app, a chatbot for instant assistance, or social media channels for open conversations, the channels for providing feedback should reflect the digital preferences of your audience. These mechanisms should accommodate both quantitative and qualitative inputs, enabling customers to share their experiences in a manner that suits them best. A key element here is the simplicity of the process; if users find it cumbersome or time-consuming to provide feedback, the program’s effectiveness can be compromised.

Encouraging Customer Participation And Engagement

Engaging customers is essential for gathering diverse perspectives. Incentivizing participation through rewards, gamification, or exclusive offers can increase engagement rates. Moreover, companies can foster a sense of ownership among customers by involving them in shaping future offerings. Beta testing, user panels, and co-creation sessions invite customers to actively contribute to product development, reinforcing the idea that their opinions are not only valued but directly influence the company’s direction. By making customers feel like valued collaborators, a VoC program becomes a mutually beneficial relationship.

Integrating Feedback Into The Decision-making Process

Customer feedback should not remain isolated; it needs to permeate the decision-making process across all departments. This integration demands that insights gathered through the VoC program are systematically channeled to relevant teams. Product teams can use these insights to refine features, marketers can tailor campaigns based on customer preferences, and support teams can address recurring pain points promptly. Creating feedback loops ensures that customer opinions are not only heard but also translated into tangible actions, demonstrating the organization’s commitment to iterative improvement driven by user insights.

Continuous Improvement And Iteration Of The VoC Program

A VoC program is a journey, not a destination. It requires a commitment to continuous improvement and adaptation. As customer behaviors and preferences evolve, the program must evolve in tandem. Regularly reviewing the program’s effectiveness, incorporating new data sources, and updating methodologies keep the program relevant. This also includes analyzing the program’s impact on KPIs such as customer satisfaction scores, retention rates, and revenue growth. By iterating the program itself, businesses ensure that it remains aligned with changing business goals and the ever-evolving needs of their customers.

Best Practices And Tips For An Effective VoC Program

Creating Clear And Concise Surveys And Questionnaires

The success of a VoC program often hinges on the quality of the surveys and questionnaires used to collect feedback. To ensure meaningful responses, it’s essential to design clear and concise questions that avoid ambiguity. Keep the surveys focused on specific topics to prevent respondent fatigue and make sure that the language used is easily understandable by your target audience. Utilize a mix of closed-ended (quantitative) and open-ended (qualitative) questions to capture both statistical data and rich, contextual insights. Prioritize brevity and relevance to encourage higher response rates and more accurate feedback.

Monitoring Feedback Across Multiple Channels

Customer feedback is shared through diverse channels: social media, email, app reviews, support tickets, and more. Monitoring feedback across these channels is essential for capturing a holistic view of customer sentiment. Centralize these feedback streams to ensure that no valuable insights slip through the cracks. By aggregating feedback from various sources, you can identify recurring themes and uncover emerging issues, allowing for proactive responses and continuous improvement. Note we have focused on digital products. However, if there is a physical component of your experience, such as a brick-and-mortar store, you should be collecting similar feedback from those customers in those settings.

Incorporating User Testing And Usability Studies

Incorporating user testing and usability studies is important to help evaluate an experience with users. While upfront activities like in-depth user interviews can articulate users’ desires and needs for an experience, they do not help evaluate the updated experience. Findings and recommendations from user testing and usability studies should be incorporated into development sprints or backlogs. This will ensure that the experience consistently considers and reflects the VoC.

Ensuring Privacy And Data Security In The VoC Program

As you talk to users and develop your VoC program, you will constantly be collecting data. The data that is shared in reports should always be anonymous. Additionally, creating documentation on how to collect consent and data policies will be very important. If data is not stored properly, you could face penalties and lose the trust of participants for future VoC activities.

Challenges Of Starting A Voice Of Customer Program

If you are committed to starting a VoC program from scratch and then maintaining that program, you are likely to encounter many challenges. Gaining buy-in and commitment from stakeholders is a challenge for anyone looking to establish a VoC program. You’ll need to commit to a concerted effort across various departments within an organization. Securing buy-in and commitment from key stakeholders, such as executives, managers, and employees, is crucial for its success. Without their support, the program may struggle to gain traction and achieve its goals.

Resources are always an issue, so you’ll need to work on securing adequate funding for the program. Establishing and maintaining a VoC program can be a costly endeavor. This includes the cost of software, training, and staff time. Organizations must be prepared to allocate the necessary resources to ensure the success of the program.

Allocating sufficient time and resources to collect, analyze, and act on feedback: collecting, analyzing, and acting on customer feedback can be a time-consuming process. Organizations must ensure that they have the necessary staff and resources in place to dedicate to the VoC program.

Case Study: Successful Implementation Of A VoC Program

We worked with a large US insurance company that was trying to transform its customers’ digital experience around purchasing and maintaining policies. At the start of the engagement, the client did not have a VoC program and had little experience with research. As a result, we spent a lot of time initially explaining to key stakeholders the importance and value of research and using the findings to make changes to their product as they started their digital transformation journey.

We created a slide deck and presentation outlining the key components of a VoC program, how a VoC program can be used to impact a product, methods of UX research, what type of data the methods would provide, and when to use certain methods. We also shared our recommendations based on decades of experience with similar companies. We socialized this deck through a series of group and individual meetings with key stakeholders. We had the benefit of an internal champion at the company who was able to identify and schedule time with key stakeholders. We also provided a copy of the material we’d created to socialize with people who were unable to attend our meetings or who wanted to take more time digesting information offline.

After our meetings, we fielded many questions about the process, including who would be involved, the resources required, timelines for capturing data and making recommendations, and the potential limitations of certain methods. We should have accounted for these types of questions in our initial presentation.

VoC Activity Purpose Involvement
In-Depth User Interviews One-on-one interviews that focused on identified customer’s current usages, desires, and pain points related to the current experience. Additionally, later in the product development cycle, understanding customer’s feelings towards the new product and features that should be prioritized/enhanced in future product releases. Product, sales, and marketing teams
Concept Testing One-on-one concept testing with customers to gather feedback on the high-level design concepts. Product, sales, and marketing teams
Unmoderated Concept Testing Unmoderated concept testing with customers to gather feedback on the materials provided by the business to customers. The goal was to be able to reach out to more people to increase the feedback. Product, sales, and marketing teams
Usability Testing One-on-one usability testing sessions with customers to identify behaviors, usability, uses, and challenges of the new product. Product, sales, and marketing teams
Kano Model Survey This survey is to gather customer input on features from the product backlog to help the business prioritize them for future development. Product Team
Benchmarking Survey This survey is to help understand users’ attitudes toward the digital experience that can be used to compare customers’ attitudes as enhancements are made to it. Metrics that were used include Net Promoter Score, Systematic Suability Scale, and Semantic Differential. Product, sales, and marketing teams

One large component of enhancing the customer’s digital experience was implementing a service portal. To help better understand the needs and desires of users for this service portal, we started with executing in-depth user interviews. This first VoC activity helped to show the value of VoC research to the business and how it can be used to develop a product with a user-centric approach.

Our biggest challenge during this first activity was recruiting participants. We were unable to use a third-party service to help recruit participants. As a result, we had to collect a pool of potential participants through the sales division. As mentioned before, the company didn’t have much exposure to VoC work, so while trying to execute our VoC research and implement a VoC program, any time we worked with a division in the company that hadn’t heard of VoC, we spent additional time walking through what VoC is and what we were doing. Once we explained to the sales team what we were doing, they helped with providing a list of participants for recruitment for this activity and future ones.

After we received a list of potential participants, we crafted an email with a link to a scheduling tool where potential participants could sign up for interview slots. The email would be sent through a genetic email address to over 50+ potential participants. Even though we sent multiple reminder emails to this potential list of participants, we could only gather 5–8 participants for each VoC activity.

As we conducted more VoC activities and presented our findings to larger audiences throughout the company, more divisions became interested in participating in the VoC program. For example, we conducted unmoderated concept testing for a division that was looking to redesign some PDFs. Their goal was to understand customers’ needs and preferences to drive the redesign process. Additionally, we also helped a vendor conduct usability testing for the company to understand how user-friendly an application system was. This was one way to help grow the VoC program within the company as well as their relationship with the vendor.

We needed to do more than foster a culture of gathering customer feedback. As we began to execute the VoC program more extensively within the company, we utilized methods that went beyond simply implementing feedback. These methods allowed the VoC program to continue growing autonomously.

We introduced a benchmarking survey for the new portal. This survey’s purpose was to gauge the customer experience with the new portal over time, starting even before the portal’s release. This not only served as a means to measure the customer experience as it evolved but also provided insights into the maturation of the VoC program itself.

The underlying assumption was that if the VoC program were maturing effectively, the data gathered from the customer experience benchmarking survey would indicate that customers were enjoying an improved digital experience due to changes and decisions influenced more by VoC.

Next, we focused on transferring our knowledge to the company so the VoC program could continue to mature over time without us there. From the beginning, we were transparent about our processes and the creation of material for a VoC activity. We wanted to create a collaborative environment to make sure we understand the company’s needs and questions, but also so the company could understand the process for executing a VoC activity. We accomplished this in part by involving our internal champion at the company in all of the various studies we conducted and conversations we were having with various business units.

We’d typically start with a request or hypothesis by a division of the company. For example, once the portal is launched, what are people’s opinions on the new portal, and what functionality should the business focus on? Then, we would craft draft materials of the approach and questions. In this case, we decided to execute in-depth user interviews to be able to dive deep into users’ needs, challenges, and desires.

Next, we would conduct a series of working sessions to align the questions and ensure that they still align with the company’s goals for the activity. Once we had all the materials finalized, we had them reviewed by the legal team and began to schedule and recruit participants. Lastly, we would conduct the VoC activity, synthesize the data, and create a report to present to different divisions within the company.

We started the transfer of knowledge and responsibilities to the company by slowly giving them some of these tasks related to executing a VoC activity. With each additional new task the company was in charge of, we set additional time aside to debrief and provide details on what was done well and what could be improved upon. The goal was for the individuals at the company to learn by doing and giving them incremental new tasks as they felt more comfortable. Lastly, we provided documentation to leave behind, including a help guide they could refer to when continuing to execute VoC activities.

We concluded our role managing the VoC program by handing over duties and maintenance to the internal champion who had worked with us from the beginning. We stayed engaged, offering a few hours of consulting time each month; however, we were no longer managing the program. Months later, the program is still running, with a focus on collecting feedback on updates being made to products in line with their respective roadmaps. The client has used many of the lessons we learned to continue overcoming challenges with recruiting and to effectively socialize the findings across the various teams impacted by VoC findings.

Overall, while helping to build this VoC program, we learned a lot. One of our biggest pain points was participant recruitment. The process of locating users and asking them to participate in studies was new for the company. We quickly learned that their customers didn’t have a lot of free time, and unmoderated VoC activities or surveys were ideal for the customers as they could complete them on their own time. As a result, when possible, we opted to execute a mixed-methods approach with the hope we could get more responses.

Another pain point was technology. Some of the tools we’d hoped to use were blocked by the company’s firewall, which made scheduling interviews a little more difficult. Additionally, some divisions had access to certain quantitative tools, but the licenses couldn’t easily be used across divisions, so workarounds had to be created to implement some surveys. As a result, being creative and willing to think about short-term workarounds was important when developing the VoC program.

Conclusion

Building a successful VoC program is an ongoing effort. It requires a commitment to continuously collecting, analyzing and acting on customer feedback. This can be difficult to sustain over time, as other priorities may take precedence. However, a successful VoC program is essential for any organization that is serious about improving the customer experience.

We’ve covered the importance of VoC programs for companies with digital products or services. We recommend you take the approach that makes the most sense for your team and company. We’ve provided details of starting and maintaining a VoC program, including the upfront work needed to define objectives and goals, targeting the right audience, choosing the right methods, putting this all in a framework, collecting data, data analysis, and communicating your findings effectively.

We suggest you start small and have fun growing your program. When done right, you will soon find yourself overwhelmed with requests from other stakeholders to expand your VoC to include their products or business units. Keep in mind that your ultimate goal is to create a product that resonates with users and meets their needs. A VoC program ensures you are constantly collecting relevant data and taking actionable steps to use the data to inform your product or business’s future. You can refine your VoC as you see what works well for your situation.

Additional Voice of Customer Resources

Designing Web Design Documentation

As an occasionally competent software developer, I love good documentation. It explains not only how things work but why they work the way they do. At its best, documentation is much more than a guide. It is a statement of principles and best practices, giving people the information they need to not just understand but believe.

As soft skills go in tech land, maintaining documentation is right up there. Smashing has previously explored design documents in a proposal context, but what happens once you’ve arrived at the answer and need to implement? How do you present the information in ways that are useful to those who need to crack on and build stuff?

Documentation often has a technical bent to it, but this article is about how it can be applied to digital design — web design in particular. The idea is to get the best of both worlds to make design documentation that is both beautiful and useful — a guide and manifesto all at once.

An Ode To Documentation

Before getting into the minutia of living, breathing digital design documentation, it’s worth taking a moment to revisit what documentation is, what it’s for, and why it’s so valuable.

The documentation describes how a product, system, or service works, what it’s for, why it’s been built the way it has, and how you can work on it without losing your already threadbare connection with your own sanity.

We won’t get into the nitty-gritty of code documentation. There are plenty of Smashing articles to scratch that itch:

However, in brief, here are a few of the key benefits of documentation.

Less Tech Debt

Our decisions tend to be much more solid when we have to write them down and justify them as something more formal than self-effacing code comments. Having clear, easy-to-read code is always something worth striving for, but supporting documentation can give essential context and guidance.

Continuity

We work in an industry with an exceptionally high turnover rate. The wealth of knowledge that lives inside someone’s head disappears with them when they leave. If you don’t want to reinvent the wheel every time someone moves on, you better learn to love documentation. That is where continuity lies.

Prevents Needless Repetition

Sometimes things are the way they are for very, very good reasons, and someone, somewhere, had to go through a lot of pain to understand what they were.

That’s not to say the rationale behind a given decision is above scrutiny. Documentation puts it front and center. If it’s convincing, great, people can press on with confidence. If it no longer holds up, then options can be reassessed, and courses can be altered quickly.

Documentation establishes a set of norms, prevents needless repetition, allows for faster problem-solving, and, ideally, inspires.

Two Worlds

In 1959, English author C. P. Snow delivered a seminal lecture called “The Two Cultures” (PDF). It is well worth reading in full, but the gist was that the sciences and the humanities weren’t working together and that they really ought to do so for humanity to flourish. To cordon ourselves off with specialisations deprives each group of swathes of knowledge.

“Polarisation is sheer loss to us all. To us as people and to our society. It is at the same time practical and intellectual and creative loss [...] It is false to imagine that those three considerations are clearly separable.”

— Charles Percy Snow

Although Snow himself conceded that “attempts to divide anything into two ought to be regarded with much suspicion,” the framing was and remains useful. Web development is its own meeting of worlds — between designers and engineers, art and data — and the places where they meet are where the good stuff really happens.

“The clashing point of two subjects, two disciplines, two cultures — two galaxies, so far as that goes — ought to produce creative chances.”

— Charles Percy Snow

Snow knew it, Leonardo da Vinci knew it, Steve Jobs knew it. Magic happens when we head straight for that collision.

A Common Language

Web development is a world of many different yet connected specialisations (and sub-specialisations for that matter). One of the key relationships is the one between engineers and designers. When the two are in harmony, the results can be breathtaking. When they’re not, everything and everyone involved suffers.

Digital design needs its own language: a hybrid of art, technology, interactivity, and responsiveness. Its documentation needs to reflect that, to be alive, something you can play with. It should start telling a story before anyone reads a word. Doing so makes everyone involved better: writers, developers, designers, and communicators.

Design documentation creates a bridge between worlds, a common language composed of elements of both. Design and engineering are increasingly intertwined; it’s only right that documentation reflects that.

Design Documentation

So here we are. The nitty-gritty of design documentation. We’re going to cover some key considerations as well as useful resources and tools at your disposal.

The difference between design documentation, technical documentation, and a design system isn’t always clear, and that’s fine. If things start to get a little blurry, just remember the goal is this: establish a visual identity, explain the principles behind it, and provide the resources needed to implement it as seamlessly as possible.

What should be covered isn’t the point of this piece so much as how it should be covered, but what’s listed below ought to get you started:

The job of design documentation is to weave all these things (and more) together. Here’s how.

Share The Why

When thinking of design systems and documentation, it’s understandable to jump to the whats — the fonts, the colors, the components — but it’s vital also to share the ethos that helped you to arrive at those assets at all.

Where did this all come from? What’s the vision? The guiding principles? The BBC does a good job of answering these questions for Global Experience Language (GEL), its shared design framework.

On top of being public-facing (more on that later), the guidelines and design patterns are accompanied by articles and playbooks explaining the guiding principles of the whole system.

Include proposal documents, if they exist, as well as work practices. Be clear about who the designs are built for. Just about every system has a target audience in mind, and that should be front and center.

Cutting the guiding principles is like leaving the Constitution out of a US history syllabus.

Make Its Creation Is A Collaborative Process

Design systems are big tents. They incorporate design, engineering, copywriting, accessibility, and even legal considerations — at their best anyway.

All of those worlds ought to have input in the documentation. The bigger the company/project, the more likely multiple teams should have input.

If the documentation isn’t created in a collaborative way, then what reason do you have to expect its implementation to be any different?

Use Dynamic Platforms

The days are long gone when brand guidelines printed in a book are sufficient. Much of modern life has moved online, so too should guidance for its documentation. Happily (or dauntingly), there are plenty of platforms out there, many with excellent integrations with each other.

Potential resources/platforms include:

There can be a chain of platforms to facilitate the connections between worlds. Figma can lead into Storybook, and Storybook can be integrated directly into a project. Embrace design documentation as an ecosystem of skills.

Accommodate agile, constant development by integrating your design documentation with the code base itself.

Write With Use Cases In Mind

Although the abstract, philosophical aspects of design documentation are important, the system it described is ultimately there to be used.

Consider your users’ goals. In the case of design, it’s to build things consistent with best practices. Show readers how to use the design guidelines. Make the output clear and practical. For example,

  • How to make a React component with design system fonts;
  • How to choose appropriate colors from our palette.

As we’ve covered, the design breaks down into clear, recognizable sections (typography, color, and so on). These sections can themselves be broken down into steps, the latter ones being clearly actionable:

  • What the feature is;
  • Knowledge needed for documentation to be most useful;
  • Use cases for the feature;
  • Implementation;
  • Suggested tooling.

The Mailchimp Pattern Library is a good example of this in practice. Use cases are woven right into the documentation, complete with contextual notes and example code snippets, making the implementation of best practices clear and easy.

Humanising Your Documentation, a talk by Carolyn Stranksy, provides a smashing overview of making documentation work for its users.

Documentation should help people to achieve their goals rather than describe how things work.

As StackOverflow founder Jeff Atwood once put it, “A well-designed system makes it easy to do the right things and annoying (but not impossible) to do the wrong things.”

Use Case Driven Documentation” by Tyner Blain is a great breakdown of this ethos, as is “On Design Systems: Sell The Output, Not The Workflow” by our own Vitaly Friedman.

Language

The way things are said is important. Documentation ought to be clear, accessible, and accepting.

As with just about any documentation, give words like ‘just’, ‘merely’, and ‘simply’ a wide berth. What’s simple to one person is not always to another. Documentation should inform, not belittle. “Reducing bias in your writing” by Write the Docs gives excellent guidance here.

Another thing to keep in mind is the language you use. Instead of using “he” or “she,” use “one,” “they,” “the developer,” or some such. It may not seem like a big deal to one (see what I did there), but language like that helps reinforce that your resources are for everyone.

More generally, keep the copy clear and to the point. That’s easier said than done, but there are plenty of tools out there that can help tidy up your writing:

  • Alex, a tool for catching insensitive, inconsiderate writing;
  • Write Good, an English prose linter.

In a previous Smashing article, “Readability Algorithms Should Be Tools, Not Targets,” I shared a wariness about tools like Grammarly and Hemingway Editor dictating how one writes, but used judiciously, they can still help.

Also, I can never resist a good excuse to share George Orwell’s rules for language:

  1. Never use a metaphor, simile, or other figure of speech that you are used to seeing in print.
  2. Never use a long word where a short one will do.
  3. If it is possible to cut a word out, always cut it out.
  4. Never use the passive where you can use the active.
  5. Never use a foreign phrase, a scientific word, or a jargon word if you can think of an everyday English equivalent.
  6. Break any of these rules sooner than say anything outright barbarous.

Books like The Elements of Style (PDF) by William Strunk Jr are good to be familiar with, too. Keep things informative but snappy.

Make It Beautiful

Design documentation has a lot more credibility if it’s walking the walk. If it looks like a hot mess, what are the chances of it being taken seriously?

Ideally, you should be showcasing a design ethos, not just explaining it. NASA showed way back in 1976 (PDF) that manuals can themselves be beautiful. The Graphics Standards Manual by Richard Danne and Bruce Blackburn feels like a creative work in its own right.

Show the same care and attention to detail in your design documentation that you expect users to show in applying it. Documentation should be the first and best example of it in action.

Make your documentation easy to navigate and search. The most wonderful resources in the world aren’t doing anyone much good if they can’t be found. It’s also a splendid opportunity to show information architecture best practices in action.

Publish It

Once you’ve gone through the trouble of creating a design system and explaining how it works, why keep that to yourself? Publishing documentation and making it freely available for anyone to browse is a fantastic final polish.

Here at the Guardian, for example, our Source design system Storybook can be viewed by anyone, and its code is publicly available on GitHub. As well as being a proving ground for the system itself, it creates a space for knowledge sharing.

There are plenty of fantastic examples of publicly available design documentation in the Design Systems Gallery — a great place to browse for inspiration and guidance.

What’s more, if there are stories from the formation of your system, writing articles or blog posts are also totally legit ways of documenting it. What did the New York Times do when they developed a design system? They wrote an article about it, of course.

Publishing design documentation — in all its forms — is a commitment, but it’s also a statement of purpose. Why not share something beautiful, right?

And Maintain It

This is all well and good, I hear you say, arms crossed and brow furrowed, but who’s going to keep all this stuff up to date? That’s all the time that could be spent making things.

I hear you. There are reasons that Tweets (Xs?) like this make the rounds from time to time:

Yes, it requires hard work and vigilance. The time, effort, and heartache you’ll save by having design documentation will be well worth the investment of those same things.

The better integrated the documentation is with the projects it guides, the more maintenance will take care of itself. As components and best practices change, as common issues arise and are ironed out, the system and its documentation can evolve in kind.

To spare you the suspense, your design documentation isn’t going to be perfect off the bat. There will be mistakes and situations that aren’t accounted for, and that’s fine. Own them. Acknowledge blindspots. Include ways for users to give feedback.

As with most things digital, you’re never really “done.”

Start Small

Such thorough, polished design documentation can seem like a deterrent, something only those with deep pockets can make. It may also seem like an unjustifiable investment of time. Neither has to be true.

Documentation of all forms saves time in the long run, and it makes your decisions better. Whether it’s a bash script or a newsletter signup component, you scrutinize it that little bit more when you commit to it as a standard rather than a one-off choice. Let a readme-driven ethos into your heart.

Start small. Choose fonts and colors and show them sitting together nicely on your repo wiki. That’s it! You’re underway. You will grow to care for your design documentation as you care for the project itself because they are part of each other.

Go forth and document!

Facilitating Inclusive Online Workshops (Part 2)

In the first part of the series, we defined inclusivity and how it contributes to enriching the workshop experience. We established that inclusivity is about ensuring everyone has an equal opportunity to participate and contribute, regardless of their background or identity. It goes beyond merely having diversity in attendance. It’s about creating an environment where different perspectives are valued and used to drive innovative outcomes.

In this second part, I will introduce you to the principles of an inclusive workshop through the acronym P.A.R.T.S. (which stands for Promote, Acknowledge, Respect, Transparency, and Share). Once the principle is explained, we will dive into what you can do before, during, and after the workshop to implement it.

The P.A.R.T.S. Principle

Often, we fall into the trap of thinking, “I’ve got a mixed group of folks here. My inclusivity job is done!”

Yes, having a diverse set of individuals is often an essential first step. But it’s just that — a first step. It’s like opening the door and inviting people in. However, the real task begins after the guests have arrived. That’s when you need to ensure they feel welcome, heard, and valued.

As a facilitator, how can you make sure that people feel safe to express their ideas and participate actively during the workshop? Here’s where the P.A.R.T.S. principle comes in.

P.A.R.T.S. is an acronym that encapsulates five key principles that can form the foundation of any inclusive workshop: Promote, Acknowledge, Respect, Transparency, and Share.

P — Promote

Promote active participation from all attendees.

This begins with creating an environment where participants feel at ease sharing their ideas, opinions, and experiences. As a facilitator, your role is to set this tone from the beginning. One practical way to promote participation is by establishing ground rules that encourage everyone to contribute. Another is to use facilitation techniques that draw out quieter participants, such as a silent brainstorming session, where participants develop ideas on their own before sharing, or a round-robin format, where everyone gets a turn to speak.

A — Acknowledge

Acknowledging participants’ contributions validates their input and makes them feel heard and valued.

This can be as simple as saying, “Thank you for sharing,” or “That’s an interesting perspective.” It’s also about demonstrating that you’ve understood their input by summarizing or paraphrasing what they’ve said. By doing this, you not only confirm their feelings of being heard but also model listening behavior for other participants.

R — Respect

Respect for all ideas, experiences, and perspectives is fundamental to an inclusive workshop.

This starts with setting expectations that all ideas are welcome, no matter how outside-the-box they may seem. It also means respecting the varied communication styles, personalities, and cultural backgrounds of the participants. As a facilitator, you should encourage respect by addressing any inappropriate comments or behaviors immediately and decisively.

T — Transparency

Transparency involves clear and open communication.

As a facilitator, it’s essential to articulate the workshop’s goals and processes clearly, address questions and concerns promptly, and keep channels for feedback open and responsive. This can be done by stating the agenda upfront, explaining the purpose of each activity, and regularly checking in with participants to ensure they’re following along.

S — Share

Share the workshop’s objectives, expectations, and agenda with all participants.

This shared understanding guides the workshop process and provides a sense of direction. It also empowers participants to take ownership of their contributions and the workshop outcomes.

The P.A.R.T.S. principle is a high-level principle you can try to implement in your workshop to make sure that all voices are heard, but to guide you further into how the principle can be used, here are some practical steps you can follow before, during, and after the workshop.

Applying The P.A.R.T.S. Principle: Before And During The Workshop

Step 1. Set The Stage

Setting the stage for your workshop goes beyond just a simple introduction. This is the point at which you establish the environment and set the tone for the entire event. For example, you can set rules like: “One person speaks at a time,” “Respect all ideas,” “Challenge the idea, not the person,” and so on. Clearly stating these rules before you start will help create an environment conducive to open and productive discussions.

It’s important to let participants know that every workshop has its “highs” and “lows.” Make it clear at the outset that these fluctuations in pace and energy are normal and are part of the process. Encourage participants to be patient and stay engaged through the lows, as these can often lead to breakthroughs and moments of high productivity later, during the highs.

Step 2. Observe The Participants

As a facilitator, it’s essential for you to observe and understand the dynamics of the group — to ensure everyone is engaged and participating effectively. Below, I’ve outlined a simpler approach to participant observation that involves looking for non-verbal cues, tracking participation levels, and paying attention to reactions to the content.

Here are a few things you should be paying attention to:

  • Non-verbal cues
    Non-verbal cues can be quite telling and often communicate more than words. Pay attention to participants’ body language as captured by their cameras, such as their posture, facial expressions, and eye contact. This also applies to in-person workshops where it is, in fact, much easier to keep track of the body language of participants. For instance, leaning back or crossing arms might suggest disengagement, while constant eye contact and active note-taking might indicate interest and engagement. When you’re facilitating a remote workshop (and there is no video connection, so you won’t have access to the usual body language indicators), pay attention to the use of emojis, reactions, and the frequency of chat activity. Also, look for signals that people want to speak; they might be unmuting themselves, using the “raise hand” button, or physically raising their hands.
  • Participation levels
    Keep track of who is contributing to the discussion and how often. If you notice a participant hasn’t contributed in a while, you might want to encourage them to share their thoughts. You could ask, “We haven’t heard from you yet. Would you like to add something to the discussion?” Conversely, if someone seems to be dominating the conversation, you could say, “Let’s hear from someone who hasn’t had a chance to speak yet.” It’s all about ensuring balanced participation where every voice is heard.
  • Reactions to content
    Observe participants’ reactions to the topics being discussed. Nods of agreement, looks of surprise, or expressions of confusion can all be very revealing. If you notice a reaction that suggests confusion or disagreement, don’t hesitate to pause and address it. You could ask the participant to share their thoughts or provide further explanations to clarify any possible misunderstandings.
  • Managing conflict
    At times, disagreements or conflicts may arise during the workshop. As a facilitator, it’s your role to manage these situations and ensure a safe and respectful environment. If a conflict arises, acknowledge it openly and encourage constructive dialogue. Remind participants of the ground rules, focusing on the importance of respecting each other’s opinions and perspectives. If necessary, you could use conflict resolution techniques, such as active listening and mediating, or even take a short break to cool down the tension.

Another helpful tip is to have a space for extra ideas. This could be a whiteboard in a physical setting or a shared digital document in a virtual one. Encourage participants to write down any thoughts or ideas that come up, even if they are not immediately relevant to the current discussion. These can be revisited later and may spur new insights or discussions.

Another tip is to use workshop-specific tools such as Butter, where participants can express their emotions through the emoji reaction features and be queued to ask their questions without interrupting the speakers. Lastly, if you have a group larger than 5-6 people, consider dividing them into sub-groups and using co-facilitators to assist in managing these sub-groups. This will make the workshop experience much better for individual participants.

Observing others through laptop cameras can be difficult when there are more than 5-6 people in the virtual room. That’s a big reason why you’ll need to set the stage and establish a few ground rules at the beginning. Rules such as “Speak one person at a time,” “Use the ‘Raise Hand’ button to speak,” and “Leave questions in the chat space” can really improve the experience.

Remote workshops might not fully replace in-person workshops, where we can clearly see people’s body language and interact with each other more easily. However, with the right combination of tools and facilitation techniques, remote workshops can come very close to the in-person experience and keep participants happy.

Step 3. Respect Your Schedule

As you go about your workshop, respecting your agenda is essential. This is all about sticking to your plan, staying on track, and communicating clearly with the participants about what stage you’re at and what’s coming next.

Scheduled breaks are equally important. If you’ve planned a 10-minute break every 45 minutes, stick to that plan. Breaks offer participants time to rest, grab a quick snack (or coffee/tea), refresh their minds, and prepare for the next part. This is particularly significant during online workshops, where screen fatigue is a common problem.

We know workshops don’t always go as planned — disruptions are often part of the package. These could range from a technical glitch during a virtual workshop to a sudden question sparking a lengthy discussion, or simply starting a bit late due to late arrivals. This is where your “buffer time” comes in handy!

Respecting the buffer time allows you to handle any disruption that may come up without compromising on the workshop content or rushing through sections to recover the lost time. If there are no disruptions, this time can be used for additional discussions or exercises or even finishing the workshop earlier — something that participants usually appreciate.

Remember to stay focused. As the facilitator, you should keep discussions on track and aligned with the workshop’s goals. If the conversation veers off-topic, gently guide it back to the main point.

Applying The P.A.R.T.S. Principle: After The Workshop

Step 1. Follow Up

A critical part of concluding your workshop is following up with participants. This not only helps solidify the decisions and actions that were agreed upon but also maintains the collaborative momentum even after the workshop ends.

  • Meeting Minutes
    Send out a concise summary of the workshop, including the key points of discussion, decisions made, and next steps. This serves as a reference document for participants and ensures everyone is on the same page.
  • Action Plan
    Detail the agreed-upon action items, the person responsible for each, and the deadlines. This provides clarity on the tasks to be accomplished post-workshop.
  • Next Steps
    Clearly communicate the next steps, whether that’s a follow-up meeting, a deadline for tasks, or further resources to explore. This ensures that the momentum from the workshop continues.

Step 2. Celebrate

Completing a workshop is no small feat. It takes dedication, focus, and collaborative effort from all participants. So, don’t let this moment pass uncelebrated. Recognizing everyone’s contributions and celebrating the completion of the workshop is an essential concluding step.

This not only serves as a token of gratitude for the participants’ time and effort but also reinforces the sense of achievement, promoting a positive and inclusive culture. Reflect on the journey you all undertook together, emphasizing the progress made, the skills developed, and the insights gained.

In your closing remarks or a follow-up communication, highlight specific achievements or breakthrough moments from the workshop. You might also share key takeaways or outcomes that align with the workshop’s objectives. This helps to not only recap the learning but also underscore the value each participant brought to the workshop.

Consider personalized gestures to commemorate the workshop — certificates of completion, digital badges, or even just a special mention can make participants feel recognized and appreciated. Celebrations, no matter how small, can build camaraderie, boost morale, and leave everyone looking forward to the next workshop.

Conclusion

Let me conclude Part 2 by quoting Simon Raybould, who wonderfully encapsulates the art of facilitation:

“The secret of facilitating is to make it easy for people to learn. If you’re not making it easy, you’re not doing it right.”
— Simon Raybould

I couldn’t agree more. The inclusive workshop is not just about getting things done; it represents the symphony of diverse voices coming together, the exploration of ideas, and the collective journey toward shared objectives. Embracing this essence of inclusivity and embedding it into your workshop design and delivery makes for an environment where everyone feels respected, collaboration is enhanced, and innovative thinking flourishes.

As a facilitator, you have the power to make the workshop experience memorable and inspiring. The influence of your efforts can extend beyond the workshop, cultivating an atmosphere of respect, diversity, and inclusivity that spills over into all collaborative activities. This is the true impact and potential of well-executed, inclusive workshops.

Further Reading & References

Here are a few additional resources on the topic of workshops. I hope you will find something useful there, too.

  • Gamestorming: A Playbook for Innovators, Rulebreakers, and Changemakers, by Dave Gray, Sunni Brown, and James Macanufo
    This well-known playbook provides a wide range of strategies and activities for designing workshops that encourage a creative, productive thinking environment. If you’re leading workshops and wish to encourage more out-of-the-box thinking, this book is a perfect source of inspiration.
  • Sprint, by Jake Knapp, John Zeratsky, and Braden Kowitz
    This is another well-known book in the workshop space. The book focuses on mastering the facilitation of Design Sprint, a workshop method by Google aimed at solving business problems and fostering collaboration. If you’re keen on leading tech teams or startups, this book is a great pick.
  • The Workshop Survival Guide, by Devin Hunt and Rob Fitzpatrick
    This guide navigates you through the end-to-end process of designing and conducting successful workshops. Whether you’re a newbie or an experienced facilitator, this resource gives comprehensive support to facilitate workshops confidently.
  • Invent To Learn: Making, Tinkering, and Engineering in the Classroom, by Sylvia Libow Martinez and Gary S. Stager
    Even though it is primarily for school educators, the book shares a wide range of methods and techniques that you can adapt to any workshop setting to create inclusive, creative, and hands-on learning environments. Highly recommended for those interested in creating an inclusive environment in any setting.
  • No Hard Feelings: The Secret Power of Embracing Emotions at Work, by Liz Fosslien and Mollie West Duffy
    Although it doesn’t focus on workshops specifically, the book gives useful insights on managing emotions at work from both participant and facilitator perspectives. It offers a broad overview of different personalities at work and how to foster emotional inclusivity, which can be valuable when facilitating workshops.
  • “A Comprehensive Checklist For Running Design Workshops,” by Slava Shestopalov
    Slava’s article is a thorough guide to designing and conducting a successful workshop. This is a highly recommended read for designers, product managers, or even engineers looking to understand the nuances of running a design-centric workshop.
  • “The Workshopper Playbook — A Summary” (AJ&Smart)
    The summary of “The Workshop Playbook” discusses the 4C technique that AJ&Smart developed for constructing any workshop. The 4C’s — Collect, Choose, Create, and Commit — form an exceptional workshop framework that adheres to the double-diamond method of workshop creation. If you’re interested in gaining a more profound understanding of the 4C framework, consider reading the full book by AJ&Smart.
  • “The Secret To Healthy Remote Work: Fewer Meetings, More Workshops,” by Mehdi En-Naizi
    The article promotes the shift from traditional meetings to workshops in remote work settings to boost productivity and decrease stress. It highlights the workshops’ effectiveness, enhanced focus, and their role in promoting team unity and social interactions.
  • “10 Tips On Running An Online Meeting Your Team Won’t Hate (And Free Templates To Try!),” Anamaria Dorgo and Cheska Teresa
    This guide provides a detailed approach to overcoming the fatigue and frustration often associated with online meetings. The tips include clearly defining the meeting’s purpose, sticking to an agenda, creating an inclusive space for active participation, scheduling regular breaks, and using breakout rooms for more focused discussions.
  • “How Silent Brainstorming Easily Engages Introverts On The Project Team,” by Annie MacLeod (DPM)
    Try out this brainstorming technique next time you need to get the team’s input on a problem or solution or if you’re working on a team with a lot of introverts.
  • “Dot Voting: A Simple Decision-Making and Prioritizing Technique in UX,” by Sarah Gibbons (Nielsen Norman Group)
    A few UX workshop activities work well in any situation, and dot voting is one of them. Dot voting is a simple tool used to democratically prioritize items or make decisions in a group setting. It is an easy, straightforward way to narrow down alternatives and converge on a set of concepts or ideas.
  • “How Do You Encourage Introverts And Quiet Participants To Share Their Ideas In A Meeting?” (LinkedIn — Meeting Facilitation)
    Meetings are essential for collaboration, creativity, and innovation. But not everyone feels comfortable speaking up in a group setting. Some people may be introverted, shy, or simply prefer to listen and process information before sharing their thoughts. How do you encourage these quiet participants to contribute their valuable ideas in a meeting?
  • “Teacher Toolkit: Think-Pair-Share” (YouTube; Think-Pair-Share webpage)
    This versatile tool can be used in any classroom. The discussion technique gives students the opportunity to respond to questions in written form before engaging in meaningful conversation with other students. Asking students to write and discuss ideas with a partner before sharing with the larger group builds confidence, encourages greater participation, and results in more thoughtful discussions.
    (Editor’s Note: The Teacher Toolkit webpage is temporarily down. Until their server is restored, you can use a full webpage copy preserved by the WayBack Machine. — MB)
  • Fishbowl Conversation
    Fishbowl Conversation is great for keeping a focused conversation when you have a large group of people. At any time, only a few people have a conversation (the fish in the fishbowl). The remaining people are listeners (the ones watching the fishbowl). The caveat is that the listeners can join the discussion at any moment.
  • “Lightning Talks” (Design sprints by Google)
    Lightning Talks are a core Design Sprint method and a powerful opportunity to build ownership in the Design Sprint challenge. Plan and set up Lightning Talks before your Design Sprint begins. After all the Lightning Talks are finished, hold an HMW sharing session to capture and share all the opportunities your team has come up with.
  • AJ&Smart’s Remote Design Sprint
    The lightning demo activity from Design Sprint is a perfect example of the “Idea Gallery” type of activity. Participants work individually to create a visual or written representation of their ideas (like a poster), and then everyone walks around to view the “gallery” and people discuss the ideas.
  • “Poster Session” (Gamestorming)
    The goal of a poster session is to create a set of compelling images that summarize a challenge or topic for further discussion. Creating this set might be an “opening act,” which then sets the stage for choosing an idea to pursue, or it might be a way to get indexed on a large topic.
  • “Jigsaw Activities” (The Bell Foundation)
    Jigsaw activities are a specific type of information gap activity that works best when used with the whole class. The class is first divided into groups of four to six learners who are then given some information on a particular aspect of the topic, which they later become experts in.
  • Disney Brainstorming Method
    The Disney method was developed in 1994 by Robert Dilts based on Walt Disney’s creative approach. It’s a good mix of creativity and concreteness as it’s not only about generating ideas but also looking at them with a critical eye and, eventually, having a few of them ready to be further explored and implemented.
  • Support Extroverted Students in Remote Environment — Group Discussions
    Several video platforms have options for small group discussions. If you’re using one of these, breaking into small groups can be a great opportunity to help your extroverted students feel fulfilled (and for your more introverted students to “warm up” for group discussion).
  • “37 brainstorming techniques to unlock team creativity,” by James Smart (SessionLab)
    It’s important to find a framework and idea-generation process that empowers your group to generate meaningful results, as finding new and innovative ideas is a vital part of the growth and success of any team or organization. In this article, several effective brainstorming techniques are explored in detail in categories such as creative exercises and visual idea-generation games.
  • “Round-Robin Brainstorming” (MindTools blog)
    It’s all too easy to start a brainstorming session with good intentions but then overlook or miss potentially great ideas simply because one assertive person sets the tone for the entire meeting. This is why a tool like Round-Robin Brainstorming is so valuable. This method allows team members to generate ideas without being influenced by any one person, and you can then take these ideas into the next stages of the problem-solving process.
  • “Eysenck’s Personality Theory” (TutorialsPoint)
    What is Eysenck’s Personality Theory? This theory has been influential in personality psychology and used to explain various phenomena, including individual differences in behavior and mental health.
  • Meeting Design: For Managers, Makers, and Everyone, a book by Kevin Hoffman
    Meetings don’t have to be painfully inefficient “snoozefests” — if you design them well. Meeting Design will teach you the design principles and innovative approaches you’ll need to transform meetings from boring to creative, from wasteful to productive.
  • State of Meetings Report 2021
    How did meetings actually change in 2020? What will the long-term impact of this change be? And could 2020 have changed the way we meet for good? These are questions that will be answered in this detailed report.
  • Social Identity Theory (Science Direct)
    Social identity theory defines a group as a collection of people who categorize themselves as belonging to the same social category and internalize the category’s social identity-defining attributes to define and evaluate themselves — attributes that capture and accentuate intragroup similarities and intergroup differences.
  • “Clarizen Survey Pins Falling Productivity Levels on Communication Overload” (Bloomberg)
    A new survey by Clarizen, the global leader in collaborative work management, finds that companies’ efforts to improve collaboration among employees by opening new lines of communication can have the opposite effect.
  • “Conflict Resolution Skills: What They Are and How to Use Them” (Coursera)
    Handling conflict in any context is never fun. Issues often become more complicated than necessary when the people involved lack conflict resolution and general communication skills. In this article, you’ll learn more about conflict resolution and, more specifically, how different conflict resolution skills may be useful in various situations.
  • “Meeting Parking Lot” (The Facilitator’s School)
    A free template for handling off-topic questions, topics, and discussions. Available in Miro Template and Mural Template format.
  • SmashingConf Online Workshops
    Finally, do meet the friendly Smashing Magazine front-end & UX workshops! These remote workshops aim to give the same experience and access to experts that you would have in an in-person workshop without needing to leave your desk or couch. You can follow along with practical examples and interactive exercises, ask questions during the Q&A sessions, and use workshop recordings and materials to study at your own pace, at your own time.

Falling For Oklch: A Love Story Of Color Spaces, Gamuts, And CSS

I woke up one morning in early 2022 and caught an article called “A Whistle-Stop Tour of 4 New CSS Color Features” over at CSS-Tricks.

Wow, what a gas! A new and wider color gamut! New color spaces! New color functions! New syntaxes! It is truly a lot to take in.

Now, I’m no color expert. But I enjoyed adding new gems to my CSS toolbox and made a note to come back to that article later for a deeper read. That, of course, led to a lot of fun rabbit holes that helped put the CSS Color Module Level 4 updates in a better context for me.

That’s where Oklch comes into the picture. It’s a new color space in CSS that, according to experts smarter than me, provides access to upwards of 50% more colors than the sRGB gamut we have worked with for so long.
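For a quick taste of the syntax, oklch() takes a lightness, a chroma, and a hue. The values below are illustrative, not exact conversions of one another:

```css
.button {
  /* sRGB fallback for browsers that don't support oklch() */
  background-color: rgb(98, 0, 234);
  /* Lightness, chroma, hue; browsers that can't parse it keep the fallback */
  background-color: oklch(47% 0.29 289);
}
```

Because browsers ignore declarations they can’t parse, stacking the two declarations like this degrades gracefully on older browsers.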

Color spaces? Gamuts? These are among many color-related terms I’m familiar with but have never really understood. It’s only now that my head is wrapping around these concepts and how they relate back to CSS, and how I use color in my own work.

That’s what I want to share with you. This article is less of a comprehensive “how-to” guide than it is my own personal journey grokking new CSS color features. I actually like to think of this more as a “love story” where I fall for Oklch.

The Deal With Gamuts And Color Spaces

I quickly learned that there’s no way to understand Oklch without at least a working understanding of the difference between gamuts and color spaces. My novice-like brain thinks of them as the same: a spectrum of colors. In fact, my mind goes straight to the color pickers we all know from apps like Figma and Sketch.

I’ve always assumed that gamut is just a nerdier term for the available colors in a color picker and that a color picker is simply a convenient interface for choosing colors in the gamut.

(Assumed. Just. Simply. Three words you never want to see in the same sentence.)

Apparently not. A gamut really boils down to a range of something, which in this case, is a range of colors. That range might be based on a single point if we think of it on a single axis.

Or it might be a range of multiple coordinates like we would see on a two-axis grid. Now the gamut covers a wider range that originates from the center and can point in any direction.

The levels of those ranges can also constitute an axis, which results in some form of 3D space.

sRGB is a gamut with an available range of colors. Display P3 is another gamut offering a wider range of colors.

So, gamuts are ranges, and ranges need a reference to determine the upper and lower limits of those axes. That’s where we start talking about color spaces. A color space is what defines the format for plotting points on the gamut. While more trained folks certainly have more technical explanations, my basic understanding of color spaces is that they provide the map — or perhaps the “shape” — for the gamut and define how color is manipulated in it. So, sRGB is a color gamut that spans a range of colors, and Hex, RGB, and HSL (among others, of course) are the spaces we have to explore the gamut.

That’s why you may hear of one color space having a “wider” or “narrower” gamut than another — it’s a range of possibilities within a shape.

If I’ve piqued your interest enough, I’ve compiled a list of articles that will give you more thorough definitions of gamuts and color spaces at the end of this article.

Why We Needed New Color Spaces

The short answer is that the sRGB gamut serves as the reference point for color spaces like Hex, RGB, and HSL that provide a narrower color gamut than what is available in the newer Display P3 gamut.

We’re well familiar with many of the sRGB-based color notations and functions in CSS. The values essentially set points along the gamut using different types of coordinates.

  /* Hex */ #f8a100
  /* RGB */ rgb(248, 161, 2)
  /* HSL */ hsl(38.79 98% 49%)

For example, the rgb() function is designed to traverse the RGB color space by mixing red, green, and blue values to produce a point along the sRGB gamut.

If the difference between the two ranges in the image above doesn’t strike you as particularly significant or noticeable, that’s fair. I thought they were the same at first. But the Display P3 stripe is indeed a wider and smoother range of colors than the sRGB stripe above it when you examine it up close.

The problem is that Hex, RGB, and HSL (among other existing spaces) only support the sRGB gamut. In other words, they are unable to map colors outside of the range of colors that sRGB offers. That means there’s no way to map them to colors in the Display P3 gamut. The traditional color formats we’ve used for a long time are simply incompatible with the range of colors that has started rolling out in new hardware. We needed a new space to accommodate the colors that new technology is offering us.

Dead Grey Zones

I love this term. It accurately describes an issue with the color spaces in the sRGB gamut — greyish areas between two color points. You can see it in the following demo.

Oklch (as well as the other new spaces in the Level 4 spec) doesn’t have that issue. Hues are more like mountains, each with a different elevation.

That’s why we needed new color spaces — to get around those dead grey zones. And we needed new color functions in CSS to produce coordinates on the space to select from the newly available range of colors.
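Modern CSS even lets us choose the interpolation color space for gradients. Here’s a minimal sketch using the `in <color-space>` interpolation syntax from the CSS Images Level 4 spec (the class names are my own):

```css
/* Interpolated in sRGB, blue-to-yellow can pass through a muddy grey zone */
.gradient-srgb {
  background: linear-gradient(to right in srgb, blue, yellow);
}

/* Interpolated in Oklch, hues stay vivid along the way */
.gradient-oklch {
  background: linear-gradient(to right in oklch, blue, yellow);
}
```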

But there’s a catch. That mountain-shaped gamut of Oklch doesn’t always provide a straight path between color points, which could result in clipped or unexpected colors between them. The issue appears to be case-specific depending on the colors in use, but that also suggests there are situations where a different color space will yield better gradients.

Consistent Lightness

It’s the consistent range of saturation in HSL muddying the waters that leads to another issue along this same train of thought: inconsistent levels of lightness between colors.

The classic example is showing two colors in HSL with the same lightness value:
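To see it for yourself, compare a fully saturated yellow and blue that share the exact same HSL lightness value. A quick sketch (the Oklch conversions are approximate, and the class names are my own):

```css
/* Both claim 50% lightness in HSL, yet yellow looks far lighter than blue */
.yellow { color: hsl(60 100% 50%); }  /* #ffff00 */
.blue   { color: hsl(240 100% 50%); } /* #0000ff */

/* Oklch reflects the perceived difference in the lightness channel itself */
.yellow { color: oklch(96.8% 0.211 109.8); }
.blue   { color: oklch(45.2% 0.313 264.1); }
```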

The Oklab and Oklch color spaces were created to fix that shift. Black is more, well, black because the hues are more consistent in Oklab and Oklch than they are in LAB and LCH.

So, that’s why it’s likely better to use the oklch() and oklab() functions in CSS than it is to use their lch() and lab() counterparts. There’s less of a shift happening in the hues.

So, while Oklch/LCH and Oklab/LAB all use the same general color space, the Cartesian coordinates are the key difference. And I agree with Sitnik and Turner, who make the case that Oklch and LCH are easier to understand than LAB and Oklab. I wouldn’t be able to tell you the difference between LAB’s a and b values on the Cartesian coordinate system. But chroma and hue in LCH and Oklch? Sure! That’s as easy to understand as HSL but better!

The reason I love Oklch over Oklab is that lightness, chroma, and hue are much more intuitive to me than lightness and a pair of Cartesian coordinates.

And the reason I like Oklch better than HSL is because it produces more consistent results over a wider color gamut.

OKLCH And CSS

This is why you’re here, right? What’s so cool about all this is that we can start using Oklch in CSS today — there’s no need to wait around.

“Browser support?” you ask. We’re well covered, friends!

In fact, Firefox 113 shipped support for Oklch a mere ten days before I started writing the first draft of this article. It’s oven fresh!

Using oklch() is a whole lot easier to explain now that we have all the context around color spaces and gamuts and how the new CSS Color Module Level 4 color functions fit into the picture.

I think the most difficult thing for me is working with different ranges of values. For example, hsl() is easy for me to remember because the hue is measured in degrees, and both saturation and lightness use the same 0% to 100% range.

oklch() is different, and that’s by design: it not only accesses the wider gamut but also produces perceptually consistent results even as values change. So, while we get what I’m convinced is a way better tool for specifying color in CSS, there is a bit of a learning curve to remembering the chroma value because it’s what separates OKLCH from HSL.

The oklch() Values

Here they are:

  • l: This controls the lightness of the color, and it’s measured in a range of 0% to 100% just like HSL.
  • c: This is the chroma value, measured in decimals between 0 and 0.37.
  • h: This is the same ol’ hue we have in HSL, measured in the same range of 0deg to 360deg.

Again, it’s chroma that is the biggest learning curve for me. Yes, I had to look it up because I kept seeing it used somewhat synonymously with saturation.

Chroma and saturation are indeed different. And there are way better definitions of them out there than what I can provide. For example, I like how Cameron Chapman explains it:

“Chroma refers to the purity of a color. A hue with high chroma has no black, white, or gray added to it. Conversely, adding white, black, or gray reduces its chroma. It’s similar to saturation but not quite the same. Chroma can be thought of as the brightness of a color in comparison to white.”

— Cameron Chapman

I mentioned that chroma has an upper limit of 0.37. But it’s actually more nuanced than that, as Sitnik and Turner explain:

“[Chroma] goes from 0 (gray) to infinity. In practice, there is actually a limit, but it depends on a screen’s color gamut (P3 colors will have bigger values than sRGB), and each hue has a different maximum chroma. For both P3 and sRGB, the value will always be below 0.37.”

— Andrey Sitnik and Travis Turner

I’m so glad there are smart people out there to help sort this stuff out.

The oklch() Syntax

The formal syntax? Here it is, straight from the spec:

oklch() = oklch( [ <percentage> | <number> | none]
    [ <percentage> | <number> | none]
    [ <hue> | none]
    [ / [<alpha-value> | none] ]? )

Maybe we can “dumb” it down a bit:

oklch( [ lightness ] [ chroma ] [ hue ] )

And those values, again, are measured in different units:

oklch( [ lightness <percentage> ] [ chroma <number> ] [ hue <degrees> ] )

Those units have min and max limits:

oklch( [ lightness <percentage (0%-100%)> ] [ chroma <number> (0-0.37) ] [ hue <degrees> (0deg-360deg) ] )

An example might be the following:

color: oklch(70.9% 0.195 47.025);

Did you notice that there are no commas between values? Or that there is no unit on the hue? That’s thanks to the updated syntax defined in the CSS Color Module Level 4 spec. It also applies to functions in the sRGB gamut:

/* Old Syntax */
hsl(26.06deg, 99%, 51%)

/* New Syntax */
hsl(26.06 99% 51%)

Something else that’s new? There’s no need for a separate function to set alpha transparency! Instead, we can indicate that with a / before the alpha value:

/* Old Syntax */
hsla(26.06deg, 99%, 51%, .75)

/* New Syntax */
hsl(26.06 99% 51% / .75)

That’s why there is no oklcha() function — the new syntax allows oklch() to handle transparency on its own, like a grown-up.
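So, a semi-transparent version of the orange from the earlier example might look like this:

```css
/* The slash separates the alpha channel; no oklcha() required */
color: oklch(70.9% 0.195 47.025 / .75);
```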

Providing A Fallback

Yeah, it’s probably worth providing a fallback value for oklch() even if it does enjoy great browser support. Maybe you have to support a legacy browser like IE, or perhaps the user’s monitor or screen simply doesn’t support colors in the Display P3 gamut.

Providing a fallback doesn’t have to be hard:

color: hsl(26.06 99% 51%);
color: oklch(70.9% 0.195 47.025);

There are “smarter” ways to provide a fallback, like, say, using @supports:

.some-class {
  color: hsl(26.06 99% 51%);
}

@supports (color: oklch(100% 0 0)) {
  .some-class {
    color: oklch(70.9% 0.195 47.025);
  }
}

Or detecting Display P3 support on the @media side of things:

.some-class {
  color: hsl(26.06 99% 51%);
}

@media (color-gamut: p3) {
  .some-class {
    color: oklch(70.9% 0.195 47.025);
  }
}

Those all seem overly verbose compared to letting the cascade do the work. Maybe there’s a good reason for using media queries that I’m overlooking.

There’s A Polyfill

Of course, there’s one! There are two, in fact, that I am aware of: postcss-oklab-function and color.js. The PostCSS plugin will preprocess support for you when compiling to CSS. Alternatively, color.js will convert it on the client side.

That’s Oklch 🥰

O, Oklch! How much do I love thee? Let me count the ways:

  • You support a wider gamut of colors that make my designs pop.
  • Your space transitions between colors smoothly, like soft butter.
  • You are as easy to understand as my former love, HSL.
  • You are well-supported by all the major browsers.
  • You provide fallbacks for handling legacy browsers that will never have the pleasure of knowing you.

I know, I know. Get a room, right?!

Resources

Using Friction As A Feature In Machine Learning Algorithms

A common assumption in user experience design is that less friction makes apps more delightful. But in practice, the happy path isn’t always the smoothest. The term “friction” in the digital sense usually refers to anything that makes experiences cumbersome. It’s an analogy to the physical resistance that occurs when objects interact. Digital friction comes in many forms, from frustrating flows to confusing copy. But plenty of scenarios actually benefit from a bit of resistance. Its killer feature is mitigating unintended consequences, such as an accidental Alexa shopping spree.

You’ve likely already encountered intentional friction many times. Most apps leverage it for destructive actions, account security, and error handling, as recommended by experts from the Nielsen Norman Group to the magazine you’re currently reading.

Yet friction has found a new calling in the age of artificial intelligence. When implemented correctly, it can improve the efficiency of AI systems such as machine learning algorithms. These algorithms are often used to personalize experiences through predictive recommendations. Some applications incorporating these algorithms realize that adding a bit of friction to their interface can turn each user interaction into an opportunity to improve algorithmic quality.

While less friction makes an app smoother, a bit more may make it even smarter.

Friction As A Feature

Before venturing down the AI rabbit hole, let’s explore some simple examples showcasing the basic benefits of friction in UX. These are a helpful foundation to build off as we ascend into more complex applications for machine learning algorithms. Regardless of your familiarity, this will ground the following lessons in first principles.

Preventing Unintended Consequences

A common use for friction is error prevention, the fifth entry in Jakob Nielsen’s list of usability heuristics. In scenarios with the potential for high-cost errors, such as irreversible deletion, apps often request confirmation before executing requests. Confirmations are often displayed in a modal, locking the rest of the screen to focus attention on copy that explains an action’s implications. This extra step gives users time to consider those ramifications.

“By forcing us to slow down and think at this exact moment, we’re kept from making potentially disastrous decisions by accident.”

— Archana Madhavan in Amplitude’s “Onboarding With The IKEA Effect: How To Use UX Friction To Build Retention”

Sometimes more resistance is present when the consequences can be catastrophic. For instance, a confirmation may involve cognitive work such as typing “DELETE” to submit a deletion request. This level of resistance makes sense when considering the humbling fact of life from Steve Krug’s classic UX book Don’t Make Me Think, which states, “We don’t read pages. We scan them.” This makes it easy to imagine how a streamlined design can make it too easy to overlook the consequences of a click.

While these tactics may look comically cumbersome, they mitigate devastating downsides. This use of friction is like a train’s brakes screeching to a halt right in time to avoid a collision — everyone breathes a sigh of relief, crisis averted. This also outlines the basic framework for understanding when to add friction. It boils down to a cost-benefit analysis: do the rewards of streamlining outweigh the risk? If not, slow it down. Now let’s move on from a black & white example to venture into a grayer area.

Nudging Toward Healthy Behavior

Some problems aren’t classifiable as errors but still aren’t in anyone’s best interest. They are wicked problems: there is no clear right or wrong solution. Yet that doesn’t make failing to address them any less of an existential risk. Consider social media’s medley of knee-jerk, tribalistic behavior. It has led many to question the value of these apps altogether, which isn’t good for business or society at large. In an attempt to encourage more thoughtful discourse, these platforms turn to friction.

Twitter explored adding an extra step that asks people to read articles before retweeting them. This nudge aims to craft a more trustworthy experience for everyone by slowing the spread of misinformation. According to their reporting, people shown the prompt opened articles 40% more often, and some decided not to retweet after all. They built on this success by showing a warning before users post messages that include harmful language.

Instagram also implemented a similar feature in its fight against online bullying. Adam Mosseri, the Head of Instagram, published a blog post stating that this “intervention gives people a chance to reflect.” Although specific data isn’t provided, they suggest it had promising results in cultivating a more humane experience for their communities.

These examples show how faster is not always better. Sometimes we need restraint from saying things we don’t mean or sharing things that we don’t understand. Friction helps algorithms in a similar manner. Sometimes they also need more information about us so they don’t recommend things we won’t appreciate.

Understanding Preferences & Objectives

Let’s shift focus to AI with a simple example of how friction plays a role in machine learning algorithms. You’ve probably signed up for an app that begins by asking you a bunch of questions about your interests. Behind the scenes, an algorithm uses these answers to personalize your experience. These onboarding flows have become so common over the past decade that you may have forgotten a time before apps were smart enough to get to know you.

You may have never even questioned why you must go through a preference capture flow before getting to explore content. The value is obvious because no one wants the quickest path to something irrelevant. Many apps are simply in the business of making relevant connections, and these personalization tactics have been one of the best ways to do so. A McKinsey report illuminates this further by reporting that “35 percent of what consumers purchase on Amazon and 75 percent of what they watch on Netflix come from product recommendations based on such algorithms.”

“The top two reasons that customers churn are 1) they don’t understand your product, and 2) they don’t obtain any value from it. Customer onboarding can solve both of these issues.”

— Christina Perricone in HubSpot’s “The Ultimate Guide to Customer Onboarding

Perhaps these onboarding flows are so familiar that they don’t feel like friction. They may seem like necessary steps to unlock an app’s value. However, that perspective quickly changes for anyone designing one of these flows. The inherent tension lies in attempting to balance the diametrically opposite needs of two parties. On the one hand, an algorithm provides better output relative to its input (although asymptotes exist). Success is a function of maximizing data collection touchpoints, but this tends to result in more steps with more complex questions.

In short, the quicker an app makes a recommendation, the more likely it will be wrong. On the other hand, an extremely long onboarding flow is unlikely to make an amazing first impression on new users. I had the pleasure of walking this tightrope when designing the onboarding flow at Headliner. Each new step we added always felt like it would be the straw that broke the camel’s back. We nervously monitored our activation reports for signs we went too far but surprisingly saw no meaningful dropoff. Yet, even a slight decrease would easily be worth the improved retention that personalization yielded.

TikTok, notably, skips this kind of explicit preference capture altogether, thanks to some clever interface innovations. Its design turns user engagement into clear signals they use to tweak their algorithms. Content recommendation quality is a direct function of this, which some refer to as an algorithm’s vision.

Optimizing an app’s key interactions to understand implicit signals makes an explicit means of capturing preferences unnecessary.

Engagement Signals

Every interaction is an opportunity to improve understanding through bidirectional feedback. An interface should provide system feedback to the user engaging with it while also reporting to the system how performance meets user expectations. Everything from button taps to the absence of action can become a signal. Interfaces that successfully incorporate this are referred to as algorithm-friendly.

A study by Apple’s Machine Learning Research Department details their success in leveraging engagement signals, which they believe “provide strong indications of a user’s true intent,” to efficiently train a machine learning model through a process called Reinforcement Learning from Human Feedback. Their results documented “significant accuracy gains in a production deep learning system,” meaning that an interface designed well enough to analyze naturally occurring user behavior is all that is needed to create personalization that feels like mind reading.

Instagram actually employs this strategy as well, although its approach is a bit less cohesive since they seem to be in a perpetual state of transition.

TikTokification

But what exactly makes an interface algorithm-friendly? In TikTok’s case, it was the design decision to only show one video at a time. That’s right, friction! By decreasing the information density in the viewport at any given time, they increased their understanding of a user’s focus. This localizes interactions (or lack thereof) to specific content as quality measures.

Gustav Söderström, the Co-President, CPO & CTO at Spotify, has referred to this approach as “giving the algorithm glasses.” Compare this to the medley of distractions in other feeds, and it’s easy to imagine which one is better at collecting data.

Using friction as a tool allows designers to craft an interface that separates engagement signals from noise.

Algorithmic visibility comparison of TikTok & Instagram’s home feeds. (Source: Maximillian Piras)

As we return to my aforementioned framework for evaluating when to add friction, we can understand how it makes sense in this scenario. While each interaction may take slightly longer, relevant content can be found quicker. The trade-off makes sense since relevance sits atop a user’s hierarchy of needs.

Additionally, if you were to measure friction over a longer time horizon, you likely would find an experience with better personalization feels more frictionless. This is because the efficiency in helping users find what they’re looking for would consistently compound (although, again, asymptotes exist). So each subsequent visit theoretically requires less work on the user’s part, which makes the alternate approach look like the cumbersome one.

“The secret of why some of these products are so good at recommendations is not actually that they have better algorithms. It’s the same algorithms with a more efficient user interface.”

— Gustav Söderström in The Verge’s “Why Spotify wants to look like TikTok

While TikTok popularized this interface, anybody who was single in the last decade may notice a similarity to dating apps. Using directional gestures as engagement signals dates back to the swipeable card paradigm Tinder introduced in 2012. They, too, limited the viewport to one result at a time and used actions to inform subsequent recommendations. But TikTok took it mainstream since not everyone needs a dating app, and those who do will churn once they’ve met someone.

The results of using this paradigm in everyday entertainment led many platforms to copy it in hopes of the same algorithmic gains. The latest to embark on this journey is Spotify, much to the chagrin of their users. In fact, this decision even landed it on Mashable’s list of worst app updates in 2023. But Söderström says they don’t have a choice, and he believes in the long run, the signal clarity will make up for any interim backlash because of how much quicker it can learn user preferences. Critics fail to realize how important these changes are for Spotify’s future.

In the machine learning age, apps with inefficient interfaces for signal analysis risk becoming uncompetitive.

Algorithmic visibility comparison of Spotify’s old & new home feeds. (Source: Maximillian Piras)

Making Lemonade

The reason this approach is so powerful is due to the compounding nature of good data. Optimizing signals for any individual user creates a data network effect that benefits everyone else. It even turns negatives into positives! An individual bad experience can mitigate others from encountering the same, making the system antifragile.

This approach dates back to 2003 with the introduction of Amazon’s item-to-item collaborative filtering. You may know it as the “customers who viewed this item also viewed” recommendations.

This type of filtering produces high-quality recommendations with limited user data. It does so by building relationships between items to proxy user preferences. With only two to three data points, an algorithm can draw connections across the entire dataset. It effectively piggybacks off previous patterns that are similar enough.

This means an app like TikTok only needs a few swipes before it can make high-probability assumptions about your preferences. That’s why friction is so useful in algorithm-friendly interfaces. If the initial interactions send clean signals, then an algorithm can graph a user’s interests almost immediately.

Friction In The Future

We began in the past by reviewing how friction found its way into UX toolkits through error prevention and healthy nudges. Then we moved on to its ability to help algorithms learn user preferences and objectives. While explicit onboarding flows are still in vogue, TikTok is popularizing an interface that makes them unnecessary by using implicit engagement signals leading to significant algorithmic gains. Yet the machine learning age is just beginning, and friction is only accelerating its evolution.

Inverting The Pareto Principle

We’ve focused on algorithms that recommend content, but more diverse uses of personalization may emerge due to the newfound capabilities of Large Language Models. These models unlock the ability to manipulate unstructured data at scale. This allows engagement patterns of greater complexity to be analyzed and productized. The result is that algorithms can recommend much more than media and metadata.

Perhaps they can craft completely personalized feature sets based on our preferences and objectives. Imagine selecting effects in Photoshop and seeing suggestions such as “Creators who used this effect also used this one.” These capabilities could increase the usage of buried features that only power users tend to find.

Microsoft is exploring this by adding Copilot to its products. They claim the “average person uses less than 10% of what PowerPoint can do,” but AI will unlock all that latent value.

Microsoft Copilot uses LLMs in an attempt to unlock the 90% of features that most users don’t know exist. (Source: Microsoft Design)

Using LLMs to create feature recommendation engines is a fascinating idea. It would allow developers to stop relying on the Pareto Principle for prioritization. Especially because Joel Spolsky claims the 80/20 rule is actually a myth.

“A lot of software developers are seduced by the old “80/20” rule. It seems to make a lot of sense: 80% of the people use 20% of the features… Unfortunately, it’s never the same 20%. Everybody uses a different set of features.”

— Joel Spolsky in “Strategy Letter IV: Bloatware and the 80/20 Myth

It would be nice if irreducible simplicity in interface design were only a power law away, but feature creep is hard to combat when different people find value in different options. It’s unrealistic to believe that there is some golden 20% of features driving 80% of value. If there were, then why isn’t the Pareto Principle ever applied to content?

I can’t imagine a team at YouTube suggesting that removing 80% of videos would improve the service. Instead, it’s viewed as a routing problem: find the right piece of content for the right person. If machine learning algorithms can recommend features, I hope the value of friction goes without saying at this point. The efficiency gains unlocked by algorithm-friendly interfaces absolutely apply.

Hallucinations Or Creations

The recent inflection point in the capability of LLMs unlocks an entirely new computing paradigm. The legendary UX researcher Jakob Nielsen believes it introduces the first new UI paradigm in 60 years, which he calls Intent-Based Outcome Specification. Instead of telling computers what to do, we now explain an outcome so they can determine how to achieve it.

Using machine learning algorithms to recommend features is one example. Another fairly new example that you’re likely familiar with is chatbots like ChatGPT. Hundreds of millions of people already use it, which is a testament to how out of this world the experience is. Yet therein lies a problem: sometimes its responses literally aren’t grounded in reality because it has a tendency to make them up! This isn’t obvious to those unfamiliar with the technology’s inner workings since there aren’t many safeguards. As a result, some people become dangerously overreliant on its unverified output.

In one case, a lawyer based legal arguments on research from ChatGPT only to find out in court that multiple cited sources turned out to be completely nonexistent. The lawyer’s defense was that he was “unaware of the possibility that its content could be false.” Examples like this reinforce the importance of friction in preventing unintended consequences. While ChatGPT’s empty state mentions its limitations, they obviously aren’t stated explicitly enough for everyone.

Extra steps and prompts, such as those mentioned earlier, could better educate users about what is referred to as a “hallucination.” It’s a phenomenon in which chatbots confidently output responses that don’t align with their training data. It’s similar to telling a lie when you don’t have a correct answer, although that characterization overly anthropomorphizes the software.

Yet some see hallucinations as more of a feature than a bug. Marc Andreessen, the co-founder of Netscape, states during an interview that “another term for hallucination is just simply creativity.” He views it as a significant evolution from the hyperliteral systems of the past because they can now brainstorm and improvise.

The problem is that chatbot interfaces tend to be simplistic by attempting to be one size fits all. More controls or modes would educate users about available output types so they can specify which they expect. Sometimes we may want an imaginative response from a creative partner. Other times we want the hyper-accuracy of a deterministic calculator, such as ChatGPT’s Wolfram plugin.

Perhaps a creativity slider or persona selector similar to Maggie Appleton’s exploration will better align the system to user needs. However it’s implemented, a bit of friction can maximize benefits while minimizing risks.

Finding Your Friction

We’ve covered using friction for simple error prevention to complex algorithm optimizations. Let’s end with a few tips that make implementing it as smooth as possible.

Peak-End Rule

When adding resistance to an experience, the Peak-End Rule is a useful psychological heuristic to leverage. It’s rooted in studies by Daniel Kahneman & Amos Tversky, where they found that perception of painful experiences doesn’t tend to correlate with duration. It’s the peak & end of the experience that subjects recall.

In practice, experts suggest that delight is a function of positive emotional peaks and rewarding emotional payoffs. Optimizing for the peak & end provides room to shift focus from time spent and steps taken as performance indicators; long and complex experiences can still be delightful if designed correctly.

Maps Aren’t Territories

People experience friction emotionally, but developers see it as a value on a chart. In the same way that a map is not a territory, this ratio is only an approximation of the actual experience. It’s something to consider when evaluating any strategies for adding or removing friction. Since applications are complex ecosystems, any measurements should consider a holistic view. Every step has second-order effects, which makes one-dimensional measurements prone to blind spots.

For example, when a wrong file is deleted, the data can’t report people cursing at their computer screen. Nor is it likely to include the context of them opening a new file just to recreate their old file from scratch. The same subjectivity applies to all instances of friction. For instance, are your reports equipped to measure the trade-off of an action that takes longer but results in better data collection? It might increase algorithmic efficiency, which compounds across a neural network.

As we’ve discussed, better recommendations tend to yield better retention, which tends to yield more revenue if a business model aligns with usage. Myopic measurements will miss these types of gains, so make sure to analyze friction in a way that really matters.

Keep Pushing

As software is eating the world, AI is eating software. If it’s a paradigm shift as big as social, mobile, or even the web, then applications must adapt or die. If you want to remain competitive in the machine learning age, then don’t fear friction.

Further Reading on Smashing Magazine