Connecting With Users: Applying Principles Of Communication To UX Research

Communication is in everything we do. We communicate with users through our research, our design, and, ultimately, the products and services we offer. UX practitioners and those working on digital product teams benefit from understanding principles of communication and their application to our craft. Treating our UX processes as a mode of communication between users and the digital environment can help unveil in-depth, actionable insights.

In this article, I’ll focus on UX research. Communication is a core component of UX research, as it serves to bridge the gap between research insights, design strategy, and business outcomes. UX researchers, designers, and those working with UX researchers can apply key aspects of communication theory to help gather valuable insights, enhance user experiences, and create more successful products.

Fundamentals of Communication Theory

Communications as an academic field encompasses various models and principles that highlight the dynamics of communication between individuals and groups. Communication theory examines the transfer of information from one person or group to another. It explores how messages are encoded, transmitted, and decoded, acknowledges the potential for interference (or ‘noise’), and accounts for feedback mechanisms that enhance the communication process.

In this article, I will focus on the Transactional Model of Communication. There are many other models and theories in the academic literature on communication. I have included references at the end of the article for those interested in learning more.

The Transactional Model of Communication (Figure 1) frames communication as a two-way process that emphasizes the simultaneous sending and receiving of messages and feedback. Importantly, it recognizes that communication is shaped by context and is an ongoing, evolving process. I’ll draw on this model when applying principles of communication to UX research. You’ll find that much of what the Transactional Model covers also falls under general best practices for UX research, suggesting that even if we aren’t communication experts, much of what we should be doing is supported by research in this field.

Understanding the Transactional Model

Let’s take a deeper dive into the six key factors and their applications within the realm of UX research:

  1. Sender: In UX research, the sender is typically the researcher who conducts interviews, facilitates usability tests, or designs surveys. For example, if you’re administering a user interview, you are the sender who initiates the communication process by asking questions.
  2. Receiver: The receiver is the individual who decodes and interprets the messages sent by the sender. In our context, this could be the user you interview or the person taking a survey you have created. They receive and process your questions, providing responses based on their understanding and experiences.
  3. Message: This is the content being communicated from the sender to the receiver. In UX research, the message can take various forms, like a set of survey questions, interview prompts, or tasks in a usability test.
  4. Channel: This is the medium through which the communication flows. For instance, face-to-face interviews, phone interviews, email surveys administered online, and usability tests conducted via screen sharing are all different communication channels. You might use multiple channels simultaneously, for example, communicating over voice while also using a screen share to show design concepts.
  5. Noise: Any factor that may interfere with the communication is regarded as ‘noise.’ In UX research, this could be complex jargon that confuses respondents in a survey, technical issues during a remote usability test, or environmental distractions during an in-person interview.
  6. Feedback: The response the receiver provides after processing the message is called feedback. Examples include the answers a user gives during an interview, the data collected from a completed survey, or the physical reactions of a usability testing participant while completing a task.

Applying the Transactional Model of Communication to Preparing for UX Research

We can become complacent or feel rushed when creating our research protocols. I think this is natural given the pace of many workplaces and the need to deliver results quickly. You can apply the lens of the Transactional Model of Communication to your research preparation without adding much time. Doing so should:

  • Improve Clarity
    The model provides a clear representation of communication, empowering the researcher to plan and conduct studies more effectively.
  • Minimize Misunderstanding
    By highlighting potential sources of noise, user confusion or misunderstandings can be better anticipated and mitigated.
  • Enhance Participant Engagement
    With your attentive eye on feedback, participants are likely to feel valued, thus increasing active involvement and the quality of their input.

You can address the specific elements of the Transactional Model through the following steps while preparing for research:

Defining the Sender and Receiver

In UX research, the sender can often be the UX researcher conducting the study, while the receiver is usually the research participant. Understanding this dynamic can help researchers craft questions or tasks more empathetically and efficiently. You should try to collect some information on your participant in advance to prepare yourself for building a rapport.

For example, if you are conducting contextual inquiry with the field technicians of an HVAC company, you’ll want to dress appropriately to reflect your understanding of the context in which your participants (receivers) will be conducting their work. Showing up dressed in formal attire might be off-putting and create a negative dynamic between sender and receiver.

Message Creation

The message in UX research is typically the questions asked or tasks assigned during the study. Careful consideration of tenor, terminology, and clarity can aid data accuracy and participant engagement. Whether you are interviewing or creating a survey, you need to double-check that your audience will understand your questions and provide meaningful answers. You can pilot-test your protocol or questionnaire with a few representative individuals to identify areas that might cause confusion.

Using the HVAC example again, you might find that field technicians use certain terminology differently than you expect. Asking them what “tools” they use to complete their tasks might yield answers about physical tools, like a pipe wrench, rather than the digital tools you’d find on a computer or smartphone.

Choosing the Right Channel

The channel selection depends on the method of research. For instance, face-to-face methods might use physical verbal communication, while remote methods might rely on emails, video calls, or instant messaging. The choice of the medium should consider factors like tech accessibility, ease of communication, reliability, and participant familiarity with the channel. For example, you introduce an additional challenge (noise) if you ask someone who has never used an iPhone to test an app on an iPhone.

Minimizing Noise

Noise in UX research comes in many forms, from unclear questions inducing participant confusion to technical issues in remote interviews that cause interruptions. The key is to foresee potential issues and have preemptive solutions ready.

Facilitating Feedback

You should be prepared for how you might collect and act on participant feedback during the research. Encouraging regular feedback from the user during UX research ensures their understanding and that they feel heard. This could range from asking them to ‘think aloud’ as they perform tasks or encouraging them to email queries or concerns after the session. You should document any noise that might impact your findings and account for that in your analysis and reporting.

Track Your Alignment to the Framework

You can track what you do to align your processes with the Transactional Model prior to and during research using a spreadsheet. I’ll provide an example of a spreadsheet I’ve used in the later case study section of this article. You should create your spreadsheet during the process of preparing for research, as some of what you do to prepare should align with the factors of the model.
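As a minimal sketch of how you might generate such a tracking template programmatically (the column headings and CSV layout here are my own assumptions, not a prescribed format), Python’s standard library is enough:

```python
import csv

# The six factors of the Transactional Model, used as tracking rows.
FACTORS = ["Sender", "Receiver", "Message", "Channel", "Noise", "Feedback"]

def write_tracking_template(path):
    """Write a blank CSV template for aligning research prep with the model."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Factor", "How it applies to this study", "Actions taken", "Notes"])
        for factor in FACTORS:
            writer.writerow([factor, "", "", ""])

write_tracking_template("transactional_model_tracker.csv")
```

You can then fill in the cells as you prepare the study, which keeps the six factors visible throughout planning.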

You can use these tips for preparation regardless of the specific research method you are undertaking. Let’s now look closer at a few common methods and get specific on how you can align your actions with the Transactional Model.

Applying the Transactional Model to Common UX Research Methods

UX research relies on interaction with users. We can easily incorporate aspects of the Transactional Model of Communication into our most common methods. Utilizing the Transactional Model in conducting interviews, surveys, and usability testing can help provide structure to your process and increase the quality of insights gathered.

Interviews

Interviews are a common method used in qualitative UX research. They provide the perfect method for applying principles from the Transactional Model. In line with the Transactional Model, the researcher (sender) sends questions (messages) in-person or over the phone/computer medium (channel) to the participant (receiver), who provides answers (feedback) while contending with potential distraction or misunderstanding (noise). Reflecting on communication as transactional can help remind us we need to respect the dynamic between ourselves and the person we are interviewing. Rather than approaching an interview as a unidirectional interrogation, researchers need to view it as a conversation.

Applying the Transactional Model to conducting interviews means we should account for a number of factors that allow for high-quality communication. Note how the following overlap with what we typically call best practices.

Asking Open-ended Questions

To truly harness a two-way flow of communication, open-ended questions, rather than closed-ended ones, are crucial. For instance, rather than asking, “Do you use our mobile application?” ask, “Can you describe your use of our mobile app?” This encourages the participant to share more expansive and descriptive insights, furthering the dialogue.

Actively Listening

As the success of an interview relies on the participant’s responses, active listening is a crucial skill for UX researchers. The researcher should encourage participants to express their thoughts and feelings freely. Reflective listening techniques, such as paraphrasing or summarizing what the participant has shared, can reinforce to the interviewee that their contributions are being acknowledged and valued. It also provides an opportunity to clarify potential noise or misunderstandings that may arise.

Being Responsive

Building on the simultaneous send-receive nature of the Transactional Model, researchers must remain responsive during interviews. Providing non-verbal cues (like nodding) and verbal affirmations (“I see,” “Interesting”) lets participants know their message is being received and understood, making them feel comfortable and more willing to share.

Minimizing Noise

We should always attempt to account for noise in advance, as well as during our interview sessions. Noise, in the form of misinterpretations or distractions, can disrupt effective communication. Researchers can proactively reduce noise by conducting a dry run in advance of the scheduled interviews. This helps you become more fluent at going through the interview and also helps identify areas that might need improvement or be misunderstood by participants. You also reduce noise by creating a conducive interview environment, minimizing potential distractions, and asking clarifying questions during the interview whenever necessary.

For example, if a participant uses a term the researcher doesn’t understand, the researcher should politely ask for clarification rather than guessing its meaning and potentially misinterpreting the data.

Additional forms of noise can include participant confusion or distraction. You should let participants know to ask if they are unclear on anything you say or do. It’s a good idea to always ask participants to put their smartphones on mute. You should only provide information critical to the process when introducing the interview or tasks. For example, you don’t need to give a full background of the history of the product you are researching if that isn’t required for the participant to complete the interview. However, you should let them know the purpose of the research, gain their consent to participate, and inform them of how long you expect the session to last.

Strategizing the Flow

Researchers should build strategic thinking into their interviews to support the Transactional Model. Starting the interview with less intrusive questions can help establish rapport and make the participant more comfortable, while more challenging or sensitive questions can be left for later, when the interviewee feels more at ease.

A well-planned interview encourages a fluid dialogue and exchange of ideas. This is another area where conducting a dry run can help to ensure high-quality research. You and your dry-run participants should recognize areas where questions aren’t flowing in the best order or don’t make sense in the context of the interview, allowing you to correct the flow in advance.

While much of what the Transactional Model informs for interviews already aligns with common best practices, the model suggests we should give deeper consideration to factors we sometimes neglect when we become overly comfortable with interviewing or are unaware of the implications of overlooking them: context considerations, power dynamics, and post-interview actions.

Context Considerations

You need to account for both the context of the participant, e.g., their background, demographic, and psychographic information, as well as the context of the interview itself. You should make subtle yet meaningful modifications depending on the channel through which you are conducting the interview.

For example, you should utilize video and be aware of your facial and physical responses if you are conducting an interview using an online platform, whereas if it’s a phone interview, you will need to rely on verbal affirmations that you are listening and following along, while also being mindful not to interrupt the participant while they are speaking.

Power Dynamics

You need to be aware of how your role, background, and identity might influence the power dynamics of the interview. You can attempt to address power dynamics by sharing research goals transparently and addressing any potential concerns about bias that a participant shares.

We are responsible for creating a safe and inclusive space for our interviews. You do this through the use of inclusive language, listening actively without judgment, and being flexible to accommodate different ways of knowing and expressing experiences. You should also empower participants as collaborators whenever possible. You can offer opportunities for participants to share feedback on the interview process and analysis. Doing this validates participants’ experiences and knowledge and ensures their voices are heard and valued.

Post-Interview Actions

You have a number of options for actions that can close the loop of your interviews with participants in line with the “feedback” the model suggests is a critical part of communication. Some tactics you can consider following your interview include:

  • Debriefing
    Dedicate a few minutes at the end to discuss the participant’s overall experience, impressions, and suggestions for future interviews.
  • Short surveys
    Send a brief survey via email or an online platform to gather feedback on the interview experience.
  • Follow-up calls
    Consider follow-up calls with specific participants to delve deeper into their feedback and gain additional insight if you find that is warranted.
  • Thank you emails
    Include a “feedback” section in your thank you email, encouraging participants to share their thoughts on the interview.

You also need to do something with the feedback you receive. Researchers and product teams should make time for reflexivity and critical self-awareness.

As practitioners in a human-focused field, we are expected to continuously examine how our assumptions and biases might influence our interviews and findings.

We shouldn’t practice our craft in a silo. Instead, seeking feedback from colleagues and mentors to maintain ethical research practices should be a standard practice for interviews and all UX research methods.

By considering interviews as an ongoing transaction and exchange of ideas rather than a unidirectional Q&A, UX researchers can create a more communicative and engaging environment. You can see how models of communication have informed best practices for interviews. With a better knowledge of the Transactional Model, you can go deeper and check your work against the framework of the model.

Surveys

The Transactional Model of Communication reminds us to acknowledge the feedback loop even in seemingly one-way communication methods like surveys. Instead of merely sending out questions and collecting responses, we need to provide space for respondents to voice their thoughts and opinions freely. When we make participants feel heard, engagement with our surveys should increase, dropouts should decrease, and response quality should improve.

Like other methods, surveys involve a sender (the researcher(s) who create the instructions and questionnaire), a message (the survey itself, including any instructions, disclaimers, and consent forms), a channel (how the survey is administered, e.g., online, in person, or pen and paper), a receiver (the participant), noise (potential misunderstandings or distractions), and feedback (the responses).

Designing the Survey

Understanding the Transactional Model will help researchers design more effective surveys. Researchers are encouraged to be aware of both their role as the sender and to anticipate the participant’s perspective as the receiver. Begin surveys with clear instructions, explaining why you’re conducting the survey and how long it’s estimated to take. This establishes a more communicative relationship with respondents right from the start. Test these instructions with multiple people prior to launching the survey.

Crafting Questions

The questions should be crafted to encourage feedback and not just a simple yes or no. You should consider asking scaled questions or items that have been statistically validated to measure certain attributes of users.

For example, if you were looking deeper at a mobile banking application, rather than asking, “Did you find our product easy to use?” you would want to break that out into multiple aspects of the experience and ask about each with a separate question such as “On a scale of 1–7, with 1 being extremely difficult and 7 being extremely easy, how would you rate your experience transferring money from one account to another?”.

Minimizing Noise

Reducing ‘noise,’ or misunderstandings, is crucial for increasing the reliability of responses. Your first line of defense in reducing noise is to make sure you are sampling from the appropriate population you want to conduct the research with. You need to use a screener that will filter out non-viable participants prior to including them in the survey. You do this when you correctly identify the characteristics of the population you want to sample from and then exclude those falling outside of those parameters.

Additionally, you should prioritize recruiting participants through random sampling from the population of potential participants rather than relying on a convenience sample, as this helps ensure you are collecting reliable data.
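As a sketch of the difference (the participant pool and sample size below are made-up assumptions for illustration), simple random sampling can be done with Python’s standard library:

```python
import random

# Hypothetical pool of opted-in participants (illustrative only).
participant_pool = [f"participant_{i}@example.com" for i in range(500)]

# A convenience sample might just take whoever is easiest to reach,
# e.g., the first 20 sign-ups — which can bias results toward early adopters.
convenience_sample = participant_pool[:20]

# A simple random sample gives every member of the pool an equal chance
# of selection, improving the reliability of your survey data.
random.seed(42)  # fixed seed so the draw is reproducible
random_sample = random.sample(participant_pool, k=20)

print(len(random_sample))  # → 20
```

The same idea applies whatever tool you recruit with: draw from the whole screened pool, not just the respondents who were quickest to reply.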

When looking at the survey itself, there are a number of recommendations to reduce noise. You should ensure questions are easily understandable, avoid technical jargon, and sequence questions logically. A question bank should be reviewed and tested before being finalized for distribution.

For example, question statements like “Do you use and like this feature?” can confuse respondents because they are actually two separate questions: do you use the feature, and do you like the feature? You should separate out questions like this into more than one question.

You should use visual aids that are relevant whenever possible to enhance the clarity of the questions. For example, if you are asking questions about an application’s “Dashboard” screen, you might want to provide a screenshot of that page so survey takers have a clear understanding of what you are referencing. You should also avoid the use of jargon if you are surveying a non-technical population and explain any terminology that might be unclear to participants taking the survey.

The Transactional Model suggests active participation in communication is necessary for effective communication. Participants can become distracted or take a survey without intending to provide thoughtful answers. You should consider adding a question somewhere in the middle of the survey to check that participants are paying attention and responding appropriately, particularly for longer surveys.

This is often done using a simple math problem such as “What is the answer to 1+1?” Anyone not responding with “2” might not be paying adequate attention to the survey, and you’d want to look closer at their responses, eliminating them from your analysis if deemed appropriate.
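A minimal sketch of screening out failed attention checks, assuming responses are stored as dictionaries with a hypothetical `attention_check` field (the field names and data here are my own, not a standard format):

```python
# Each response is a dict; "attention_check" holds the answer to "What is 1+1?"
responses = [
    {"id": 1, "attention_check": "2", "rating": 6},
    {"id": 2, "attention_check": "5", "rating": 7},  # failed the check
    {"id": 3, "attention_check": "2", "rating": 3},
]

def passed_attention_check(response):
    """Return True if the attention-check answer is correct."""
    return response["attention_check"].strip() == "2"

valid = [r for r in responses if passed_attention_check(r)]
flagged = [r for r in responses if not passed_attention_check(r)]

print([r["id"] for r in valid])    # → [1, 3]
print([r["id"] for r in flagged])  # → [2]
```

Rather than deleting flagged responses automatically, it’s worth reviewing them first, since a single wrong answer doesn’t always mean the rest of the data is unusable.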

Encouraging Feedback

While descriptive feedback questions are one way of promoting dialogue, you can also include areas where respondents can express any additional thoughts or questions they have outside of the set question list. This is especially useful in online surveys, where researchers can’t immediately address participants’ questions or clarify doubts.

You should be mindful that too many open-ended questions can cause fatigue, so you should limit the number of open-ended questions. I recommend two to three open-ended questions depending on the length of your overall survey.

Post-Survey Actions

After collecting and analyzing the data, you can send follow-up communications to the respondents. Let them know the changes made based on their feedback, thank them for their participation, or even share a summary of the survey results. This fulfills the Transactional Model’s feedback loop and communicates to the respondent that their input was received, valued, and acted upon.

You can also meet this suggestion by providing an email address for participants to follow up if they desire more information post-survey. You are allowing them to complete the loop themselves if they desire.

Applying the Transactional Model to surveys can breathe new life into the way surveys are conducted in UX research. It encourages active participation from respondents, making the process more interactive and engaging while enhancing the quality of the data collected. You can experiment with applying some or all of the steps listed above. You will likely find you are already doing much of what’s mentioned; however, being explicit can help ensure you are thoughtfully applying these principles from the field of communication.

Usability Testing

Usability testing is another clear example of a research method highlighting components of the Transactional Model. In the context of usability testing, applying the Transactional Model opens a pathway to a richer understanding of the user experience by positioning both the user and the researcher as simultaneous senders and receivers of communication.

Here are some ways a researcher can use elements of the Transactional Model during usability testing:

Task Assignment as Message Sending

When a researcher assigns tasks to a user during usability testing, they act as the sender in the communication process. To ensure the user accurately receives the message, these tasks need to be clear and well-articulated. For example, a task like “Register a new account on the app” sends a clear message to the user about what they need to do.

You don’t need to tell them how to do the task, as usually, that’s what we are trying to determine from our testing, but if you are not clear on what you want them to do, your message will not resonate in the way it is intended. This is another area where a dry run in advance of the testing is an optimal solution for making sure tasks are worded clearly.

Observing and Listening as Message Receiving

As the participant interacts with the application, concept, or design, the researcher, as the receiver, picks up on verbal and nonverbal cues. For instance, if a user is clicking around aimlessly or murmuring in confusion, the researcher can take these as feedback about elements of the design that are unclear or hard to use. You can also ask the user to explain the cues you observe, both to confirm your interpretation and to let them know their communication is being received.

Real-time Interaction

The transactional nature of the model recognizes the importance of real-time interaction. For example, if during testing, the user is unsure of what a task means or how to proceed, the researcher can provide clarification without offering solutions or influencing the user’s action. This interaction follows the communication flow prescribed by the transactional model. We lose the ability to do this during unmoderated testing; however, many design elements are forms of communication that can serve to direct users or clarify the purpose of an experience (to be covered more in article two).

Noise

In usability testing, noise could mean unclear tasks, users’ preconceived notions, or even issues like slow software response. Acknowledging noise can help researchers plan and conduct tests better. Again, carrying out a pilot test can help identify any noise in the main test scenarios, allowing for necessary tweaks before actual testing. Other forms of noise can be less obvious but equally intrusive. For example, if you are conducting a test using a MacBook laptop and your participant is used to a PC, there is noise you need to account for, given their unfamiliarity with the laptop you’ve provided.

The fidelity of the design artifact being tested might introduce another form of noise. I’ve always advocated testing at any level of fidelity, but you should note that if you are using “Lorem Ipsum” or black and white designs, this potentially adds noise.

One of my favorite examples of this was a time when I was testing a financial services application, and the designers had put different balances on the screen; however, the total for all balances had not been added up to the correct total. Virtually every person tested noted this discrepancy, although it had nothing to do with the tasks at hand. I had to acknowledge we’d introduced noise to the testing. As at least one participant noted, they wouldn’t trust a tool that wasn’t able to total balances correctly.

Encouraging Feedback

Under the Transactional Model’s guidance, feedback isn’t just final thoughts after testing; it should be facilitated at each step of the process. Encouraging ‘think aloud’ protocols, where the user verbalizes their thoughts, reactions, and feelings during testing, ensures a constant flow of useful feedback.

You are receiving feedback throughout the process of usability testing, and the model provides guidance on how you should use that feedback to create a shared meaning with the participants. You will ultimately summarize this meaning in your report. You’ll later end up uncovering if this shared meaning was correctly interpreted when you design or redesign the product based on your findings.

We’ve now covered how to apply the Transactional Model of Communication to three common UX Research methods. All research with humans involves communication. You can break down other UX methods using the Model’s factors to make sure you engage in high-quality research.

Analyzing and Reporting UX Research Data Through the Lens of the Transactional Model

The Transactional Model of Communication doesn’t only apply to the data collection phase (interviews, surveys, or usability testing) of UX research. Its principles can provide valuable insights during the data analysis process.

The Transactional Model instructs us to view any communication as an interactive, multi-layered dialogue — a concept that is particularly useful when unpacking user responses. Consider the ‘message’ component: in the context of data analysis, the messages are the users’ responses. As researchers, thinking critically about how respondents may have internally processed the survey questions, interview discussion, or usability tasks can yield richer insights into user motivations.

Understanding Context

Just as the Transactional Model emphasizes the simultaneous interchange of communication, UX researchers should consider the user’s context while interpreting data. Decoding the meaning behind a user’s words or actions involves understanding their background, experiences, and the situation when they provide responses.

Deciphering Noise

In the Transactional Model, noise presents a potential barrier to effective communication. Similarly, researchers must be aware of recurring themes or frequently highlighted issues during analysis. Noise, in this context, could involve patterns of confusion, misunderstandings, or problems consistently highlighted by users. You need to account for this, e.g., the example I provided where participants repeatedly referred to the incorrect math on static wireframes.

Considering Sender-Receiver Dynamics

Remember that as a UX researcher, your interpretation of user responses will be influenced by your own understanding, biases, and preconceptions, just as the responses were influenced by the user’s perceptions. By acknowledging this, researchers can strive to neutralize any subjective influence and ensure the analysis remains centered on the user’s perspective. You can ask other researchers to double-check your work to attempt to account for bias.

For example, if you come up with a clear theme that users need better guidance in the application you are testing, another researcher from outside of the project should come to a similar conclusion if they view the data; if not, you should have a conversation with them to determine what different perspectives you are each bringing to the data analysis.
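The model itself doesn’t prescribe a statistic, but one lightweight way to quantify how well two researchers agree when coding the same data (a suggestion of mine, not part of the source method) is Cohen’s kappa, which measures agreement beyond what chance alone would produce. A minimal sketch in Python, with hypothetical theme labels:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: agreement between two coders, corrected for chance."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    counts_a = Counter(labels_a)
    counts_b = Counter(labels_b)
    # Expected chance agreement from each coder's label frequencies.
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two researchers independently tag the same 8 interview excerpts with themes.
coder_1 = ["guidance", "guidance", "trust", "speed", "guidance", "trust", "speed", "guidance"]
coder_2 = ["guidance", "trust",    "trust", "speed", "guidance", "trust", "speed", "guidance"]

print(round(cohens_kappa(coder_1, coder_2), 2))  # → 0.81
```

Values near 1 indicate strong agreement; lower values are a prompt for exactly the kind of conversation described above about the perspectives each analyst is bringing to the data.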

Reporting Results

Understanding your audience is crucial for delivering a persuasive UX research presentation. Tailoring your communication to resonate with the specific concerns and interests of your stakeholders can significantly enhance the impact of your findings. Here are some more details:

  • Identify Stakeholder Groups
    Identify the different groups of stakeholders who will be present in your audience. This could include designers, developers, product managers, and executives.
  • Prioritize Information
    Prioritize the information based on what matters most to each stakeholder group. For example, designers might be more interested in usability issues, while executives may prioritize business impact.
  • Adapt Communication Style
    Adjust your communication style to align with the communication preferences of each group. Provide technical details for developers and emphasize user experience benefits for executives.

Acknowledging Feedback

Respecting the Transactional Model’s feedback loop, remember to revisit user insights after implementing design changes. This keeps you user-focused, continuously validating or adjusting your interpretations based on users’ evolving feedback. One way to do this is to reconnect with users, show them the updated designs, and ask questions to confirm whether the issues you attempted to resolve actually were resolved.

Another way to address this, without reconnecting with users, is to create a spreadsheet or other document that tracks every recommendation made and reconciles it against what is actually updated in the design. You should be able to map the changes users requested to updates or additions to the product roadmap for future releases. This documents that users were heard and that attempts to address their pain points are accounted for.
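As a minimal sketch, such a tracking document can also be generated programmatically rather than by hand. The Python example below writes a small recommendation tracker to CSV; the column names and sample rows are hypothetical illustrations, not data from the study:

```python
import csv

# Hypothetical tracker rows: each user-reported issue is mapped to the
# recommendation made, the resulting design update, and its roadmap status.
recommendations = [
    {
        "issue": "Bill-pay flow is confusing",
        "recommendation": "Add step-by-step guidance",
        "design_update": "Inline helper text added to payee setup",
        "roadmap_status": "Shipped in v2.3",
    },
    {
        "issue": "Financial jargon unclear",
        "recommendation": "Use plain-language labels",
        "design_update": "Pending copy review",
        "roadmap_status": "Planned for next quarter",
    },
]

with open("recommendation_tracker.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(recommendations[0].keys()))
    writer.writeheader()               # column headers
    writer.writerows(recommendations)  # one row per recommendation
```

Keeping the tracker in a machine-readable format like this makes it easy to filter unresolved recommendations before each roadmap review.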

Crucially, the Transactional Model teaches us that communication is rarely simple or one-dimensional. It encourages UX researchers to take a more nuanced, context-aware approach to data analysis, resulting in deeper user understanding and more accurate, user-validated results.

By maintaining an ongoing feedback loop with users and continually refining interpretations, researchers can ensure that their work remains grounded in real user experiences and needs.

Tracking Your Application of the Transactional Model to Your Practice

You might find it useful to track how you align your research planning and execution with the framework of the Transactional Model. I’ve created a spreadsheet outlining key factors of the model and have used it in some of my work. Below is an example derived from a study conducted for a banking client that included interviews and usability testing. I completed this spreadsheet while planning and conducting the interviews; anonymized data from our study shows how you might populate a similar spreadsheet with your own information.

You can customize the spreadsheet structure to fit your specific research topic and interview approach. By documenting your application of the transactional model, you can gain valuable insights into the dynamic nature of communication and improve your interview skills for future research.

Pre-Interview Planning

  • Topic/Question (Aligned with research goals)
    Description: Identify the research question and design questions that encourage open-ended responses and co-construction of meaning.
    Example: Testing the mobile banking app’s bill payment feature. “How do you set up a new payee?” “How would you make a payment?” “What are your overall impressions?”
  • Participant Context
    Description: Note relevant demographic and personal information to tailor questions and avoid biased assumptions.
    Example: A 35-year-old working professional, a frequent user of online banking and the mobile application, but unfamiliar with using the app for bill pay.
  • Engagement Strategies
    Description: Outline planned strategies for active listening, open-ended questions, clarification prompts, and building rapport.
    Example: Open-ended follow-up questions (“Can you elaborate on XYZ?” or “Please explain more about what you mean by XYZ.”), active listening cues, positive reinforcement (“Thank you for sharing those details.”).
  • Shared Understanding
    Description: List potential challenges to understanding the participant’s perspective and strategies for ensuring shared meaning.
    Example: Initially, the participant expressed some confusion about the financial jargon I used. I clarified and provided simpler [non-jargon] explanations, ensuring we were on the same page.

During Interview

  • Verbal Cues
    Description: Track the participant’s language choices, including metaphors, pauses, and emotional expressions.
    Example: The participant used a hesitant tone when describing negative experiences with the bill payment feature. When questioned, they stated it was “likely their fault” for not understanding the flow [it isn’t their fault].
  • Nonverbal Cues
    Description: Note the participant’s nonverbal communication, like body language, facial expressions, and eye contact.
    Example: Frowning and crossed arms when discussing specific pain points.
  • Researcher Reflexivity
    Description: Record moments where your own biases or assumptions might influence the interview, along with potential mitigation strategies.
    Example: Recognized that my own familiarity with the app might bias my interpretation of users’ understanding [e.g., going slower than I would have when entering information]. Asked clarifying questions to avoid imposing my assumptions.
  • Power Dynamics
    Description: Identify instances where power differentials emerge and the actions taken to address them.
    Example: The participant expressed trust in the research but admitted feeling hesitant to criticize the app directly. I emphasized anonymity and encouraged open feedback.
  • Unplanned Questions
    Description: List unplanned questions prompted by the participant’s responses that deepen understanding.
    Example: “What alternative [non-bank app] methods do you use for paying bills?” (Prompted by the participant’s frustration with the app’s bill pay.)

Post-Interview Reflection

  • Meaning Co-construction
    Description: Analyze how both parties contributed to building shared meaning and insights.
    Example: Through dialogue, we collaboratively identified specific design flaws in the bill payment interface and explored additional pain points and areas that worked well.
  • Openness and Flexibility
    Description: Evaluate how well you adapted to unexpected responses and maintained an open conversation.
    Example: Adapted questioning based on the participant’s emotional cues and adjusted language to minimize technical jargon when that issue was raised.
  • Participant Feedback
    Description: Record any feedback received from the participant regarding the interview process and areas for improvement.
    Example: “Thank you for the opportunity to be in the study. I’m glad my comments might help improve the app for others. I’d be happy to participate in future studies.”
  • Ethical Considerations
    Description: Reflect on whether the interview aligned with principles of transparency, reciprocity, and acknowledging power dynamics.
    Example: Maintained anonymity throughout the interview and ensured informed consent was obtained. Data will be stored and secured as outlined in the research protocol.
  • Key Themes/Quotes
    Description: Use this column to identify emerging themes or save quotes you might refer to later when creating the report.
    Example: Frustration with a confusing interface, lack of intuitive navigation, and desire for more customization options.
  • Analysis Notes
    Description: Use as many lines as needed to add notes for consideration during analysis.
    Example: Add notes here.

You can use the suggested columns from this table as you see fit, adding or subtracting as needed, particularly if you use a method other than interviews. I usually add the following additional columns for logistical purposes:

  • Date of Interview,
  • Participant ID,
  • Interview Format (e.g., in person, remote, video, phone).

Conclusion

By incorporating aspects of communication theory into UX research, UX researchers and those who work with UX researchers can enhance the effectiveness of their communication strategies, gather more accurate insights, and create better user experiences. Communication theory provides a framework for understanding the dynamics of communication, and its application to UX research enables researchers to tailor their approaches to specific audiences, employ effective interviewing techniques, design surveys and questionnaires, establish seamless communication channels during usability testing, and interpret data more effectively.

As the field of UX research continues to evolve, integrating communication theory into research practices will become increasingly essential for bridging the gap between users and design teams, ultimately leading to more successful products that resonate with target audiences.

As a UX professional, it is important to continually explore and integrate new theories and methodologies to enhance your practice. Integrating communication theory into UX research is an ongoing journey of learning and applying best practices. Embracing this approach helps you better understand user needs, communicate findings effectively to stakeholders, and foster collaborative decision-making, ultimately driving positive user experiences and successful design outcomes.

References and Further Reading


Sketchnotes And Key Takeaways From SmashingConf Antwerp 2023

I have been reading and following Smashing Magazine for years — I’ve read many of the articles and even some of the books published. I’ve also been able to attend several Smashing workshops, and perhaps one of the peak experiences of my isolation times was the online SmashingConf in August 2020. Every detail of that event was so well-designed that I felt genuinely welcomed. The mood was exceptional, and even though it was a remote event, I experienced similar vibes to an in-person conference. I felt the energy of belonging to a tribe of other great design professionals.

I was really excited to find out that the talks at SmashingConf Antwerp 2023 were going to be focused on design and UX! This time, I attended remotely again, just like back in 2020: I could watch and live-sketchnote seven talks (and I’m already looking forward to watching the remaining talks I couldn’t attend live).

Even though I participated remotely, I got really inspired. I had a lot of fun, and I felt truly involved. There was an online platform where the talks were live-streamed, as well as a dedicated Slack channel for the conference attendees. Additionally, I shared my key takeaways and sketchnotes right after each talk on social media. That way, I could have little discussions around the topics, even though I wasn’t there in person.

In this article, I would like to offer a brief summary of each talk, highlighting my takeaways (and my screenshots). Then, I will share my sketchnotes of those seven talks (+ two more I watched after the conference).

Day 1 Talks

Introduction

At the very beginning of the conference, Vitaly said hello to everyone watching online, so even though I participated remotely, I felt welcomed. :-) He also shared that there is an overarching mystery theme of the conference, and the first one who could guess it would get a free ticket for the next Smashing conference — I really liked this gamified approach.

Vitaly also reminded us that we should share our success stories as well as our failure stories (how we’ve grown, learned, and improved over time).

We were introduced to the Pac-Man rule: if we are having a conversation and someone from the back wants to join, open up for them, just like Pac-Man does (well, Pac-Man opens his mouth because he wants to eat; here, the goal is to encourage conversations).

In between talks, Vitaly told us a lot of design jokes; for instance, this one related to design systems was a great fit for the first talk:

Where did Gray 500 and Button Primary go on their first date?

To a naming convention.

After this little warm-up, Molly Hellmuth delivered the first talk of the event. Molly has been a great inspiration for me not only as a design system consultant but also as a content creator and community builder. I’m also enthusiastic about learning the more advanced aspects of Figma, so I was really glad that Molly chose this topic for her talk.

“Design System Traps And Pitfalls” by Molly Hellmuth

Molly is a design system expert specializing in Figma design systems, and she teaches a course called Design System Bootcamp. Every time she runs this course, she sees students make similar mistakes. In this talk, she shared the most common mistakes and how to avoid them.

Molly shared the most common mistakes she experienced during her courses:

  • Adopting new features too quickly,
  • Adding too many color variables,
  • Using groups instead of frames,
  • Creating jumbo component sets,
  • Not prepping icons for our design system.

She also shared some rapid design tips:

  • Set the nudge amount to 8
  • We can hide components in a library by adding a period or an underscore
  • We can go to a specific layer by double-clicking on the layer icon
  • Scope variables, e.g., make colors meant for text available only for text
  • Use auto layout stacking order (it is not only for avatars, e.g., it is great for dropdown menus, too).

“How AI Ate My Website” by Luke Wroblewski

I have been following Luke Wroblewski since the early days of my design career. I read his book “Web Form Design: Filling in the Blanks” back in 2011, so I was really excited to attend his talk. Also, the topic of AI and design has been a hot one lately, so I was very curious about the conversational interface he created.

Luke has been creating content for 27 years; for example, there are 2,012 articles on his website, plus videos, books, and PDFs. He created an experience that lets us ask questions of an AI that has been fed all of this data (his articles, videos, books, and so on).

In his talk, he explained how he created the interaction pattern for this conversational interface. It is more like a FAQ pattern and not a chatbot pattern. Here are some details:

  • He also tackled the “what I should ask” problem by providing suggested questions below the most recent answer; that way, he can provide a smoother, uninterrupted user flow.

  • He linked all the relevant sources so that users can dig deeper (he calls it the “object experience”). Users can click on a citation link, and then they are taken to, e.g., a specific point of a video.

He also showed us how AI eats all this stuff (e.g., processing, data cleaning) and talked about how it assembles the answers (e.g., how to pick the best answers).

So, to compare Luke’s experience to, e.g., ChatGPT, here are some points:

  • It is more opinionated and specific (ChatGPT gives a “general world knowledge” answer);
  • We can dig deeper by using the relevant resources.

You can try it out on the ask.lukew.com website.

“A Journey in Enterprise UX” by Stéphanie Walter

Stéphanie Walter is also a huge inspiration and a designer friend of mine. I really appreciate her long-form articles, guides, and newsletters. Additionally, I have been working in banking and fintech for the last couple of years, so working for an enterprise (in my case, a bank) is a situation I’m familiar with, and I couldn’t wait to hear about a fellow designer’s perspective and insights about the challenges in enterprise UX.

Stéphanie’s talk resonated with me on so many levels, and below is a short summary of her insightful presentation.

On complexity, she discussed the following points:

  1. Looking at quantitative data: What? How much?
    Doing some content analysis (e.g., any duplicates?)
  2. After the “what” and discovering the “as-is”: Why? How?
    • By getting access to internal users;
    • Conducting task-focused user interviews;
    • Documenting everything throughout the process;
    • “Show me how you do this today” to tackle the “jumping into solutions” mindset.

Stéphanie shared with us that there are two types of processes:

  • Fast track
    Small features, tweaks on the UI — in these cases, there is no time or no need to do intensive research; it involves mostly UI design.
  • Specific research for high-impact parts
    When there is a lot of doubt (“we need more data”). It involves gathering the results of the previous research activities; scheduling follow-up sessions; iterating on design solutions and usability testing with prototypes (usually using Axure).
    • Observational testing
      “Please do the things you did with the old tool but with the new tool” (instead of using detailed usability test scripts).
    • User diary + longer studies to help understand the behavior over a period of time.

She also shared what she wishes she had known sooner about designing for enterprise experiences, e.g., it can be a trap to oversimplify the UI or the importance of customization and providing all the data pieces needed.

It was also very refreshing that she corrected the age-old saying about user interfaces, you know, the one that starts with, “The user interface is like a joke...”. The thing is, sometimes we need some prior knowledge to understand a joke. That doesn’t make the joke bad. It is the same with user interfaces: sometimes, we just need some prior knowledge to understand them.

Finally, she talked about some of the main challenges in such environments, like change management, design politics and complexity.

Her design process in enterprise UX looks like this:

  • Complexity
    How am I supposed to design that?
  • Analysis
    Making sense of this complexity.
  • Research
    Finding and understanding the puzzle pieces.
  • Solution design
    Eventually, everything clicks into place.

The next talk was about creating a product with a Point of View, meaning that a product’s tone of voice can be “unique,” “unexpected,” or “interesting.”

“Designing A Product With A Point Of View” by Nick DiLallo

Unlike the other eight speakers whose talks I sketched, I wasn’t familiar with Nick’s work before the conference. However, I’m really passionate about UX writing (and content design), so I was excited to hear Nick’s points. After his talk, I have become a fan of his work (check out his great articles on Medium).

In his talk, Nick DiLallo shared many examples of good and not-so-good UX copy.

His first tip was to start with defining our target audience since the first step towards writing anything is not writing. Rather, it is figuring out who is going to be reading it. If we manage to define who will be reading as a starting point, we will be able to make good design decisions for our product.

For instance, instead of designing for “anyone who cooks a lot”, it is a lot better to design for “expert home chefs”. We don’t need to tell them to “salt the water when they are making pasta”.

After defining our audience, the next step is saying something interesting. Nick’s recommendation is that we should start with one good sentence that can unlock the UI and the features, too.

The next step is about choosing good words; for example, instead of “join” or “subscribe,” we can say “become a member.” However, sometimes we shouldn’t get too creative, e.g., we should never say “add to submarine” instead of “add to cart” or “add to basket”.

We should design our writing. This means that what we include signals what we care about, and the bigger something is visually, the more it will stand out (it is about establishing a meaningful visual hierarchy).

We should also find moments to add voice, e.g., the footer can contain more than a legal text. On the other hand, there are moments and places that are not for adding more words; for instance, a calendar or a calculator shouldn’t contain brand voice.

Nick also highlighted that the entire interface speaks about who we are and what our worldview is. For example, what options do we include when we ask the user’s gender?

He also added that what we do is more important than what we write. For example, we can say that it is a free trial, but if the next thing the UI asks for is our bank card details, it’s like someone saying they are vegetarian and then eating a cheeseburger in front of you.

Nick closed his talk by saying that companies should hire writers or content designers since words are part of the user experience.

“When writing and design work together, the results are remarkable.”

“The Invisible Power of UI Typography” by Oliver Schöndorfer

This year, Oliver has quickly become one of my favorite design content creators. I attended some of his webinars, I’m a subscriber of his Font Friday newsletter, and I really enjoy his “edutainment style”. He is like a stand-up comedian. His talks and online events are full of great jokes and fun, but at the same time, Oliver always manages to share his extensive knowledge about typography and UI design. So I knew that the following talk was going to be great. :)

During his talk, Oliver redesigned a banking app screen live, gradually adding the enhancements he talked about. His talk started with this statement:

“The UI is the product, and a big part of it is the text.”

After that, he asked an important question:

“How can we make the type work for us?”

Some considerations we should keep in mind:

  • Font Choice
    System fonts are boring. We should think about what the voice of our product is! So, pick fonts that:
    • are in the right category (mostly sans, sometimes slabs),
    • have even strokes with a little contrast (it must work in small sizes),
    • have open-letter shapes,
    • have letterforms that are easy to distinguish (the “Il1” test).

  • Hierarchy
    i.e. “What is the most important thing in this view?”

Start with the body text, then emphasize and deemphasize everything else — and watch out for the accessibility aspects (e.g. minimum contrast ratios).

Accessibility is important, too!

  • Spacing
    Relations should be clear (law of proximity), and we should be able to define a base unit.

Then we can add some final polish (and if it is appropriate, some delight).

As Oliver said, “Go out there and pimp that type!”

Day 2 Talks

“Design Beyond Breakpoints” by Christine Vallaure

I’m passionate about the designer-developer collaboration topic (I have a course and some articles about it), so I was very excited to hear Christine’s talk! Additionally, I really appreciate all the Figma content she shares, so I was sure that I’d learn some new exciting things about our favorite UI design software.

Christine’s talk was about pushing the current limits of Figma: how to do responsive design in Figma, e.g., by using so-called container queries. These queries are like media queries, but instead of looking at the viewport size, we look at the container. So a component behaves differently if, e.g., it is inside a sidebar, and we can also nest container queries (e.g., tell an icon button inside a card that, upon resizing, the icon should disappear).

Recommended Reading: A Primer On CSS Container Queries by Stephanie Eckles
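For readers who know the CSS side, the behavior Christine emulated in Figma corresponds roughly to the following CSS container query sketch (the class names here are hypothetical, not from her talk):

```css
/* Mark the sidebar as a query container. */
.sidebar {
  container-type: inline-size;
  container-name: sidebar;
}

/* The card responds to its container, not the viewport: when the
   sidebar is narrower than 20rem, hide the icon inside the card’s
   icon button. */
@container sidebar (max-width: 20rem) {
  .card .icon-button .icon {
    display: none;
  }
}
```

The key difference from a media query is that the same card component adapts on its own wherever it is placed, which is exactly the component-level responsiveness designers try to prototype in Figma.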

She also shared a German fairy tale about a race between a hedgehog and a rabbit. The hedgehog wins the race even though he is slower: since he is smarter, he sends his wife (who looks exactly like him) to the finish line in advance. Christine told us that she had mixed feelings about this story because she didn’t like the idea of pretending to be fast when someone has other great skills. In her analogy, the rabbits are the developers, and the hedgehogs are the designers. Her lesson was that we should embrace each other’s tools and skills instead of trying to mimic each other’s work.

The lesson of the talk was not really about pushing the limits. Rather, the talk was about reminding us of why we are doing all this:

  • To communicate our design decisions better to the developers,
  • To try out how our design behaves in different cases (e.g., where it should break and how), and
  • It is also great for documentation purposes; she recommended the EightShapes Specs plugin by Nathan Curtis.

Her advice is:

  • We should create a playground inside Figma and try out how our components and designs work (and let developers try out our demo, too);
  • Have many discussions with developers, and don’t start these discussions from zero, e.g., read a bit about frontend development and have a fundamental knowledge of development aspects.

“It’s A Marathon, And A Sprint” by Fabricio Teixeira

If you are a design professional, you have surely encountered at least a couple of articles published by the UX Collective, a very impactful design publication. Fabricio is one of the founders of that awesome corner of the Internet, so I knew that his talk would be full of insights and little details. He shared four case studies and included a lot of great advice.

During his talk, Fabricio used the analogy of running. When we prepare for a long-distance running competition, 80% of the time should be spent on easy runs, and 20% should be devoted to intensive, short interval runs, which get the best results. He also highlighted that, just like during a marathon, things will get hard during our product design projects, but we must remember how much we trained. When someone from the audience asked how not to get overly confident, he said that we should build an environment of trust so that other people on our team can make us realize if we have become too confident.

He then mentioned four case studies; all of these projects required a different, unique approach and design process:

  • Product requirements are not required.
    Vistaprint and designing face masks — the world needed them to design really fast; it was a 15-day sprint, and they did not have time to design all the color and sizing selectors (and only after the launch did it turn into a marathon).

  • Timelines aren’t straight lines.
    The case study of Equinox treadmill UI: they created a fake treadmill to prototype the experience; they didn’t wait for the hardware to get completed (the hardware got delayed due to manufacturing issues), so there was no delay in the project even in the face of uncertainty and ambiguity. For example, they took into account the hand reach zones, increased the spacing between UI elements so that these remained usable even while the user was running, and so on.

It was an exciting challenge: the average treadmill interface is a complicated dashboard where everything fights for our attention.

  • Research is a mindset, not a step.
    He mentioned the GoFundMe project, where they applied a fluid approach to research, meaning that design and research ran in parallel: design informed research and vice versa. Also, insights can come from anyone on the team, not just from researchers. I really liked that they started a book club where everyone read a book about social impact, and they created a Figma file that served as a knowledge hub.

  • Be ready for some math
    During the New York City Transit project, they created a real-time map of the subway system, which required them to create a lot of vectors and do some math. One of the main design challenges was, “How to clean up complexity?”

Fabricio shared that we should be “flexibly rigorous”: just as runners should listen to their bodies, we should listen to the specific context of a given project. There is no magic formula out there. Rigor and discipline are important, but we must keep listening so that we don’t lose touch with reality.

The key takeaway: because we as a design community focus a lot on processes, and there is of course no one way to do design, we should combine sprints and marathons, adjust our approach to the needs of the given project, and, most of all, focus more on principles, e.g., how do we, as a team, want to work together?

A last note: in the post-talk discussion with Vitaly Friedman, Fabricio mentioned that a 1–3-hour kick-off meeting is too short for something a team will work on for, say, six months, so his team introduced kick-off weeks.

Kat delivered one of the most important talks (or maybe the most important talk) of the conference. The ethics of design is a topic that has been around for many years now. Delivering a talk like this is challenging because it requires a perspective that easily gets lost in our everyday design work. I was really curious about how Kat would make us think and have us question our way of working.

“Design Ethically: From Imperative To Action” by Kat Zhou

Kat’s talk walked us through our current reality: how algorithms have built-in biases, manipulate users, hide content that shouldn’t be hidden, and fail to block things that shouldn’t be allowed. The main question, however, is:

Why is that happening? Why do designers create such experiences?

Kat’s answer is that companies must ruthlessly design for growth. And we, as designers, have the power to exercise control over others.

She showed us some examples of what she considers oppressive design, like the Panopticon by Jeremy Bentham. She also provided an example of hostile architecture (whose goal is to prevent humans from resting in public places). There are also dark patterns within digital experiences, such as the New York Times subscription cancellation flow (users had to make a phone call to cancel).

And the end goal of oppressive design is always to get more user data, more users’ time, and more of the users’ money. What amplifies this effect is that from an employee’s (designer’s) perspective, the performance is tied to achieving OKRs.

Our challenge is how we might redesign the design process so that it doesn’t perpetuate the existing systems of power. Kat’s suggestion is that we should add some new parts to the design process:

  • There are two phases:
    Intent: “Is this problem a worthy problem to solve?”
    Results: “What consequences do our solutions have? Who is it helping? Who is it harming?”
  • Add “Evaluate”:
    “Is the problem statement we defined even ethically worthy of being addressed?”
  • Add “Forecast”:
    “Can any ethical violations occur if we implement this idea?”
  • Add “Monitor”:
    “Are there any new ethical issues occurring? How can we design around them?”

Kat shared a toolkit and framework that help us understand the consequences of the things we are building.

Kat talked about forecasting in more detail. As she said,

“Forecasted consequences often are design problems.”

Our responsibility is to design around those forecasted consequences. We can pull a product apart by thinking about the layers of effect:

  • The primary layer of effect is intended and known, e.g.: Google Search is intended and known as a search engine.
  • The secondary effect is also known, and intended by the team, e.g. Google Search is an ad revenue generator.
  • The tertiary effect: typically unintended, possibly known, e.g., in Algorithms of Oppression, Safiya Umoja Noble discusses the biases built into Google Search.

So designers should define and design ethical primary and secondary effects, forecast tertiary effects, and ensure that they don’t pose any significant harm.

I first encountered atomic design in 2015, and I remember that I was so fascinated by the clear logical structure behind this mental model. Brad is one of my design heroes because I really admire all the work he has done for the design community. I knew that behind the “clickbait title” (Brad said it himself), there’ll be some great points. And I was right: he mentioned some ideas I have been thinking about since his talk.

“Is Atomic Design Dead?” by Brad Frost

In the first part of the talk, Brad gave us a little WWW history starting from the first website all the way to web components. Then he summarized that design systems inform and influence products and vice versa.

I really liked that he listed three problematic cases:

  • When the design system team is very separated, sitting in their ivory tower.
  • When the design system police put everyone in the design system jail for detaching an instance.
  • When the product roadmaps eat the design system efforts.

He then summarized the foundations of atomic design (atoms, molecules, organisms, templates and pages) and gave a nice example using Instagram.

He answered the question asked in the title of the talk: atomic design is not dead, since it is still a useful mental model for thinking about user interfaces, and it helps teams find a balance and equilibrium between design systems and products.

And then here came the most interesting and thought-provoking part: where do we go from here?

  1. What if we don’t waste any more human potential on designing yet another date picker, but instead, we create a global design system together, collaboratively? It’d be an unstyled component that we can style for ourselves.

  2. The other topic he brought up was the use of AI, and he mentioned Luke Wroblewski’s talk, too. He also talked about the project he is working on with Kevin Coyle: converting a codebase (and its documentation) into a format that GPT-4 can understand. Brad showed us a demo of creating an alert component using ChatGPT (and this limited corpus).

His main point was that since the “genie” is out of the bottle, it is on us to use AI more responsibly. Brad closed his talk by highlighting the importance of using human potential and time for better causes than designing one more date picker.

Mystery Theme/Other Highlights

When Vitaly first got on stage, one of the things he asked the audience to keep an eye out for was an overarching mystery theme that connects all the talks. At the end of the conference, he finally revealed the answer: the theme was connected to the city of Antwerp!

Where does the name "Antwerp" come from? “Hand werpen” or “to throw a hand”. Once upon a time, there was a giant that collected money from everyone passing the river. One time, a soldier came and just cut off the hand of this giant and threw it to the other side, liberating the city. So, the story and the theme were “legends.” For instance, Molly Hellmuth included Bigfoot (Sasquatch), Stéphanie mentioned Prometheus, Nick added the word "myth" to one of his slides, Oliver applied a typeface usually used in fairy tales, Christine mentioned Sisyphus and Kat talked about Pandora’s box.

My Very Own Avatar

One more awesome thing that happened thanks to attending this conference: I got a great surprise from the Smashing team! I won the hidden “Best Sketch Notes” challenge and was gifted a personalized avatar created by Smashing Magazine’s illustrator, Ricardo.

Full Agenda

There were other great talks — I’ll be sure to watch the recordings! For anyone asking, here is the full agenda of the conference.

A huge thanks again to all of the organizers! You can check out all the current and upcoming Smashing conferences planned on the SmashingConf website anytime.

Saving The Best For Last: Photos And Recordings

The one-and-only Marc Thiele captured in-person vibes at the event — you can see the stunning, historic Bourla venue it took place in and how memorable it all must have been for the attendees! 🧡

For those who couldn’t make it in person and are curious to watch the talks, well, I have good news for you! The recordings have been recently published — you can watch them over here:


Thank you for reading! I hope you enjoyed reading this as much as I did writing it! See you at the next design & UX SmashingConf in Antwerp, maybe?

Remote Video Security Surveillance

In the rapidly evolving landscape of security technologies, remote video surveillance has emerged as a powerful tool to protect homes, businesses, and public spaces. Leveraging the advancements in camera technology, connectivity, and artificial intelligence, remote video surveillance provides a proactive approach to security, allowing real-time monitoring and response. This article explores the key components, benefits, and challenges of remote video security surveillance.

Key Components of Remote Video Surveillance

  1. High-resolution cameras: Remote video surveillance begins with the deployment of high-resolution cameras strategically positioned to cover critical areas. These cameras capture clear and detailed footage, ensuring that any potential threats or incidents are recorded with precision.
  2. Connectivity and network infrastructure: A robust network infrastructure is crucial for remote video surveillance. High-speed internet connections and reliable data transmission ensure that live video feeds can be accessed remotely without latency issues. Cloud-based solutions further enhance accessibility and scalability.
  3. Cloud storage and analytics: Cloud storage facilitates the secure storage of video footage, allowing for easy retrieval and analysis. Additionally, advanced analytics powered by artificial intelligence can be applied to identify patterns, anomalies, and potential security threats in real time.
  4. Remote monitoring platforms: Security personnel can access live video feeds and recorded footage through dedicated remote monitoring platforms. These platforms often offer user-friendly interfaces, allowing users to manage multiple cameras, customize alert settings, and respond promptly to security incidents.

Benefits of Remote Video Surveillance

  1. Real-time monitoring: One of the primary advantages of remote video surveillance is the ability to monitor live video feeds in real time. This allows security personnel to detect and respond to incidents as they unfold, mitigating potential risks.
  2. Cost-effective security: Remote video surveillance can be a cost-effective alternative to on-site security personnel. Cameras can cover large areas, and the ability to remotely monitor multiple locations from a centralized control center reduces the need for extensive physical security infrastructure.
  3. Deterrence and prevention: Visible surveillance cameras act as a deterrent, discouraging potential criminals from engaging in illegal activities. The knowledge that an area is under constant video scrutiny can prevent incidents before they occur.
  4. Scalability and flexibility: Remote video surveillance systems are highly scalable, allowing for easy expansion as the security needs of a location evolve. Whether securing a small business or a large industrial complex, the system can adapt to varying requirements.

Challenges and Considerations

  1. Privacy concerns: The widespread deployment of surveillance cameras raises privacy concerns. Striking a balance between enhanced security and individual privacy rights requires thoughtful consideration and adherence to regulations.
  2. Cybersecurity risks: As remote video surveillance systems rely on digital networks and cloud storage, they are susceptible to cybersecurity threats. Implementing robust security measures, including encryption and regular system updates, is essential to mitigate these risks.
  3. Integration with existing systems: Integrating remote video surveillance with existing security systems, access control, and emergency response protocols requires careful planning. Seamless integration ensures a comprehensive and cohesive security infrastructure.

Conclusion

Remote video surveillance has revolutionized the way we approach security, offering real-time monitoring, cost-effective solutions, and scalability. As technology continues to advance, the integration of artificial intelligence, improved analytics, and enhanced cybersecurity measures will further strengthen the effectiveness of remote video surveillance systems. By addressing privacy concerns and diligently managing potential challenges, businesses and individuals can harness the power of this technology to create safer environments.

A High-Level Overview Of Large Language Model Concepts, Use Cases, And Tools

Even though a simple online search turns up countless tutorials on using Artificial Intelligence (AI) for everything from generative art to making technical documentation easier to use, there’s still plenty of mystery around it. What goes inside an AI-powered tool like ChatGPT? How does Notion’s AI feature know how to summarize an article for me on the fly? Or how are a bunch of sites suddenly popping up that can aggregate news and auto-publish a slew of “new” articles from it?

It all can seem like a black box of mysterious, arcane technology that requires an advanced computer science degree to understand. What I want to show you, though, is how we can peek inside that box and see how everything is wired up.

Specifically, this article is about large language models (LLMs) and how they “imbue” AI-powered tools with intelligence for answering queries in diverse contexts. I have previously written tutorials on how to use an LLM to transcribe and evaluate the expressed sentiment of audio files. But I want to take a step back and look at another way around it that better demonstrates — and visualizes — how data flows through an AI-powered tool.

We will discuss LLM use cases, look at several new tools that abstract the process of modeling AI with LLM with visual workflows, and get our hands on one of them to see how it all works.

Large Language Models Overview

Setting technical terms aside, LLMs are models trained on vast sets of text data. When we integrate an LLM into an AI system, we enable the system to leverage the language knowledge and capabilities the LLM developed through its own training. You might think of it as dumping a lifetime of knowledge into an empty brain, assigning that brain to a job, and putting it to work.

“Knowledge” is a slippery term, as it can be subjective and qualitative. We sometimes describe people as “book smart” or “street smart,” both types of knowledge that are useful in different contexts. Artificial “intelligence” is built on the same idea: AI is fed data, and that data is what it uses to frame its understanding of the world, whether it is text data for “speaking” back to us or visual data for generating “art” on demand.

Use Cases

As you may imagine (or have already experienced), the use cases of LLMs in AI are many and along a wide spectrum. And we’re only in the early days of figuring out what to make with LLMs and how to use them in our work. A few of the most common use cases include the following.

  • Chatbot
    LLMs play a crucial role in building chatbots for customer support, troubleshooting, and interactions, thereby ensuring smooth communications with users and delivering valuable assistance. Salesforce is a good example of a company offering this sort of service.
  • Sentiment Analysis
    LLMs can analyze text for emotions. Organizations use this to collect data, summarize feedback, and quickly identify areas for improvement. Grammarly’s “tone detector” is one such example, where AI is used to evaluate sentiment conveyed in content.
  • Content Moderation
    Content moderation is an important aspect of social media platforms, and LLMs come in handy. They can spot and remove offensive content, including hate speech, harassment, or inappropriate photos and videos, which is exactly what HubSpot’s AI-powered content moderation feature does.
  • Translation
    Thanks to impressive advancements in language models, translation has become highly accurate. One noteworthy example is Meta AI’s latest model, SeamlessM4T, which represents a big step forward in speech-to-speech and speech-to-text technology.
  • Email Filters
    LLMs can be used to automatically detect and block unwanted spam messages, keeping your inbox clean. When trained on large datasets of known spam emails, the models learn to identify suspicious links, phrases, and sender details. This allows them to distinguish legitimate messages from those trying to scam users or market illegal or fraudulent goods and services. Google has offered AI-based spam protection since 2019.
  • Writing Assistance
    Grammarly is the ultimate example of an AI-powered service that uses an LLM to “learn” how you write in order to make writing suggestions. But this extends to other services as well, including Gmail’s “Smart Reply” feature. The same is true of Notion’s AI feature, which is capable of summarizing a page of content or meeting notes. Hemingway’s app recently shipped a beta AI integration that corrects writing on the spot.
  • Code and Development
    This is the one that has many developers worried about AI coming after their jobs. It hit the commercial mainstream with GitHub Copilot, a service that performs automatic code completion. Same with Amazon’s CodeWhisperer. Then again, AI can be used to help sharpen development skills, which is the case of MDN’s AI Help feature.

Again, these are still the early days of LLM. We’re already beginning to see language models integrated into our lives, whether it’s in our writing, email, or customer service, among many other services that seem to pop up every week. This is an evolving space.

Types Of Models

There are all kinds of AI models tailored for different applications. You can scroll through Sapling’s large list of the most prominent commercial and open-source LLMs to get an idea of all the diverse models that are available and what they are used for. Each model is the context in which AI views the world.

Let’s look at some real-world examples of how LLMs are used for different use cases.

Natural Conversation
Chatbots need to master the art of conversation. Models like Anthropic’s Claude are trained on massive collections of conversational data to chat naturally on any topic. As a developer, you can tap into Claude’s conversational skills through an API to create interactive assistants.

Emotions
Developers can leverage powerful pre-trained models like Falcon for sentiment analysis. By fine-tuning Falcon on datasets with emotional labels, it can learn to accurately detect the sentiment in any text provided.

Translation
Meta AI released SeamlessM4T, an LLM trained on huge translated speech and text datasets. This multilingual model is groundbreaking because it translates speech from one language into another without an intermediary step between input and output. In other words, SeamlessM4T enables real-time voice conversations across languages.

Content Moderation
As a developer, you can integrate powerful moderation capabilities using OpenAI’s API, which includes an LLM trained thoroughly on flagging toxic content for the purpose of community moderation.

Spam Filtering
Some LLMs are used to develop AI programs capable of text classification tasks, such as spotting spam emails. As an email user, the simple act of flagging certain messages as spam further informs AI about what constitutes an unwanted email. After seeing plenty of examples, AI is capable of establishing patterns that allow it to block spam before it hits the inbox.
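Real spam filters rely on models trained on huge corpora of flagged mail, but the underlying idea of text classification — scoring a message against patterns learned from labeled examples — can be sketched with a toy keyword-weight classifier. This is purely illustrative; the keywords and weights below are made up and bear no relation to how a production LLM-based filter works internally.

```javascript
// Toy text classifier: score a message against weighted patterns,
// the way a trained model scores text against learned features.
const spamWeights = {
  "free": 2,
  "winner": 3,
  "click here": 3,
  "unsubscribe": 1,
};

function spamScore(message) {
  const text = message.toLowerCase();
  let score = 0;
  for (const [pattern, weight] of Object.entries(spamWeights)) {
    if (text.includes(pattern)) score += weight;
  }
  return score;
}

// Flag anything scoring above a chosen threshold.
const isSpam = (message) => spamScore(message) >= 3;
```

A learned model does the same thing at scale: instead of a handful of hand-picked keywords, it weighs thousands of features extracted automatically from examples users have flagged.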

Not All Language Models Are Large

While we’re on the topic, it’s worth mentioning that not all language models are “large.” There are plenty of models with smaller sets of data that may not go as deep as ChatGPT 4 or 5 but are well-suited for personal or niche applications.

For example, check out the chat feature that Luke Wroblewski added to his site. He’s using a smaller language model, so the app at least knows how to form sentences, but is primarily trained on Luke’s archive of blog posts. Typing a prompt into the chat returns responses that read very much like Luke’s writings. Better yet, Luke’s virtual persona will admit when a topic is outside of the scope of its knowledge. An LLM would provide the assistant with too much general information and would likely try to answer any question, regardless of scope. Members from the University of Edinburgh and the Allen Institute for AI published a paper in January 2023 (PDF) that advocates the use of specialized language models for the purpose of more narrowly targeted tasks.

Low-Code Tools For LLM Development

So far, we’ve covered what an LLM is, common examples of how it can be used, and how different models influence the AI tools that integrate them. Let’s discuss that last bit about integration.

Many technologies come with a steep learning curve. That’s especially true of emerging tools that introduce new technical concepts, as I would argue is the case with AI in general. While AI is not a new term and has been studied and developed in various forms over decades, its entrance into the mainstream is certainly new and is what sparks the current buzz. The front-end development community is no exception, and many of us are scrambling to wrap our minds around it.

Thankfully, new resources can help abstract all of this for us. They can power an AI project you might be working on, but more importantly, they are useful for learning the concepts of LLM by removing advanced technical barriers. You might think of them as “low” and “no” code tools, like WordPress.com vs. self-hosted WordPress or a visual React editor that is integrated with your IDE.

Low-code platforms make it easier to leverage large language models without needing to handle all the coding and infrastructure yourself. Here are some top options:

Chainlit

Chainlit is an open-source Python package that is capable of building a ChatGPT-style interface using a visual editor.

LLMStack

LLMStack is another low-code platform for building AI apps and chatbots by leveraging large language models. Multiple models can be chained together into “pipelines” for channeling data. LLMStack supports standalone app development but also provides hosting that can be used to integrate an app into sites and products via API or connected to platforms like Slack or Discord.

LLMStack is also what powers Promptly, a cloud version of the app with freemium subscription pricing that includes a free tier.

FlowiseAI

FlowiseAI is the open-source, low-code tool we will get hands-on with later in this article to build a working example.

Stack AI

Stack AI is another no-code offering for developing AI apps integrated with LLMs. It is much like FlowiseAI, particularly the drag-and-drop interface that visualizes connections between apps and APIs. One thing I particularly like about Stack AI is how it incorporates “data loaders” to fetch data from other platforms, like Slack or a Notion database.

I also like that Stack AI provides a wider range of LLM offerings. That said, it will cost you. While Stack AI offers a free pricing tier, it is restricted to a single project with only 100 runs per month. Bumping up to the first paid tier will set you back $199 per month, which I suppose is used toward the costs of accessing a wider range of LLM sources. For example, FlowiseAI works with any LLM in the Hugging Face community. So does Stack AI, but it also gives you access to commercial LLM offerings, like Anthropic’s Claude models and Google’s PaLM, as well as additional open-source offerings from Replicate.

Voiceflow

Install FlowiseAI

First things first, we need to get FlowiseAI up and running. FlowiseAI is an open-source application that can be installed from the command line.

You can install it with the following command:

npm install -g flowise

Once installed, start up Flowise with this command:

npx flowise start

From here, you can access FlowiseAI in your browser at localhost:3000.

It’s possible to serve FlowiseAI so that you can access it online and provide access to others, which is well-covered in the documentation.

Setting Up Retrievers

Retrievers are templates that the multi-prompt chain will query.

Different retrievers provide different templates that query different things. In this case, we want to select the Prompt Retriever because it is designed to retrieve documents like PDF, TXT, and CSV files. Unlike other types of retrievers, the Prompt Retriever does not actually need to store those documents; it only needs to fetch them.

Let’s take the first step toward creating our career assistant by adding a Prompt Retriever to the FlowiseAI canvas. The “canvas” is the visual editing interface we’re using to cobble the app’s components together and see how everything connects.

Adding the Prompt Retriever requires us to first navigate to the Chatflow screen, which is actually the initial page when first accessing FlowiseAI following installation. Click the “Add New” button located in the top-right corner of the page. This opens up the canvas, which is initially empty.

The “Plus” (+) button is what we want to click to open up the library of items we can add to the canvas. Expand the Retrievers tab, then drag and drop the Prompt Retriever to the canvas.

The Prompt Retriever takes three inputs:

  1. Name: The name of the stored prompt;
  2. Description: A brief description of the prompt (i.e., its purpose);
  3. Prompt system message: The initial prompt message that provides context and instructions to the system.

Our career assistant will provide career suggestions, tool recommendations, salary information, and cities with matching jobs. We can start by configuring the Prompt Retriever for career suggestions. Here is placeholder content you can use if you are following along:

  • Name: Career Suggestion;
  • Description: Suggests careers based on skills and experience;
  • Prompt system message: You are a career advisor who helps users identify a career direction and upskilling opportunities. Be clear and concise in your recommendations.

Be sure to repeat this step three more times to create each of the following:

  • Tool recommendations,
  • Salary information,
  • Locations.
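If it helps to see all four retrievers side by side, here they are sketched as plain objects. The field names mirror the three inputs of the FlowiseAI Prompt Retriever node; the wording of every entry beyond “Career Suggestion” is placeholder content of my own, not values prescribed by FlowiseAI.

```javascript
// The four Prompt Retriever configurations as plain objects.
// Only "Career Suggestion" matches the article's placeholder text;
// the other three system messages are illustrative assumptions.
const promptRetrievers = [
  {
    name: "Career Suggestion",
    description: "Suggests careers based on skills and experience",
    systemMessage:
      "You are a career advisor who helps users identify a career direction and upskilling opportunities. Be clear and concise in your recommendations.",
  },
  {
    name: "Tool Recommendation",
    description: "Recommends tools worth learning for a chosen career",
    systemMessage:
      "You suggest software and tools worth learning for the user's chosen career path.",
  },
  {
    name: "Salary Information",
    description: "Provides typical salary ranges for a career",
    systemMessage:
      "You provide typical salary ranges for the careers the user asks about.",
  },
  {
    name: "Locations",
    description: "Lists cities with matching job openings",
    systemMessage:
      "You list cities and regions where jobs matching the user's career interest are common.",
  },
];
```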

Adding A Multi-Prompt Chain

A Multi-Prompt Chain is a class that consists of two or more prompts that are connected together to establish a conversation-like interaction between the user and the career assistant.

The idea is that we combine the four prompts we’ve already added to the canvas and connect them to the proper tools (i.e., chat models) so that the career assistant can prompt the user for information and collect that information in order to process it and return the generated career advice. It’s sort of like a normal system prompt but with a conversational interaction.

The Multi-Prompt Chain node can be found in the “Chains” section of the same inserter we used to place the Prompt Retriever on the canvas.

Once the Multi-Prompt Chain node is added to the canvas, connect it to the prompt retrievers. This enables the chain to receive user responses and employ the most appropriate language model to generate responses.

To connect, click the tiny dot next to the “Prompt Retriever” label on the Multi-Prompt Chain and drag it to the “Prompt Retriever” dot on each Prompt Retriever to draw a line between the chain and each prompt retriever.
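Conceptually, the chain’s job is routing: for each user message, pick the retriever whose description is the best match. In a real Multi-Prompt Chain, the language model itself makes that choice; the sketch below fakes that step with simple keyword matching just to make the data flow visible. The keywords are my own invention, not anything FlowiseAI uses.

```javascript
// Toy router standing in for the Multi-Prompt Chain's routing step.
// A real chain asks the LLM which retriever description fits best;
// here we approximate that decision with keyword matching.
const routes = [
  { name: "Salary Information", keywords: ["salary", "pay", "earn"] },
  { name: "Locations", keywords: ["cities", "city", "where", "location"] },
  { name: "Tool Recommendation", keywords: ["tool", "software", "learn"] },
];

function route(userMessage) {
  const text = userMessage.toLowerCase();
  for (const r of routes) {
    if (r.keywords.some((k) => text.includes(k))) return r.name;
  }
  return "Career Suggestion"; // default retriever when nothing matches
}
```

Once routed, the chosen retriever’s system message is what frames the language model’s reply.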

Integrating Chat Models

This is where we start interacting with LLMs. In this case, we will integrate Anthropic’s Claude chat model. Claude is a powerful LLM designed for tasks related to complex reasoning, creativity, thoughtful dialogue, coding, and detailed content creation. You can get a feel for Claude by registering for access to interact with it, similar to how you’ve played around with OpenAI’s ChatGPT.

From the inserter, open “Chat Models” and drag the ChatAnthropic option onto the canvas.

Once the ChatAnthropic chat model has been added to the canvas, connect its node to the Multi-Prompt Chain’s “Language Model” node to establish a connection.

It’s worth noting at this point that Claude requires an API key in order to access it. Sign up on the Anthropic website to create a new API key. Once you have an API key, provide it to the Multi-Prompt Chain in the “Connect Credential” field.

Adding A Conversational Agent

The Agent component in FlowiseAI allows our assistant to do more tasks, like accessing the internet and sending emails.

It connects external services and APIs, making the assistant more versatile. For this project, we will use a Conversational Agent, which can be found in the inserter under “Agent” components.

Once the Conversational Agent has been added to the canvas, connect it to the Chat Model to “train” the model on how to respond to user queries.

Integrating Web Search Capabilities

The Conversational Agent requires additional tools and memory. For example, we want to enable the assistant to perform Google searches to obtain information it can use to generate career advice. The Serp API node can do that for us and is located under “Tools” in the inserter.

Like Claude, Serp API requires an API key to be added to the node. Register with the Serp API site to create an API key. Once the API is configured, connect Serp API to the Conversational Agent’s “Allowed Tools” node.

Building In Memory

The Memory component enables the career assistant to retain conversation information.

This way, the app remembers the conversation and can reference it during the interaction or even to inform future interactions.

There are different types of memory, of course. Several of the options in FlowiseAI require additional configurations, so for the sake of simplicity, we are going to add the Buffer Memory node to the canvas. It is the most general type of memory provided by LangChain, taking the raw input of the past conversation and storing it in a history parameter for reference.

Buffer Memory connects to the Conversational Agent’s “Memory” node.
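What buffer memory does is conceptually simple: keep the raw back-and-forth and hand it to the model as a single history string on each turn. LangChain’s BufferMemory is more involved than this, but a minimal sketch of the concept looks like the following.

```javascript
// Minimal sketch of buffer-style memory: store every turn verbatim
// and replay the whole history as context for the next response.
class BufferMemory {
  constructor() {
    this.messages = [];
  }
  add(role, content) {
    this.messages.push({ role, content });
  }
  history() {
    return this.messages.map((m) => `${m.role}: ${m.content}`).join("\n");
  }
}

const memory = new BufferMemory();
memory.add("user", "I want a career in design.");
memory.add("assistant", "Great! What skills do you already have?");
```

Because the full transcript rides along with every request, the assistant can refer back to earlier answers; the trade-off is that the context grows with every turn, which is why other memory types (summaries, windows) exist.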

The Final Workflow

At this point, our workflow looks something like this:

  • Four prompt retrievers that provide the prompt templates for the app to converse with the user.
  • A multi-prompt chain connected to each of the four prompt retrievers that chooses the appropriate tools and language models based on the user interaction.
  • The Claude language model connected to the multi-prompt chain to “train” the app.
  • A conversational agent connected to the Claude language model to allow the app to perform additional tasks, such as Google web searches.
  • Serp API connected to the conversational agent to perform bespoke web searches.
  • Buffer memory connected to the conversational agent to store, i.e., “remember,” conversations.

If you haven’t done so already, this is a great time to save the project and give it a name like “Career Assistant.”
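A saved chatflow isn’t only usable from the canvas: FlowiseAI exposes each chatflow over HTTP at /api/v1/prediction/<chatflow-id>, per its documentation. The helper below just assembles that request; the chatflow ID is a placeholder you would copy from your own Chatflow screen, and actually sending it requires your local Flowise instance to be running.

```javascript
// Build a request for FlowiseAI's prediction endpoint.
// "chatflowId" is a placeholder — use the ID of your saved chatflow.
function buildPredictionRequest(chatflowId, question, host = "http://localhost:3000") {
  return {
    url: `${host}/api/v1/prediction/${chatflowId}`,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ question }),
    },
  };
}

// Usage sketch (needs a running Flowise instance):
// const { url, options } = buildPredictionRequest(
//   "your-chatflow-id",
//   "Suggest a career for someone who loves writing."
// );
// const reply = await fetch(url, options).then((r) => r.json());
```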

Final Demo

Watch the following video for a quick demonstration of the final workflow we created together in FlowiseAI. The prompts lag a little bit, but you should get the idea of how all of the components we connected are working together to provide responses.

Conclusion

As we wrap up this article, I hope that you’re more familiar with the concepts, use cases, and tools of large language models. LLMs are a key component of AI because they are the “brains” of the application, providing the lens through which the app understands how to interact with and respond to human input.

We looked at a wide variety of use cases for LLMs in an AI context, from chatbots and language translations to writing assistance and summarizing large blocks of text. Then, we demonstrated how LLMs fit into an AI application by using FlowiseAI to create a visual workflow. That workflow not only provided a visual of how an LLM, like Claude, informs a conversation but also how it relies on additional tools, such as APIs, for performing tasks as well as memory for storing conversations.

The career assistant tool we developed together in FlowiseAI was a detailed visual look inside the black box of AI, providing us with a map of the components that feed the app and how they all work together.

Now that you know the role that LLMs play in AI, what sort of models would you use? Is there a particular app idea you have where a specific language model would be used to train it?


WaterBear: Building A Free Platform For Impactful Documentaries (Part 2)

In my previous article, I talked about WaterBear, a significant project I worked on as a newly appointed lead developer, and the lessons I learned leading a team for the first time. In this second article, I’ll go over some key technical highlights from the project. Before we start, let’s quickly remind ourselves what WaterBear is all about and what makes it so interesting.

WaterBear is a free platform bringing together inspiration and action with award-winning high-production environmental documentaries covering various topics, from animals and climate change to people and communities. The WaterBear team produces their own original films and documentaries and hosts curated films and content from various high-profile partners, including award-winning filmmakers, large brands, and significant non-governmental organizations (NGOs), like Greenpeace, WWF, The Jane Goodall Institute, Ellen MacArthur Foundation, Nikon, and many others.

For context, I am currently working at a software development company called Q Agency based in Zagreb, Croatia. We collaborated with WaterBear and its partner companies to build a revamped and redesigned version of WaterBear’s web and mobile app from the ground up using modern front-end technologies.

In the first article, I briefly discussed the technical stack that includes a React-based front-end framework, Next.js for the web app, Sanity CMS, Firebase Auth, and Firestore database. Definitely read up on the strategy and reasoning behind this stack in the first article if you missed it.

Now, let’s dive into the technical features and best practices that my team adopted in the process of building the WaterBear web app. I plan on sharing specifically what I learned from performance and accessibility practices as a first-time lead developer of a team, as well as what I wish I had known before we started.

Image Optimization

Images are content in their own right, and they are a very important and prominent part of the WaterBear app’s experience, from video posters and category banners to partner logos and campaign image assets.

If you are reading this article, you likely know the tightrope walk we front-enders do between striking, immersive imagery and performant user experiences. Some of you may have even grimaced at the heavy use of images in that last screenshot. My team measured the impact, noting that on the first load, this video category page serves up as many as 14 images. Digging a little deeper, we saw those images account for approximately 85% of the total page size.

That’s not insignificant and demands attention. WaterBear’s product is visual in nature, so it’s understandable that images are going to play a large role in its web app experience. Even so, 85% of the experience feels heavy-handed.

So, my team knew early on that we would be leveraging as many image optimization techniques as we could to improve how quickly the page loads. If you want to know everything there is to know about optimizing images, I wholeheartedly recommend Addy Osmani’s Image Optimization for a treasure trove of insightful advice, tips, and best practices that helped us improve WaterBear’s performance.

Here is how we tackled the challenge.

Using CDN For Caching And WebP For Lighter File Sizes

As I mentioned a little earlier, our stack includes Sanity’s CMS. It offers a robust content delivery network (CDN) out of the box, which serves two purposes: (1) optimizing image assets and (2) caching them. Members of the WaterBear team are able to upload unoptimized high-quality image assets to Sanity, which ports them to the CDN, and from there, we instruct the CDN to run appropriate optimizations on those images — things like compressing the files to their smallest size without impacting the visual experience, then caching them so that a user doesn’t have to download the image all over again on subsequent views.

Requesting the optimized version of the images in Sanity boils down to adding query variables to image links like this:

https://cdn.sanity.io/.../image.jpg?w=1280&q=70&auto=format

Let’s break down the query variables:

  • w sets the width of the image. In the example above, we have set the width to 1280px in the query.
  • q sets the compression quality of the image. We landed on 70% to balance the need for visual quality with the need for optimized file sizes.
  • format sets the image format, which is set to auto, allowing Sanity to determine the best type of image format to use based on the user’s browser capabilities.
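In our Next.js code, it’s convenient to build those query strings with a small helper rather than hand-writing URLs. This is a sketch of that idea (the function name and defaults are mine, not Sanity’s — in practice, Sanity’s official @sanity/image-url builder does this job); the w, q, and auto parameters are the documented Sanity image API options described above.

```javascript
// Append Sanity's documented image-API parameters (w, q, auto=format)
// to a base CDN URL. Helper name and defaults are illustrative.
function optimizedImageUrl(baseUrl, { width = 1280, quality = 70 } = {}) {
  const url = new URL(baseUrl);
  url.searchParams.set("w", String(width));
  url.searchParams.set("q", String(quality));
  url.searchParams.set("auto", "format"); // let Sanity pick WebP where supported
  return url.toString();
}
```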

Notice how all of that comes from a URL that is mapped to the CDN to fetch a JPG file. It’s pretty magical how a completely unoptimized image file can be transformed into a fully optimized version that serves as a completely different file with the use of a few parameters.

In many cases, the format will be returned as a WebP file. We made sure to use WebP because it yields significant savings in terms of file size. Remember that unoptimized 1.2 MB image from earlier? It’s a mere 146 KB after the optimizations.

And all 14 image requests are smaller than that one unoptimized image!

The fact that images still account for 85% of the page weight is a testament to just how heavy of a page we are talking about.

Another thing we have to consider when talking about modern image formats is browser support. Although WebP is widely supported and has been a staple for some time now, my team decided to provide an optimized fallback JPG just in case. And again, Sanity automatically detects the user’s browser capabilities. This way, we serve the WebP version only if Sanity knows the browser supports it and only provide the optimized fallback file if WebP support isn’t there. It’s great that we don’t have to make that decision ourselves!

Have you heard of AVIF? It’s another modern image format that promises even greater savings than WebP. If I’m being honest, I would have preferred to use it in this project, but Sanity unfortunately does not support it, at least not at the time of writing. There’s a long-running ticket to add support, and I’m holding out hope that we get it.

Would we have gone a different route had we known about the lack of AVIF support earlier? Cloudinary supports it, for example. I don’t think so, though. Sanity’s tightly coupled CDN integration is too great of a developer benefit, and, as I said, I’m hopeful Sanity will add that support in the future. But that is certainly the sort of consideration I wish I had made early on, and now I have it in my back pocket for future projects.

Tackling The Largest Contentful Paint (LCP)

LCP measures how long it takes the largest element in the initial viewport to render. It’s worth optimizing because it shapes the first impression a user has of the page: that element ought to load as soon as possible, while everything below it can wait a moment.

For us, images are most definitely part of the LCP. By giving more consideration to the banner images we load at the top of the page, we can serve that component a little faster for a better experience. There are a couple of modern image attributes that can help here: loading and fetchpriority.

We used an eager loading strategy paired with a high fetchpriority on the images. This provides the browser with a couple of hints that this image is super important and that we want it early in the loading process.

<!-- Above-the-fold Large Contentful Paint image -->
<img
  loading="eager"
  fetchpriority="high"
  alt="..."
  src="..."
  width="1280"
  height="720"
  class="..."
/>

We also made use of preloading in the document <head>, indicating to the browser that we want to preload images during page load, again, with high priority, using Next.js image preload options.

<head>
  <link
    rel="preload"
    as="image"
    href="..."
    fetchpriority="high"
  />
</head>

Images that are “below the fold” can be de-prioritized and downloaded only when the user actually needs them. Lazy loading is a common technique that instructs the browser to load particular images once they enter the viewport. It’s only fairly recently that it became a feature baked directly into HTML with the loading attribute:

<!-- Below-the-fold, low-priority image -->
<img
  decoding="async"
  loading="lazy"
  src="..."
  alt="..."
  width="250"
  height="350"
/>

This cocktail of strategies made a noticeable difference in how quickly the page loads. On the image-heavy video category pages alone, it helped us reduce the image download size and the number of image requests by almost 80% on the first load! Even though the page grows in size as the user scrolls, that weight is only added as images enter the browser viewport.

In Progress: Implementing srcset

My team is incredibly happy with how much performance savings we’ve made so far. But there’s no need to stop there! Every millisecond counts when it comes to page load, and we are still planning additional work to optimize images even further.

The task we’re currently planning will implement the srcset attribute on images. This is not a “new” technique by any means, but it is certainly a component of modern performance practices. It’s also a key component in responsive design, as it instructs browsers to use certain versions of an image at different viewport widths.

We’ve held off on this work only because, for us, the other strategies represented the lowest-hanging fruit with the most impact. Looking at an image element that uses srcset in the HTML shows it’s not the easiest thing to read. Using it requires a certain level of art direction because the dimensions of an image at one screen size may be completely different than those at another screen size. In other words, there are additional considerations that come with this strategy.

Here’s how we’re planning to approach it. We want to avoid loading high-resolution images on small screens like phones and tablets. With the srcset attribute, we can specify separate image sources depending on the device’s screen width. With the sizes attribute, we can instruct the browser which image to load depending on the media query.

In the end, our image markup should look something like this:

<img
  width="1280"
  height="720"
  srcset="
    https://cdn.sanity.io/.../image.jpg?w=568&...   568w,
    https://cdn.sanity.io/.../image.jpg?w=768&...   768w,
    https://cdn.sanity.io/.../image.jpg?w=1280&... 1280w
  "
  sizes="(min-width: 1024px) 1280px, 100vw"
  src="https://cdn.sanity.io/.../image.jpg?w=1280&..."
/>

In this example, we specify a set of three images:

  1. Small: 568px,
  2. Medium: 768px,
  3. Large: 1280px.

Inside the sizes attribute, we’re telling the browser to use the largest version of the image when the screen is at least 1024px wide. Otherwise, it selects an appropriate image out of the three available versions based on the full viewport width (100vw), and it does so without downloading the other versions. Serving the right image files to the right devices ought to enhance our performance a bit more than it already is.
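Since the three candidate URLs differ only in their w parameter, the srcset value can be generated instead of hand-written. Here’s a minimal sketch, assuming a hypothetical helper of our own and the same q and auto parameters used earlier:

```javascript
// Hypothetical helper: builds a srcset string for a Sanity image
// from a list of target widths, mirroring the markup above.
function buildSrcset(baseUrl, widths) {
  return widths
    .map((w) => `${baseUrl}?w=${w}&q=70&auto=format ${w}w`)
    .join(', ');
}
```

Generating the string also keeps the width descriptors (568w, 768w, 1280w) from drifting out of sync with the query variables.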

Improving CMS Performance With TanStack Query

The majority of content on WaterBear comes from Sanity, the CMS behind the web app. This includes video categories, video archives, video pages, the partners’ page, and campaign landing pages, among others. Users will constantly navigate between these pages, frequently returning to the same category or landing page.

This provided my team with an opportunity to introduce query caching and avoid repeating the same request to the CMS and, as a result, optimize our page performance even more. We used TanStack Query (formerly known as react-query) for both fetching data and query caching.

const { isLoading, error, data } = useQuery( /* Options */ )

TanStack Query caches each request according to the query key we assign to it. The query key in TanStack Query is an array, where the first element is a query name and the second element is an object containing all values the query depends on, e.g., pagination, filters, query variables, and so on.

Let’s say we are fetching a list of videos depending on the video category page URL slug. We can filter those results by video duration. The query key might look something like this basic example:

const { isLoading, error, data } = useQuery(
  {
    queryKey: [
      'video-category-list',
      { slug: categorySlug, filterBy: activeFilter }
    ],
    queryFn: () => /* ... */
  }
)

These query keys might look confusing at first, but they’re similar to the dependency arrays for React’s useEffect hook. Instead of running a function when something in the dependency array changes, TanStack Query runs a query with the new parameters and returns a new state. TanStack Query also comes with a dedicated DevTools package that displays all sorts of useful information about queries, helping debug and optimize them without hassle.
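To make the caching behavior concrete, here is a toy model of the idea (an illustration, not TanStack Query’s actual implementation): results are stored under the serialized query key, so repeating a query with the same key returns the cached result instead of invoking the query function again.

```javascript
// Toy model of key-based query caching; ignores staleness, invalidation,
// and concurrent requests, which TanStack Query handles for real.
function createQueryCache() {
  const cache = new Map();
  return async function fetchWithCache(queryKey, queryFn) {
    const hash = JSON.stringify(queryKey); // stable enough for simple keys
    if (!cache.has(hash)) {
      cache.set(hash, await queryFn()); // first request does the real work
    }
    return cache.get(hash); // repeats are served from the cache
  };
}
```

In the real library, this is why returning to a category page with the same slug and filter renders instantly.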

Let’s see the query caching in action. In the following video, notice how data loads instantly on repeated page views and repeated filter changes. Compare that to the first load, where there is a slight delay and a loading state before data is shown.

Improving Basic Accessibility

We’re probably not even covering all of our bases! It’s tough to tell without ample user testing. It’s a conflicting situation: you want to do everything you can while realistically completing the project with the resources you have and proceeding with intention.

We made sure to include a label on interactive elements like buttons, especially ones where an icon is the only content. In those cases, we added visually hidden text that can still be read by assistive technology. We also hid the SVG icon from assistive technology, as the icon adds no additional context for those users.

<!-- Icon button markup with descriptive text for assistive devices -->
<button type="button" class="...">
  <svg aria-hidden="true" xmlns="..." width="22" height="22" fill="none">...</svg
  ><span class="visually-hidden">Open filters</span>
</button>

/* Hide content visually while keeping it readable by assistive technology */
.visually-hidden {
  position: absolute;
  width: 1px;
  height: 1px;
  overflow: hidden;
  white-space: nowrap;
  clip: rect(0 0 0 0);
  -webkit-clip-path: inset(50%);
  clip-path: inset(50%);
}

Supporting keyboard navigation was one of our accessibility priorities, and we had no trouble with it. We made sure to use proper HTML markup and avoid potential pitfalls like adding a click event to meaningless div elements, which is unfortunately so easy to do in React.

We did, however, hit an obstacle with modals: users were able to move focus outside the modal component and continue interacting with the main page while the modal was open, something that isn’t possible with default pointer and touch interaction. To fix it, we implemented focus traps using the focus-trap-react library to keep focus inside the modal while it’s open, then restore focus to the previously active element once the modal is closed.
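focus-trap-react does this work for us, but the core of a focus trap is straightforward: gather the focusable elements inside the modal and wrap Tab and Shift + Tab around the ends of that list. The wrap-around arithmetic can be sketched like this (a simplified illustration, not the library’s code):

```javascript
// Simplified illustration of a focus trap's wrap-around logic.
// `current` is the index of the currently focused element inside the modal,
// `count` is the number of focusable elements the modal contains.
function nextFocusIndex(current, count, shiftKey) {
  if (shiftKey) {
    // Shift + Tab from the first element wraps around to the last
    return current === 0 ? count - 1 : current - 1;
  }
  // Tab from the last element wraps around to the first
  return current === count - 1 ? 0 : current + 1;
}
```

A real trap listens for keydown events, calls preventDefault(), and moves focus to the element at the returned index; the library also restores focus when the modal closes.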

Dynamic Sitemaps

Sitemaps tell search engines which pages to crawl, which is faster than leaving the crawler to discover internal links on its own.

The importance of sitemaps in the case of WaterBear is that the team regularly publishes new content — content we want to be indexed for crawlers as soon as possible by adding those new links to the top of the sitemap. We don’t want to rebuild and redeploy the project every time new content has been added to Sanity, so dynamic server-side sitemaps were our logical choice.

We used the next-sitemap plugin for Next.js, which has allowed us to easily configure the sitemap generation process for both static and dynamic pages. We used the plugin alongside custom Sanity queries that fetch the latest content from the CMS and quickly generate a fresh sitemap for each request. That way, we made sure that the latest videos get indexed as soon as possible.
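For reference, the build-time half of that setup lives in a next-sitemap.config.js file at the project root. The sketch below shows its general shape; the domain is a placeholder assumption, and the exact options depend on the project:

```javascript
// next-sitemap.config.js (a sketch; the domain below is a placeholder)
module.exports = {
  siteUrl: 'https://www.example.com',
  generateRobotsTxt: true,
  // The dynamic, server-side sitemap is generated per request,
  // so we exclude it from the static build and reference it instead.
  exclude: ['/server-sitemap.xml'],
  robotsTxtOptions: {
    additionalSitemaps: ['https://www.example.com/server-sitemap.xml'],
  },
};
```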

Let’s say the WaterBear team publishes a page for a video named My Name is Salt. That gets added to a freshly generated XML sitemap:

Now, it’s indexed for search engines to scoop up and use in search results:

Until Next Time…

In this article, I shared some insights about WaterBear’s tech stack and some performance optimization techniques we applied while building it.

Images are used prominently on many of WaterBear’s page types, so we used a CDN with caching, loading strategies, preloading, and the WebP format to optimize image loading performance. We relied on Sanity for the majority of content management and expected repeated page views and queries within a single session, prompting us to implement query caching with TanStack Query.

We made sure to improve basic accessibility on the fly by styling focus states, enabling full keyboard navigation, assigning labels to icon buttons, providing alt text for images, and using focus traps on modal elements.

Finally, we covered how my team handled dynamic server-side rendered sitemaps using the next-sitemap plugin for Next.js.

Again, this was my first big project as lead developer of a team. There’s so much that comes with the territory. Not only are there internal processes and communication hurdles to establish a collaborative team environment, but there’s the technical side of things, too, that requires balancing priorities and making tough decisions. I hope my learning journey gives you something valuable to consider in your own work. I know that my team isn’t the only one with these sorts of challenges, and sharing the lessons I learned from this particular experience probably resonates with some of you reading this.

Please be sure to check out the full work we did on WaterBear. It’s available on the web, Android, and iOS. And, if you end up watching a documentary while you’re at it, let me know if it inspired you to take action on a cause!

References

Many thanks to WaterBear and Q Agency for helping out with this two-part article series and making it possible. I really would not have done this without their support. I would also like to commend everyone who worked on the project for their outstanding work! You have taught me so much so far, and I am grateful for it.

Facilitating Inclusive Online Workshops (Part 2)

Earlier in the first part of the series, we defined inclusivity and how it contributes to enriching the workshop experience. We established that inclusivity is about ensuring everyone has an equal opportunity to participate and contribute, regardless of their background or identity. It goes beyond merely having diversity in attendance. It’s about creating an environment where different perspectives are valued and used to drive innovative outcomes.

In the second part, I will introduce you to the principle of the inclusive workshop through the acronym P.A.R.T.S. (which stands for Promote, Acknowledge, Respect, Transparency, and Share). After the principle is explained, we will dive into what you can do during and after the workshop to implement this principle.

The P.A.R.T.S. Principle

Often, we fall into the trap of thinking, “I’ve got a mixed group of folks here. My inclusivity job is done!”

Yes, having a diverse set of individuals is often an essential first step. But it’s just that — a first step. It’s like opening the door and inviting people in. However, the real task begins after the guests have arrived. That’s when you need to ensure they feel welcome, heard, and valued.

As a facilitator, how can you make sure that people feel safe to express their ideas and participate actively during the workshop? Here’s where the P.A.R.T.S. principle comes in.

The P.A.R.T.S. principle is an acronym that encapsulates five key principles that can form the foundation of any inclusive workshop: Promote, Acknowledge, Respect, Transparency, and Share.

P — Promote

Promote active participation from all attendees.

This begins with creating an environment where participants feel at ease sharing their ideas, opinions, and experiences. As a facilitator, your role is to set this tone from the beginning. One practical way to promote participation is by establishing ground rules that encourage everyone to contribute. Another is to use facilitation techniques that draw out quieter participants, such as a quiet brainstorming session, where participants spend time on their own developing their ideas, or a round-robin format, where everyone gets a turn to speak.

A — Acknowledge

Acknowledging participants’ contributions validates their input and makes them feel heard and valued.

This can be as simple as saying, “Thank you for sharing,” or “That’s an interesting perspective.” It’s also about demonstrating that you’ve understood their input by summarizing or paraphrasing what they’ve said. By doing this, you not only confirm their feelings of being heard but also model listening behavior for other participants.

R — Respect

Respect for all ideas, experiences, and perspectives is fundamental to an inclusive workshop.

This starts with setting expectations that all ideas are welcome, no matter how outside-the-box they may seem. It also means respecting the varied communication styles, personalities, and cultural backgrounds of the participants. As a facilitator, you should encourage respect by addressing any inappropriate comments or behaviors immediately and decisively.

T — Transparency

Transparency involves clear and open communication.

As a facilitator, it’s essential to articulate the workshop’s goals and processes clearly, address questions and concerns promptly, and keep channels for feedback open and responsive. This can be done by stating the agenda upfront, explaining the purpose of each activity, and regularly checking in with participants to ensure they’re following along.

S — Share

Share the workshop’s objectives, expectations, and agenda with all participants.

This shared understanding guides the workshop process and provides a sense of direction. It also empowers participants to take ownership of their contributions and the workshop outcomes.

The P.A.R.T.S. principle is a high-level guide for making sure that all voices are heard in your workshop. To show how it can be put into practice, here are some practical steps you can follow before, during, and after the workshop.

Applying The P.A.R.T.S. Principle: Before And During The Workshop

Step 1. Set The Stage

Setting the stage for your workshop goes beyond just a simple introduction. This is the point at which you establish the environment and set the tone for the entire event. For example, you can set rules like: “One person speaks at a time,” “Respect all ideas,” “Challenge the idea, not the person,” and so on. Clearly stating these rules before you start will help create an environment conducive to open and productive discussions.

It’s important to let participants know that every workshop has its “highs” and “lows.” Make it clear at the outset that these fluctuations in pace and energy are normal and are part of the process. Encourage participants to be patient and stay engaged through the lows, as these can often lead to breakthroughs and moments of high productivity later, during the highs.

Step 2. Observe The Participants

As a facilitator, it’s essential for you to observe and understand the dynamics of the group to ensure everyone is engaged and participating effectively. Below, I’ve outlined a simple approach to participant observation that involves looking for non-verbal cues, tracking participation levels, and paying attention to reactions to the content.

Here are a few things you should be paying attention to:

  • Non-verbal cues
    Non-verbal cues can be quite telling and often communicate more than words. Pay attention to participants’ body language as captured by their cameras, such as their posture, facial expressions, and eye contact. This also applies to in-person workshops where it is, in fact, much easier to keep track of the body language of participants. For instance, leaning back or crossing arms might suggest disengagement, while constant eye contact and active note-taking might indicate interest and engagement. When you’re facilitating a remote workshop (and there is no video connection, so you won’t have access to the usual body language indicators), pay attention to the use of emojis, reactions, and the frequency of chat activity. Also, look for signals that people want to speak; they might be unmuting themselves, using the “raise hand” button, or physically raising their hands.
  • Participation levels
    Keep track of who is contributing to the discussion and how often. If you notice a participant hasn’t contributed in a while, you might want to encourage them to share their thoughts. You could ask, “We haven’t heard from you yet. Would you like to add something to the discussion?” Conversely, if someone seems to be dominating the conversation, you could say, “Let’s hear from someone who hasn’t had a chance to speak yet.” It’s all about ensuring balanced participation where every voice is heard.
  • Reactions to content
    Observe participants’ reactions to the topics which are being discussed. Nods of agreement, looks of surprise, or expressions of confusion can all be very revealing. If you notice a reaction that suggests confusion or disagreement, don’t hesitate to pause and address it. You could ask the participant to share their thoughts or provide further explanations to clarify any possible misunderstandings.
  • Managing conflict
    At times, disagreements or conflicts may arise during the workshop. As a facilitator, it’s your role to manage these situations and ensure a safe and respectful environment. If a conflict arises, acknowledge it openly and encourage constructive dialogue. Remind participants of the ground rules, focusing on the importance of respecting each other’s opinions and perspectives. If necessary, you could use conflict resolution techniques, such as active listening and mediation, or even take a short break to cool down the tension.

Another helpful tip is to have a space for extra ideas. This could be a whiteboard in a physical setting or a shared digital document in a virtual one. Encourage participants to write down any thoughts or ideas that come up, even if they are not immediately relevant to the current discussion. These can be revisited later and may spur new insights or discussions.

Another tip is to use workshop-specific tools such as Butter, where participants can express their emotions through the emoji reaction features and be queued to ask their questions without interrupting the speakers. Lastly, if you have a group larger than 5-6 people, consider dividing them into sub-groups and using co-facilitators to assist in managing these sub-groups. This will make the workshop experience much better for individual participants.

Observing others through laptop cameras can be difficult when there are more than 5-6 people in the virtual room. That’s a big reason why you’ll need to set the stage and establish a few ground rules at the beginning. Rules such as “Speak one person at a time,” “Use the ‘Raise Hand’ button to speak,” and “Leave questions in the chat space” can really improve the experience.

Remote workshops might not replace the full experience of in-person workshops, where we can clearly see people’s body language and interact with each other more easily. However, with the right combination of tools and facilitation techniques, remote workshops can come very close to matching the in-person experience and keep participants happy.

Step 3. Respect Your Schedule

As you go about your workshop, respecting your agenda is essential. This is all about sticking to your plan, staying on track, and communicating clearly with the participants about what stage you’re at and what’s coming next.

Scheduled breaks are equally important. If you’ve planned a 10-minute break every 45 minutes, stick to that plan. It gives participants time to rest, grab a quick snack (or coffee or tea), refresh their minds, and prepare for the next part. This is particularly significant during online workshops, where screen fatigue is a common problem.

We know workshops don’t always go as planned — disruptions are often part of the package. These could range from a technical glitch during a virtual workshop, a sudden question sparking a lengthy discussion, or just starting a bit late due to late arrivals. This is where your “buffer time” will come in handy!

Respecting the buffer time allows you to handle any disruption that may come up without compromising on the workshop content or rushing through sections to recover the lost time. If there are no disruptions, this time can be used for additional discussions or exercises or even finishing the workshop earlier — something that participants usually appreciate.

Remember to stay focused. As the facilitator, you should keep discussions on track and aligned with the workshop’s goals. If the conversation veers off-topic, gently guide it back to the main point.

Applying The P.A.R.T.S. Principle: After The Workshop

Step 1. Follow Up

A critical part of concluding your workshop is following up with participants. This not only helps solidify the decisions and actions that were agreed upon but also maintains the collaborative momentum even after the workshop ends.

  • Meeting Minutes
    Send out a concise summary of the workshop, including the key points of discussion, decisions made, and next steps. This serves as a reference document for participants and ensures everyone is on the same page.
  • Action Plan
    Detail the agreed-upon action items, the person responsible for each, and the deadlines. This provides clarity on the tasks to be accomplished post-workshop.
  • Next Steps
    Clearly communicate the next steps, whether that’s a follow-up meeting, a deadline for tasks, or further resources to explore. This ensures that the momentum from the workshop continues.

Step 2. Celebrate

Completing a workshop is no small feat. It takes dedication, focus, and collaborative effort from all participants. So, don’t let this moment pass uncelebrated. Recognizing everyone’s contributions and celebrating the completion of the workshop is an essential concluding step.

This not only serves as a token of gratitude for the participants’ time and effort but also reinforces the sense of achievement, promoting a positive and inclusive culture. Reflect on the journey you all undertook together, emphasizing the progress made, the skills developed, and the insights gained.

In your closing remarks or a follow-up communication, highlight specific achievements or breakthrough moments from the workshop. You might also share key takeaways or outcomes that align with the workshop’s objectives. This helps to not only recap the learning but also underscore the value each participant brought to the workshop.

Consider personalized gestures to commemorate the workshop — certificates of completion, digital badges, or even just a special mention can make participants feel recognized and appreciated. Celebrations, no matter how small, can build camaraderie, boost morale, and leave everyone looking forward to the next workshop.

Conclusion

Let me conclude Part 2 by quoting Simon Raybould, who wonderfully encapsulates the art of facilitation:

“The secret of facilitating is to make it easy for people to learn. If you’re not making it easy, you’re not doing it right.”
— Simon Raybould

I couldn’t agree more. The inclusive workshop is not just about getting things done; it represents the symphony of diverse voices coming together, the exploration of ideas, and the collective journey toward shared objectives. Embracing this essence of inclusivity and embedding it into your workshop design and delivery makes for an environment where everyone feels respected, collaboration is enhanced, and innovative thinking flourishes.

As a facilitator, you have the power to make the workshop experience memorable and inspiring. The influence of your efforts can extend beyond the workshop, cultivating an atmosphere of respect, diversity, and inclusivity that spills over into all collaborative activities. This is the true impact and potential of well-executed, inclusive workshops.

Further Reading & References

Here are a few additional resources on the topic of workshops. I hope you will find something useful there, too.

  • Gamestorming: A Playbook for Innovators, Rulebreakers, and Changemakers, by Dave Gray, Sunni Brown, and James Macanufo
    This well-known playbook provides a wide range of strategies and activities for designing workshops that encourage a creative, productive thinking environment. If you’re leading workshops and wish to encourage more out-of-the-box thinking, this book is a perfect source of inspiration.
  • Sprint, by Jake Knapp, John Zeratsky, and Braden Kowitz
    This is another well-known book in the workshop space. The book focuses on mastering the facilitation of Design Sprint, a workshop method by Google aimed at solving business problems and fostering collaboration. If you’re keen on leading tech teams or startups, this book is a great pick.
  • The Workshop Survival Guide, by Devin Hunt and Rob Fitzpatrick
    This guide navigates you through the end-to-end process of designing and conducting successful workshops. Whether you’re a newbie or an experienced facilitator, this resource gives comprehensive support to facilitate workshops confidently.
  • Invent To Learn: Making, Tinkering, and Engineering in the Classroom, by Sylvia Libow Martinez and Gary S. Stager
    Even though it is primarily for school educators, the book shares a wide range of methods and techniques that you can adapt to any workshop setting to create inclusive, creative, and hands-on learning environments. Highly recommended for those interested in creating an inclusive environment in any setting.
  • No Hard Feelings: The Secret Power of Embracing Emotions at Work, by Liz Fosslien and Mollie West Duffy
    Although it doesn’t focus on workshops specifically, the book gives useful insights on managing emotions at work from both participant and facilitator perspectives. It offers a broad overview of different personalities at work and how to foster emotional inclusivity, which can be valuable when facilitating workshops.
  • “A Comprehensive Checklist For Running Design Workshops,” by Slava Shestopalov
    Slava’s article is a thorough guide to designing and conducting a successful workshop. This is a highly recommended read for designers, product managers, or even engineers looking to understand the nuances of running a design-centric workshop.
  • “The Workshopper Playbook — A Summary” (AJ&Smart)
    The summary of “The Workshop Playbook” discusses the 4C technique that AJ&Smart developed for constructing any workshop. The 4C’s — Collect, Choose, Create, and Commit — form an exceptional workshop framework that adheres to the double-diamond method of workshop creation. If you’re interested in gaining a more profound understanding of the 4C framework, consider reading the full book by AJ&Smart.
  • “The Secret To Healthy Remote Work: Fewer Meetings, More Workshops,” by Mehdi En-Naizi
    The article promotes the shift from traditional meetings to workshops in remote work settings to boost productivity and decrease stress. It highlights the workshops’ effectiveness, enhanced focus, and their role in promoting team unity and social interactions.
  • “10 Tips On Running An Online Meeting Your Team Won’t Hate (And Free Templates To Try!),” by Anamaria Dorgo and Cheska Teresa
    This guide provides a detailed approach to overcoming the fatigue and frustration often associated with online meetings. The tips include clearly defining the meeting’s purpose, sticking to an agenda, creating an inclusive space for active participation, scheduling regular breaks, and using breakout rooms for more focused discussions.
  • “How Silent Brainstorming Easily Engages Introverts On The Project Team,” by Annie MacLeod (DPM)
    Try out this brainstorming technique next time you need to get the team’s input on a problem or solution or if you’re working on a team with a lot of introverts.
  • “Dot Voting: A Simple Decision-Making and Prioritizing Technique in UX,” by Sarah Gibbons (NN/g Nielsen Norman Group)
    A few UX workshop activities work well in any situation, and dot voting is one of them. Dot voting is a simple tool used to democratically prioritize items or make decisions in a group setting. It is an easy, straightforward way to narrow down alternatives and converge on a set of concepts or ideas.
  • “How Do You Encourage Introverts And Quiet Participants To Share Their Ideas In A Meeting?” (LinkedIn — Meeting Facilitation)
    Meetings are essential for collaboration, creativity, and innovation. But not everyone feels comfortable speaking up in a group setting. Some people may be introverted, shy, or simply prefer to listen and process information before sharing their thoughts. How do you encourage these quiet participants to contribute their valuable ideas in a meeting?
  • “Teacher Toolkit: Think-Pair-Share” — YouTube (Think-Pair-Share webpage)
    This versatile tool can be used in any classroom. The discussion technique gives students the opportunity to respond to questions in written form before engaging in meaningful conversation with other students. Asking students to write and discuss ideas with a partner before sharing with the larger group builds confidence, encourages greater participation, and results in more thoughtful discussions.
    (Editor’s Note: The Teacher Toolkit webpage is temporarily down. Until their server is restored, you can use a full webpage copy preserved by the WayBack Machine. — MB)
  • Fishbowl Conversation
    Fishbowl Conversation is great for keeping a focused conversation when you have a large group of people. At any time, only a few people have a conversation (the fish in the fishbowl). The remaining people are listeners (the ones watching the fishbowl). The caveat is that the listeners can join the discussion at any moment.
  • “Lightning Talks” (Design sprints by Google)
    Lightning Talks are a core Design Sprint method and a powerful opportunity to build ownership in the Design Sprint challenge. Plan and set up Lightning Talks before your Design Sprint begins. After all the Lightning Talks are finished, hold an HMW sharing session to capture and share all the opportunities your team has come up with.
  • AJ&Smart’s Remote Design Sprint
    The lightning demo activity from Design Sprint is a perfect example of the “Idea Gallery” type of activity. Participants work individually to create a visual or written representation of their ideas (like a poster), and then everyone walks around to view the “gallery” and people discuss the ideas.
  • “Poster Session” (Gamestorming)
    The goal of a poster session is to create a set of compelling images that summarize a challenge or topic for further discussion. Creating this set might be an “opening act,” which then sets the stage for choosing an idea to pursue, or it might be a way to get indexed on a large topic.
  • “Jigsaw Activities” (The Bell Foundation)
    Jigsaw activities are a specific type of information gap activity that works best when used with the whole class. The class is first divided into groups of four to six learners who are then given some information on a particular aspect of the topic, which they later become experts in.
  • Disney Brainstorming Method
    The Disney method was developed in 1994 by Robert Dilts based on Walt Disney’s creative approach. It’s a good mix of creativity and concreteness as it’s not only about generating ideas but also looking at them with a critical eye and, eventually, having a few of them ready to be further explored and implemented.
  • Support Extroverted Students in Remote Environment — Group Discussions
    Several video platforms have options for small group discussions. If you’re using one of these, breaking into small groups can be a great opportunity to help your extroverted students feel fulfilled (and for your more introverted students to “warm up” for group discussion).
  • “37 brainstorming techniques to unlock team creativity,” by James Smart (SessionLab)
    It’s important to find a framework and idea-generation process that empowers your group to generate meaningful results, as finding new and innovative ideas is a vital part of the growth and success of any team or organization. In this article, several effective brainstorming techniques are explored in detail in categories such as creative exercises and visual idea-generation games.
  • “Round-Robin Brainstorming” (MindTools blog)
    It’s all too easy to start a brainstorming session with good intentions but then overlook or miss potentially great ideas simply because one assertive person sets the tone for the entire meeting. This is why a tool like Round-Robin Brainstorming is so valuable. This method allows team members to generate ideas without being influenced by any one person, and you can then take these ideas into the next stages of the problem-solving process.
  • “Eysenck’s Personality Theory” (TutorialsPoint)
    What is Eysenck’s Personality Theory? This theory has been influential in personality psychology and used to explain various phenomena, including individual differences in behavior and mental health.
  • Meeting Design: For Managers, Makers, and Everyone, a book by Kevin Hoffman
    Meetings don’t have to be painfully inefficient “snoozefests” — if you design them well. Meeting Design will teach you the design principles and innovative approaches you’ll need to transform meetings from boring to creative, from wasteful to productive.
  • State of Meetings Report 2021
    How did meetings actually change in 2020? What will the long-term impact of this change be? And could 2020 have changed the way we meet for good? These are questions that will be answered in this detailed report.
  • Social Identity Theory (Science Direct)
    Social identity theory defines a group as a collection of people who categorize themselves as belonging to the same social category and internalize the category’s social identity-defining attributes to define and evaluate themselves — attributes that capture and accentuate intragroup similarities and intergroup differences.
  • “Clarizen Survey Pins Falling Productivity Levels on Communication Overload” (Bloomberg)
    A new survey by Clarizen, the global leader in collaborative work management, finds that companies’ efforts to improve collaboration among employees by opening new lines of communication can have the opposite effect.
  • “Conflict Resolution Skills: What They Are and How to Use Them” (Coursera)
    Handling conflict in any context is never fun. Often, issues become more complicated than needed if the people involved need more conflict resolution and general communication skills. In this article, you’ll learn more about conflict resolution and, more specifically, how different conflict resolution skills may be useful in various situations.
  • “Meeting Parking Lot” (The Facilitator’s School)
    A free template for handling off-topic questions, topics, and discussions. Available in Miro Template and Mural Template format.
  • SmashingConf Online Workshops
    Finally, do meet the friendly Smashing Magazine front-end & UX workshops! These remote workshops aim to give the same experience and access to experts that you would have in an in-person workshop without needing to leave your desk or couch. You can follow along with practical examples and interactive exercises, ask questions during the Q&A sessions, and use workshop recordings and materials to study at your own pace, at your own time.

Recreating YouTube’s Ambient Mode Glow Effect

I noticed a charming effect on YouTube’s video player while using its dark theme some time ago. The background around the video would change as the video played, creating a lush glow around the video player, making an otherwise bland background a lot more interesting.

This effect is called Ambient Mode. The feature was released sometime in 2022, and YouTube describes it like this:

“Ambient mode uses a lighting effect to make watching videos in the Dark theme more immersive by casting gentle colors from the video into your screen’s background.”
— YouTube

It is an incredibly subtle effect, especially when the video’s colors are dark and have less contrast against the dark theme’s background.

Curiosity hit me, and I set out to replicate the effect on my own. After digging around YouTube’s convoluted DOM tree and source code in DevTools, I hit an obstacle: all the magic was hidden behind the HTML <canvas> element and bundles of mangled and minified JavaScript code.

Despite having very little to go on, I decided to reverse-engineer the code and share my process for creating an ambient glow around the videos. I prefer to keep things simple and accessible, so this article won’t involve complicated color sampling algorithms; instead, we’ll approximate their results with simpler methods.

Before we start writing code, I think it’s a good idea to revisit the HTML Canvas element and see why and how it is used for this little effect.

HTML Canvas

The HTML <canvas> element is a container element on which we can draw graphics with JavaScript using its own Canvas API and WebGL API. Out of the box, a <canvas> is empty — a blank canvas, if you will — and the aforementioned Canvas and WebGL APIs are used to fill the <canvas> with content.

HTML <canvas> is not limited to presentation; we can also make interactive graphics with them that respond to standard mouse and keyboard events.

But SVG can also do most of that stuff, right? That’s true, but <canvas> is more performant than SVG because it doesn’t require any additional DOM nodes for drawing paths and shapes the way SVG does. Also, <canvas> is easy to update, which makes it ideal for more complex and performance-heavy use cases, like YouTube’s Ambient Mode.

As you might expect with many HTML elements, <canvas> accepts attributes. For example, we can give our drawing space a width and height:

<canvas width="10" height="6" id="js-canvas"></canvas>

Notice that <canvas> is not a self-closing tag like <img>; similar to an <iframe>, it has an opening and a closing tag, and we can add content between them that is rendered only when the browser cannot render the canvas. This fallback content is also useful for making the element more accessible, which we’ll touch on later.

Returning to the width and height attributes, they define the <canvas>’s coordinate system. Interestingly, we can apply a responsive width using relative units in CSS, but the <canvas> still respects the set coordinate system. We are working with pixel graphics here, so stretching a smaller canvas in a wider container results in a blurry and pixelated image.
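To make the stretching concrete, here’s a small, hypothetical helper (not part of the Canvas API) that computes how many CSS pixels each canvas pixel covers once CSS scales the element beyond its coordinate system:

```javascript
// Hypothetical helper: how many CSS pixels one canvas pixel covers when
// CSS stretches a canvas beyond its width/height coordinate system.
function cssPixelsPerCanvasPixel(canvasWidth, displayedWidth) {
  return displayedWidth / canvasWidth;
}

// A 10-unit-wide canvas stretched to 800 CSS pixels: each canvas pixel
// becomes an 80×80 block, which is why the result looks pixelated.
console.log(cssPixelsPerCanvasPixel(10, 800)); // 80
```

For our purposes, that blockiness is a feature rather than a bug, as we’ll see shortly.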

The downside of <canvas> is its accessibility. All of the content updates happen in JavaScript in the background as the DOM is not updated, so we need to put effort into making it accessible ourselves. One approach (of many) is to create a Fallback DOM by placing standard HTML elements inside the <canvas>, then manually updating them to reflect the current content that is displayed on the canvas.

Numerous canvas frameworks — including ZIM, Konva, and Fabric, to name a few — are designed for complex use cases that can simplify the process with a plethora of abstractions and utilities. ZIM’s framework has accessibility features built into its interactive components, which makes developing accessible <canvas>-based experiences a bit easier.

For this example, we’ll use the Canvas API. We will also use the element for purely decorative purposes (i.e., it doesn’t introduce any new content), so instead of worrying about making it accessible, we can safely hide the <canvas> from assistive technologies.

That said, we will still need to disable — or minimize — the effect for those who have enabled reduced motion settings at the system or browser level.

requestAnimationFrame

The <canvas> element can handle the rendering part of the problem, but we need to somehow keep the <canvas> in sync with the playing <video> and make sure that the <canvas> updates with each video frame. We’ll also need to stop the sync if the video is paused or has ended.

We could use setInterval in JavaScript and rig it to run at 60fps to match the video’s playback rate, but that approach comes with some problems and caveats. Luckily, there is a better way of handling a function that must be called that often.

That is where the requestAnimationFrame method comes in. It instructs the browser to run a function before the next repaint. That function runs asynchronously and returns a number that represents the request ID. We can then use the ID with the cancelAnimationFrame function to instruct the browser to stop running the previously scheduled function.

let requestId;

const loopStart = () => {
  /* ... */

  /* Initialize the infinite loop and keep track of the requestId */
  requestId = window.requestAnimationFrame(loopStart);
};

const loopCancel = () => {
  window.cancelAnimationFrame(requestId);
  requestId = undefined;
};

Now that we have all our bases covered by learning how to keep our update loop and rendering performant, we can start working on the Ambient Mode effect!

The Approach

Let’s briefly outline the steps we’ll take to create this effect.

First, we must render the displayed video frame on a canvas and keep everything in sync. We’ll render the frame onto a smaller canvas (resulting in a pixelated image). When an image is downscaled, the important and most-dominant parts of an image are preserved at the cost of losing small details. By reducing the image to a low resolution, we’re reducing it to the most dominant colors and details, effectively doing something similar to color sampling, albeit not as accurately.
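To see why downscaling acts like rough color sampling, consider how one output pixel is produced: the colors of all the source pixels it covers are blended into one. The `averageColor` helper below is purely illustrative (it isn’t something the Canvas API exposes), but it sketches the underlying math:

```javascript
// Illustrative sketch: averaging a block of [r, g, b] pixels into a single
// "dominant" color, roughly what downscaling does for each output pixel.
function averageColor(pixels) {
  const sum = pixels.reduce(
    (acc, [r, g, b]) => [acc[0] + r, acc[1] + g, acc[2] + b],
    [0, 0, 0]
  );
  return sum.map((channel) => Math.round(channel / pixels.length));
}

// Four red-ish source pixels collapse into one averaged red.
const block = [
  [255, 0, 0],
  [250, 10, 0],
  [245, 0, 10],
  [250, 6, 2],
];
console.log(averageColor(block)); // [250, 4, 3]
```

The browser’s image scaler does this far more efficiently for us, which is exactly why drawing onto a tiny canvas gets us dominant colors for free.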

Next, we’ll blur the canvas, which blends the pixelated colors. We will place the canvas behind the video using CSS absolute positioning.

And finally, we’ll apply additional CSS to make the glow effect a bit more subtle and as close to YouTube’s effect as possible.

HTML Markup

First, let’s start by setting up the markup. We’ll need to wrap the <video> and <canvas> elements in a parent container because that allows us to contain the absolute positioning we will be using to position the <canvas> behind the <video>. But more on that in a moment.

Next, we will set a fixed width and height on the <canvas>, although the element will remain responsive. By setting the width and height attributes, we define the coordinate space in CSS pixels. The video’s frame is 1920×720, so we will draw a 10×6 pixel image on the canvas. As we’ve seen in the previous examples, we’ll get a pixelated image with dominant colors somewhat preserved.

<section class="wrapper">
  <video controls muted class="video" id="js-video" src="video.mp4"></video>
  <canvas width="10" height="6" aria-hidden="true" class="canvas" id="js-canvas"></canvas>
</section>

Syncing <canvas> And <video>

First, let’s start by setting up our variables. We need the <canvas>’s rendering context to draw on it, so it’s useful to save it in a variable, which we can do by calling the canvas’s getContext method. We’ll also use a variable called step to keep track of the request ID returned by the requestAnimationFrame method.

const video = document.getElementById("js-video");
const canvas = document.getElementById("js-canvas");
const ctx = canvas.getContext("2d");

let step; // Keep track of requestAnimationFrame id

Next, we’ll create the drawing and update loop functions. We can draw the current video frame on the <canvas> by passing the <video> element to the context’s drawImage method, along with four values: the destination x and y coordinates and the width and height in the <canvas> coordinate system, which, if you remember, is defined by the width and height attributes in the markup. It’s that simple!

const draw = () => {
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
};

Now, all we need to do is create the loop that calls the drawImage function while the video is playing, as well as a function that cancels the loop.

const drawLoop = () => {
  draw();
  step = window.requestAnimationFrame(drawLoop);
};

const drawPause = () => {
  window.cancelAnimationFrame(step);
  step = undefined;
};

And finally, we need to create two main functions that set up and clear event listeners on page load and unload, respectively. These are all of the video events we need to cover:

  • loadeddata: This fires when the first frame of the video loads. In this case, we only need to draw the current frame onto the canvas.
  • seeked: This fires when the video finishes seeking and is ready to play (i.e., the frame has been updated). In this case, we only need to draw the current frame onto the canvas.
  • play: This fires when the video starts playing. We need to start the loop for this event.
  • pause: This fires when the video is paused. We need to stop the loop for this event.
  • ended: This fires when the video stops playing when it reaches its end. We need to stop the loop for this event.

const init = () => {
  video.addEventListener("loadeddata", draw, false);
  video.addEventListener("seeked", draw, false);
  video.addEventListener("play", drawLoop, false);
  video.addEventListener("pause", drawPause, false);
  video.addEventListener("ended", drawPause, false);
};

const cleanup = () => {
  video.removeEventListener("loadeddata", draw);
  video.removeEventListener("seeked", draw);
  video.removeEventListener("play", drawLoop);
  video.removeEventListener("pause", drawPause);
  video.removeEventListener("ended", drawPause);
};

window.addEventListener("load", init);
window.addEventListener("unload", cleanup);

Let’s check out what we’ve achieved so far with the variables, functions, and event listeners we have configured.

Creating A Reusable Class

Let’s make this code reusable by converting it to an ES6 class so that we can create a new instance for any <video> and <canvas> pairing.

class VideoWithBackground {
  video;
  canvas;
  step;
  ctx;

  constructor(videoId, canvasId) {
    this.video = document.getElementById(videoId);
    this.canvas = document.getElementById(canvasId);

    window.addEventListener("load", this.init, false);
    window.addEventListener("unload", this.cleanup, false);
  }

  draw = () => {
    this.ctx.drawImage(this.video, 0, 0, this.canvas.width, this.canvas.height);
  };

  drawLoop = () => {
    this.draw();
    this.step = window.requestAnimationFrame(this.drawLoop);
  };

  drawPause = () => {
    window.cancelAnimationFrame(this.step);
    this.step = undefined;
  };

  init = () => {
    this.ctx = this.canvas.getContext("2d");
    this.ctx.filter = "blur(1px)";

    this.video.addEventListener("loadeddata", this.draw, false);
    this.video.addEventListener("seeked", this.draw, false);
    this.video.addEventListener("play", this.drawLoop, false);
    this.video.addEventListener("pause", this.drawPause, false);
    this.video.addEventListener("ended", this.drawPause, false);
  };

  cleanup = () => {
    this.video.removeEventListener("loadeddata", this.draw);
    this.video.removeEventListener("seeked", this.draw);
    this.video.removeEventListener("play", this.drawLoop);
    this.video.removeEventListener("pause", this.drawPause);
    this.video.removeEventListener("ended", this.drawPause);
  };
}

Now, we can create a new instance by passing the id values for the <video> and <canvas> elements into a VideoWithBackground() class:

const el = new VideoWithBackground("js-video", "js-canvas");

Respecting User Preferences

Earlier, we briefly discussed that we would need to disable or minimize the effect’s motion for users who prefer reduced motion. We have to consider that for decorative flourishes like this.

The easy way out? We can detect the user’s motion preferences with the prefers-reduced-motion media query and completely hide the decorative canvas if reduced motion is the preference.

@media (prefers-reduced-motion: reduce) {
  .canvas {
    display: none !important;
  }
}

Another way to respect reduced motion preferences is to use the window.matchMedia function in JavaScript to detect the user’s preference and prevent the event listeners from registering in the first place.

constructor(videoId, canvasId) {
  const mediaQuery = window.matchMedia("(prefers-reduced-motion: reduce)");

  if (!mediaQuery.matches) {
    this.video = document.getElementById(videoId);
    this.canvas = document.getElementById(canvasId);

    window.addEventListener("load", this.init, false);
    window.addEventListener("unload", this.cleanup, false);
  }
}

Final Demo

We’ve created a reusable ES6 class that we can use to create new instances. Feel free to check out and play around with the completed demo.

See the Pen Youtube video glow effect - dominant color [forked] by Adrian Bece.

Creating A React Component

Let’s migrate this code to the React library, as there are key differences in the implementation that are worth knowing if you plan on using this effect in a React project.

Creating A Custom Hook

Let’s start by creating a custom React hook. Instead of selecting DOM elements with the getElementById function, we can access them through refs created with the useRef hook, which we attach to the <canvas> and <video> elements.

We’ll also reach for the useEffect hook to initialize and clear the event listeners to ensure they only run once all of the necessary elements have mounted.

Our custom hook must return the ref values we need to attach to the <canvas> and <video> elements, respectively.

import { useRef, useEffect } from "react";

export const useVideoBackground = () => {
  const mediaQuery = window.matchMedia("(prefers-reduced-motion: reduce)");
  const canvasRef = useRef();
  const videoRef = useRef();

  const init = () => {
    const video = videoRef.current;
    const canvas = canvasRef.current;
    let step;

    if (mediaQuery.matches) {
      return;
    }

    const ctx = canvas.getContext("2d");

    ctx.filter = "blur(1px)";

    const draw = () => {
      ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    };

    const drawLoop = () => {
      draw();
      step = window.requestAnimationFrame(drawLoop);
    };

    const drawPause = () => {
      window.cancelAnimationFrame(step);
      step = undefined;
    };

    // Initialize
    video.addEventListener("loadeddata", draw, false);
    video.addEventListener("seeked", draw, false);
    video.addEventListener("play", drawLoop, false);
    video.addEventListener("pause", drawPause, false);
    video.addEventListener("ended", drawPause, false);

    // Run cleanup on unmount event
    return () => {
      video.removeEventListener("loadeddata", draw);
      video.removeEventListener("seeked", draw);
      video.removeEventListener("play", drawLoop);
      video.removeEventListener("pause", drawPause);
      video.removeEventListener("ended", drawPause);
    };
  };

  useEffect(init, []);

  return {
    canvasRef,
    videoRef,
  };
};

Defining The Component

We’ll use similar markup for the actual component, then call our custom hook and attach the ref values to their respective elements. We’ll make the component configurable so we can pass any <video> element attribute as a prop, like src, for example.

import React from "react";
import { useVideoBackground } from "../hooks/useVideoBackground";

import "./VideoWithBackground.css";

export const VideoWithBackground = (props) => {
  const { videoRef, canvasRef } = useVideoBackground();

  return (
    <section className="wrapper">
      <video ref={ videoRef } controls className="video" { ...props } />
      <canvas width="10" height="6" aria-hidden="true" className="canvas" ref={ canvasRef } />
    </section>
  );
};

All that’s left to do is to call the component and pass the video URL to it as a prop.

import { VideoWithBackground } from "../components/VideoWithBackground";

function App() {
  return (
    <VideoWithBackground src="http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/BigBuckBunny.mp4" />
  );
}

export default App;

Conclusion

We combined the HTML <canvas> element and the corresponding Canvas API with JavaScript’s requestAnimationFrame method to create the same charming — but performance-intensive — visual effect that powers YouTube’s Ambient Mode feature. We found a way to draw the current <video> frame on the <canvas>, keep the two elements in sync, and position them so that the blurred <canvas> sits properly behind the <video>.

We covered a few other considerations in the process. For example, we established the <canvas> as a decorative image that can be removed or hidden when a user’s system is set to a reduced motion preference. Further, we considered the maintainability of our work by establishing it as a reusable ES6 class that can be used to add more instances on a page. Lastly, we converted the effect into a component that can be used in a React project.

Feel free to play around with the finished demo. I encourage you to continue building on top of it and share your results with me in the comments, or, similarly, you can reach out to me on Twitter. I’d love to hear your thoughts and see what you can make out of it!


Useful DevTools Tips and Tricks

When it comes to browser DevTools, we all have our own preferences and personal workflows, and we pride ourselves in knowing that “one little trick” that makes our debugging lives easier.

But also — and I know this from having worked on DevTools at Mozilla and Microsoft for the past ten years — most people tend to use the same three or four DevTools features, leaving the rest unused. This is unfortunate as there are dozens of panels and hundreds of features available in DevTools across all browsers, and even the less popular ones can be quite useful when you need them.

As it turns out, I’ve maintained the DevTools Tips website for the past two years now. More and more tips get added over time, and traffic keeps growing. I recently started tracking the most popular tips that people are accessing on the site, and I thought it would be interesting to share some of this data with you!

So, here are the top 15 most popular DevTools tips from the website.

If there are other tips that you love and that make you more productive, consider sharing them with our community in the comments section!

Let’s count down, starting with…

15: Zoom DevTools

If you’re like me, you may find the text and buttons in DevTools too small to use comfortably. I know I’m not alone here, judging by the number of people who ask our team how to make them bigger!

Well, it turns out you can actually zoom into the DevTools UI.

DevTools’ user interface is built with HTML, CSS, and JavaScript, which means that it’s rendered as web content by the browser. And just like any other web content in browsers, it can be zoomed in or out by using the Ctrl+ and Ctrl- keyboard shortcuts (or Cmd+ and Cmd- on macOS).

So, if you find the text in DevTools too small to read, click anywhere in DevTools to make sure the focus is there, and then press Ctrl+ (or Cmd+ on macOS).

Chromium-based browsers such as Chrome, Edge, Brave, or Opera can also display the font used by an element that contains the text:

  • Select an element that only contains text children.
  • Open the Computed tab in the sidebar of the Elements tool.
  • Scroll down to the bottom of the tab.
  • The rendered fonts are displayed.

Note: To learn more, see “List the fonts used on a page or an element.”

12: Measure Arbitrary Distances On A Page

Sometimes it can be useful to quickly measure the size of an area on a webpage or the distance between two things. You can, of course, use DevTools to get the size of any given element. But sometimes, you need to measure an arbitrary distance that may not match any element on the page.

When this happens, one nice way is to use Firefox’s measurement tool:

  1. If you haven’t done so already, enable the tool. This only needs to be done once: Open DevTools, go into the Settings panel by pressing F1 and, in the Available Toolbox Buttons, check the Measure a portion of the page option.
  2. Now, on any page, click the new Measure a portion of the page icon in the toolbar.
  3. Click and drag with the mouse to measure distances and areas.

Note: To learn more, see “Measure arbitrary distances in the page.”
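Under the hood, the numbers such a tool reports are simple geometry. A hypothetical `measure` helper (not a DevTools API) shows what an overlay like this computes from the two drag points:

```javascript
// Hypothetical sketch of what a measurement overlay reports for a drag
// from (x1, y1) to (x2, y2): the width, height, and diagonal in pixels.
function measure(x1, y1, x2, y2) {
  const width = Math.abs(x2 - x1);
  const height = Math.abs(y2 - y1);
  // Math.hypot computes the Euclidean distance: √(width² + height²).
  const diagonal = Math.hypot(width, height);
  return { width, height, diagonal };
}

console.log(measure(10, 20, 310, 420));
// { width: 300, height: 400, diagonal: 500 }
```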

11: Detect Unused Code

One way to make a webpage appear fast to your users is to make sure it only loads the JavaScript and CSS dependencies it truly needs.

This may seem obvious, but today’s complex web apps often load huge bundles of code, even when only a small portion is needed to render the first page.

In Chromium-based browsers, you can use the Coverage tool to identify which parts of your code are unused. Here is how:

  1. Open the Coverage tool. You can use the Command Menu as a shortcut: press Ctrl+Shift+P (or Cmd+Shift+P on macOS), type “coverage”, and then press Enter.
  2. Click Start instrumenting coverage and refresh the page.
  3. Wait for the page to reload and for the coverage report to appear.
  4. Click any of the reported files to open them in the Sources tool.

The file appears in the tool along with blue and red bars that indicate whether a line of code is used or unused, respectively.
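The percentages behind those bars boil down to a ratio of unused bytes to total bytes per file. The `unusedPercent` helper below is an illustration of that calculation, not the tool’s actual code:

```javascript
// Illustrative sketch of the math behind a coverage report entry:
// the share of a file's bytes that were never executed.
function unusedPercent(totalBytes, usedBytes) {
  if (totalBytes === 0) return 0;
  const unused = totalBytes - usedBytes;
  return Math.round((unused / totalBytes) * 100);
}

// A 200 KB bundle of which only 50 KB actually ran: 75% unused.
console.log(unusedPercent(200_000, 50_000)); // 75
```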

Note: To learn more, see “Detect unused CSS and JavaScript code.”

10: Change The Playback Rate Of A Video

Usually, when a video appears on a webpage, the video player that displays it also provides buttons to control its playback, including a way to speed it up or slow it down. But that’s not always the case.

In cases when the webpage makes it difficult or impossible to control a video, you can use DevTools to control it via JavaScript instead.

  1. Open DevTools.
  2. Select the <video> element in the Elements tool (called Inspector in Firefox).
  3. Open the Console tool.
  4. Type the following: $0.playbackRate = 2; and press Enter.

The $0 expression is a shortcut that refers to whatever element is currently selected in DevTools; in this case, it refers to the <video> HTML element.

By using the playbackRate property of the <video> element, you can speed up or slow down the video. Note that you could also use any of the other <video> element properties or methods, such as:

  • $0.pause() to pause the video;
  • $0.play() to resume playing the video;
  • $0.loop = true to repeat the video in a loop.
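If you use the playbackRate trick often, a small snippet pasted into the Console can cycle through speeds instead of typing a new value each time. The `RATES` list and `nextRate` helper below are illustrative, not part of any DevTools API:

```javascript
// Illustrative: cycle a video's speed through a fixed list of rates,
// the way a player's "speed" button steps 1x → 1.25x → 1.5x → 2x → 1x.
const RATES = [1, 1.25, 1.5, 2];

function nextRate(current) {
  const i = RATES.indexOf(current);
  // Unknown rates (indexOf returns -1) restart the cycle at RATES[0].
  return RATES[(i + 1) % RATES.length];
}

// In the Console you could then run: $0.playbackRate = nextRate($0.playbackRate);
console.log(nextRate(1.5)); // 2
console.log(nextRate(2)); // 1
```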

Note: To learn more, see “Speed up or slow down a video.”

9: Use DevTools In Another Language

If, like me, English isn’t your primary language, using DevTools in English might make things harder for you.

If that’s your case, know that you can actually use a translated version of DevTools that either matches your operating system, your browser, or a language of your choice.

The procedure differs per browser.

In Safari, both the browser and Web Inspector (which is what DevTools is called in Safari) inherit the language of the operating system. So if you want to use a different language for DevTools, you’ll need to set it globally by going into System Preferences → Language & Region → Apps.

In Firefox, DevTools always matches the language of the browser. So, if you want to use DevTools in, say, French, then download Firefox in French.

Finally, in Chrome or Edge, you can choose to either match the language of the browser or set a different language just for DevTools.

To make your choice:

  1. Open DevTools and press F1 to open the Settings.
  2. In the Language drop-down, choose either Browser UI language to match the browser language or choose another language from the list.

Note: To learn more, see “Use DevTools in another language.”

8: Disable Event Listeners

Event listeners can sometimes get in the way of debugging a webpage. If you’re investigating a particular issue, but every time you move your mouse or use the keyboard, unrelated event listeners are triggered, this could make it harder to focus on your task.

A simple way to disable an event listener is by selecting the element it applies to in the Elements tool (or Inspector in Firefox). Once you’ve found and selected the element, do either of the following:

  • In Firefox, click the event badge next to the element, and in the popup that appears, uncheck the listeners you want to disable.
  • In Chrome or Edge, click the Event Listeners tab in the sidebar panel, find the listener you want to remove, and click Remove.

Note: To learn more, see “Remove or disable event listeners.”

7: View Console Logs On Non-Safari Browsers On iOS

As you might know, Safari isn’t the only browser you can install and use on an iOS device. Firefox, Chrome, Edge, and others can also be used. Technically, they all run on the same underlying browser rendering engine, WebKit, so a website should more or less look the same in all of these browsers on iOS.

However, it’s possible to have bugs on other browsers that don’t replicate in Safari. This can be quite tricky to investigate. While it’s possible to debug Safari on an iOS device by attaching the device to a Mac with a USB cable, it’s impossible to debug non-Safari browsers.

Thankfully, there is a way to at least see your console logs in Chrome and Edge (and possibly other Chromium-based browsers) when using iOS:

  1. Open Chrome or Edge on your iOS device and go to the special about:inspect page.
  2. Click Start Logging.
  3. Keep this tab open and then open another one.
  4. In the new tab, go to the page you’re trying to debug.
  5. Return to the previous tab. Your console logs should now be displayed.

Note: To learn more, see “View console logs from non-Safari browsers on an iPhone.”

6: Copy Element Styles

Sometimes it’s useful to extract a single element from a webpage, maybe to test it in isolation. To do this, you’ll first need to extract the element’s HTML code via the Elements tool by right-clicking the element and choosing Copy → Copy outer HTML.

Extracting the element’s styles, however, is a bit more difficult as it involves going over all of the CSS rules that apply to the element.

Chrome, Edge, and other Chromium-based browsers make this step a lot faster:

  1. In the Elements tool, select the element you want to copy styles from.
  2. Right-click the selected element.
  3. Click Copy → Copy styles.
  4. Paste the result in your text editor.

You now have all the styles that apply to this element, including inherited styles and custom properties, in a single list.

Note: To learn more, see “Copy an element’s styles.”
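If you’re in Firefox or Safari, where the Copy styles command isn’t available, you can approximate it from the Console. Note this is a rough sketch, not an equivalent: it dumps every computed property (a much longer list than the authored rules the Chromium feature copies). In DevTools, $0 refers to the element currently selected in the Elements/Inspector tool, and copy() places a string on your clipboard:

```javascript
// Serialize all computed styles of an element into a CSS declaration list.
function serializeComputedStyles(computed) {
  let css = '';
  for (let i = 0; i < computed.length; i++) {
    const prop = computed[i]; // property name at this index
    css += `${prop}: ${computed.getPropertyValue(prop)};\n`;
  }
  return css;
}

// In the Console, with an element selected in the Elements/Inspector tool:
// copy(serializeComputedStyles(getComputedStyle($0)));
```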

5: Download All Images On The Page

This nice tip isn’t specific to any browser and can be run anywhere as long as you can execute JavaScript. If you want to download all of the images that are on a webpage, open the Console tool, paste the following code, and press Enter:

$$('img').forEach(async (img) => {
 try {
   const src = img.src;
   // Fetch the image as a blob.
   const fetchResponse = await fetch(src);
   const blob = await fetchResponse.blob();
   const mimeType = blob.type;
   // Figure out a name for it from the src and the mime-type.
   const start = src.lastIndexOf('/') + 1;
   const end = src.indexOf('.', start);
   let name = src.substring(start, end === -1 ? undefined : end);
   name = name.replace(/[^a-zA-Z0-9]+/g, '-');
   name += '.' + mimeType.substring(mimeType.lastIndexOf('/') + 1);
   // Download the blob using an <a> element.
   const a = document.createElement('a');
   a.setAttribute('href', URL.createObjectURL(blob));
   a.setAttribute('download', name);
   a.click();
 } catch (e) {}
});

Note that this might not always succeed: the CSP policies in place on the web page may cause some of the images to fail to download.

If you happen to use this technique often, you might want to turn this into a reusable snippet of code by pasting it into the Snippets panel, which can be found in the left sidebar of the Sources tool in Chromium-based browsers.

In Firefox, you can also press Ctrl+I on any webpage to open Page Info, then go to Media and select Save As to download all the images.

Note: To learn more, see “Download all images from the page.”

4: Visualize A Page In 3D

The HTML and CSS code we write to create webpages gets parsed, interpreted, and transformed by the browser, which turns it into various tree-like data structures like the DOM, compositing layers, or the stacking context tree.

While these data structures are mostly internal in-memory representations of a running webpage, it can sometimes be helpful to explore them and make sure things work as intended.

A three-dimensional representation of these structures can help see things in a way that other representations can’t. Plus, let’s admit it, it’s cool!

Edge is the only browser that provides a tool dedicated to visualizing webpages in 3D in a variety of ways.

  1. The easiest way to open it is by using the Command Menu. Press Ctrl+Shift+P (or Cmd+Shift+P on macOS), type “3D” and then press Enter.
  2. In the 3D View tool, choose between the three different modes: Z-Index, DOM, and Composited Layers.
  3. Use your mouse cursor to pan, rotate, or zoom the 3D scene.

The Z-Index mode can be helpful to know which elements are stacking contexts and which are positioned on the z-axis.

The DOM mode can be used to easily see how deep your DOM tree is or find elements that are outside of the viewport.

The Composited Layers mode shows all the different layers the browser rendering engine creates to paint the page as quickly as possible.

Note that Safari and Chrome also have a Layers tool that shows composited layers.

Note: To learn more, see “See the page in 3D.”
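As a refresher, here is a non-exhaustive sketch of common CSS triggers that create the structures these modes visualize:

```css
/* Common ways an element becomes a stacking context (shown in Z-Index mode) */
.positioned { position: relative; z-index: 1; } /* positioned + z-index other than auto */
.faded      { opacity: 0.9; }                   /* opacity below 1 */
.moved      { transform: translateZ(0); }       /* any transform other than none */

/* will-change also typically promotes the element to its own compositing
   layer (shown in Composited Layers mode) */
.promoted   { will-change: transform; }
```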

3: Disable Abusive Debugger Statements

Some websites aren’t very nice to us web developers. While they seem normal at first, as soon as you open DevTools, they immediately get stuck and pause at a JavaScript breakpoint, making it very hard to inspect the page!

These websites achieve this by adding a debugger statement in their code. This statement has no effect as long as DevTools is closed, but as soon as you open it, DevTools pauses the website’s main thread.

If you ever find yourself in this situation, here is a way to get around it:

  1. Open the Sources tool (called Debugger in Firefox).
  2. Find the line where the debugger statement is. That shouldn’t be hard since the debugger is currently paused there, so it should be visible right away.
  3. Right-click on the line number next to this line.
  4. In the context menu, choose Never pause here.
  5. Refresh the page.

Note: To learn more, see “Disable abusive debugger statements that prevent inspecting websites.”
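For context, here is a hypothetical sketch of the pattern such sites use: a debugger statement fired on a timer. With DevTools closed, the statement is a no-op; once DevTools opens, the main thread pauses every 100 milliseconds, which is exactly what Never pause here defeats.

```javascript
// The anti-inspection pattern (illustrative only; don't do this):
function antiInspect() {
  setInterval(() => {
    debugger; // DevTools pauses here on every tick
  }, 100);
}

// The statement by itself is harmless when no debugger is attached:
function stillWorks() {
  debugger; // no-op without DevTools open
  return 'page keeps running';
}
```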

2: Edit And Resend Network Requests

When working on your server-side logic or API, it may be useful to send a request over and over again without having to reload the entire client-side webpage and interact with it each time. Sometimes you just need to tweak a couple of request parameters to test something.

One of the easiest ways to do this is by using Edge’s Network Console tool or Firefox’s Edit and Resend feature of the Network tool. Both of them allow you to start from an existing request, modify it, and resend it.

In Firefox:

  • Open the Network tool.
  • Right-click the network request you want to edit and then click Edit and Resend.
  • A new sidebar panel opens up, which lets you change things like the URL, the method, the request parameters, and even the body.
  • Change anything you need and click Send.

In Edge:

  • First, enable the Network Console tool by going into the Settings panel (press F1) → Experiments → Enable Network Console.
  • Then, in the Network tool, find the request you want to edit, right-click it and then click Edit and Resend.
  • The Network Console tool appears, which lets you change the request just like in Firefox.
  • Make the changes you need, and then click Send.

Note: To learn more, see “Edit and resend faulty network requests to debug them.”

If you need to resend a request without editing it first, you can do so too. (See: Replay a XHR request)
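You can also do something similar in script form, which is handy for automating a tweak-and-resend loop. This is a rough sketch assuming a browser Console or Node 18+ (the URL is a made-up example); in Chromium browsers, right-clicking a request and choosing Copy → Copy as fetch gives you a ready-made starting point:

```javascript
// A captured request you want to tweak:
const original = new Request('https://api.example.com/items?page=1', {
  method: 'GET',
  headers: { Accept: 'application/json' },
});

// Change a query parameter without retyping the whole request.
const url = new URL(original.url);
url.searchParams.set('page', '2');

// Rebuild the request with the edited URL, keeping method and headers.
const edited = new Request(url, {
  method: original.method,
  headers: original.headers,
});

// Then resend it: const response = await fetch(edited);
```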

And the honor of being the Number One most popular DevTools tip in this roundup goes to… 🥁

1: Simulate Devices

This is, by far, the most widely viewed DevTools tip on my website. I’m not sure why exactly, but I have theories:

  • Cross-browser and cross-device testing remain, to this day, one of the most important pain points that web developers face, and it’s nice to be able to simulate other devices from the comfort of your development browser.
  • People might be using it to achieve non-dev tasks. For example, people use it to post photos on Instagram from their laptops or desktop computers!

It’s important to realize, though, that DevTools can’t simulate what your website will look like on another device. Underneath, it’s still the same browser rendering engine. So, for example, when you simulate an iPhone by using Firefox’s Responsive Design Mode, the page still gets rendered by Firefox’s rendering engine, Gecko, rather than Safari’s rendering engine, WebKit.

Always test on actual browsers and actual devices if you don’t want your users to stumble upon bugs you could have caught.

That being said, simulating devices in DevTools is very useful for testing how a layout works at different screen sizes and device pixel ratios. You can even use it to simulate touch inputs and different user agent strings.

Here are the easiest ways to simulate devices per browser:

  • In Safari, press Ctrl+Cmd+R, or click Develop in the menu bar and then click Enter Responsive Design Mode.
  • In Firefox, press Ctrl+Shift+M (or Cmd+Shift+M), or use the browser menu → More toolsResponsive design mode.
  • In Chrome or Edge, open DevTools first, then press Ctrl+Shift+M (or Cmd+Shift+M), or click the Device Toolbar icon.

Note: To learn more, see “Simulate different devices and screen sizes.”
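As a quick sanity check of what simulation actually changes (reported values, not the engine), you can summarize what the environment claims about itself. A small illustrative helper; in the Console, you would call it with the real window and navigator objects, while here it just reads plain objects so it stays testable anywhere:

```javascript
// Summarize what an environment reports about itself.
function describeEnvironment(win, nav) {
  return {
    userAgent: nav.userAgent,                      // changes under simulation
    viewport: `${win.innerWidth}x${win.innerHeight}`,
    devicePixelRatio: win.devicePixelRatio,
    touch: nav.maxTouchPoints > 0,                 // touch simulation flag
  };
}

// In DevTools: console.table(describeEnvironment(window, navigator));
```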

Finally, if you find yourself simulating screen sizes often, you might be interested in Polypane, a development browser that lets you simulate multiple synchronized viewports, so you can see how your website renders at different sizes at a glance.

Polypane comes with its own set of unique features, which you can also find on DevTools Tips.

Conclusion

I’m hoping you can see now that DevTools is very versatile and can be used to achieve as many tasks as your imagination allows. Whatever your debugging use case is, there’s probably a tool that’s right for the job. And if there isn’t, you may be able to find out what you need to know by running JavaScript in the Console!

If you’ve discovered cool little tips that come in handy in specific situations, please share them in the comments section, as they may be very useful to others too.

Further Reading on Smashing Magazine

Testing Sites And Apps With Blind Users: A Cheat Sheet

This article focuses on the users of screen readers — special software that converts the source code of a site or app into speech. Usually, these are people with blindness or low vision, but not exclusively. They’ll help you discover most accessibility issues. Of course, the topic is too vast for a single article, but this might help you get started.

Table Of Contents Part 1. What Is Accessibility Testing?

1.1. Testing vs. Audit

There are many ways of evaluating the accessibility of a digital product, but let’s start with distinguishing two major approaches.

Auditing is an element-by-element comparison of a site or app against a list of accessibility requirements, be it a universal standard (WCAG) or a country-specific law (like ADA in the U.S. or AODA in Ontario, Canada). There are two ways to do an audit:

  1. Automated audit
    Checking accessibility by means of web apps, plugins for design and coding software, or browser extensions (for example, axe DevTools, ARC Toolkit, WAVE, Stark, and others). These tools generate a report with issues and recommendations.
  2. Expert audit
    Evaluation of web accessibility by a professional who knows the requirements. This person may use assistive technology and may have a disability themselves, but they are still an expert with advanced knowledge, not a “common user.” You get a report here too, but it’s more contextual and nuanced.

Testing, unlike auditing, cannot be done by one person. It involves users of assistive technologies and comprises a set of one-on-one sessions facilitated by a designer, UX researcher, or another professional.

Today we’ll focus on testing as an undervalued yet powerful method.

1.2. Usability vs. Accessibility Testing

You might have already heard about usability testing or even tried it. No wonder it’s the top research method among designers. So how is it different from its accessibility counterpart?

Common features:

  • Script
    In both cases, a facilitator prepares a full written script with an introduction, questions, and tasks based on a realistic scenario (for example, buying a ticket or ordering a taxi). By the way, here are handy testing script templates.
  • Insights gathering
    Despite its main focus, accessibility testing also reveals lots of usability issues; simply put, whether a site or app is easy to use. In both cases, a facilitator should ask follow-up questions to get an insight into people’s way of thinking, pain points, and needs.
  • Format
    Both testing types can be organized online or offline. Usually, one session takes from 30 minutes to 1 hour.

Key differences:

  • Participant selection
    People for usability testing are recruited mainly by demographic characteristics: job title, gender, country, professional experience, etc. When you test accessibility, you take into account the senses and assistive technologies involved in using a product.
  • What you can test
    In usability testing, you can test a live product, an interactive prototype (made in Figma, Protopie, Framer, etc.), or even a static mockup. Accessibility testing, in most cases, requires a live product; prototyping tools cannot deliver source code compatible with assistive technology. Figma has attempted to make prototypes accessible, but it’s still far from perfect.
  • Giving hints
    When participants get stuck in the flow, you help them find the way out. But when you involve people with disabilities, you have to understand how their assistive gear works. Just to give you an example, a phrase like “Click on the red cross icon in the corner” will sound silly to a blind user.

1.3. Why Opt For Testing?

Now that you know the difference between an audit and testing and the distinction between usability and accessibility testing, let’s clarify why testing is so powerful. There are two reasons:

  1. Get valuable insights.
    The idea of testing is to learn how you can improve the product. While you won’t check all interface elements and edge cases, such sessions show if the whole flow works and if people can reach the goal. Unlike even the most comprehensive audits, testing is much closer to reality and based on the usage of real assistive technology by a person with a disability.
  2. Build empathy through storytelling.
    A good story is more compelling than bare numbers. Besides, it can serve as a helpful addition to such popular pro-accessibility arguments as legal risks, winning new customers, or brand impact. Even 1–2 thorough sessions can give you enough material for a vivid story to excite the team about accessibility. An audit report alone may not be as thrilling to read.

Testing gives you more realistic insights into common scenarios. Laws and standards aren’t perfect, and formal compliance might not cover all the user challenges. Sometimes people take not the “designed” path to the goal but the one that seems safer or more intuitive, and testing reveals it.

Of course, auditing is still a powerful method; however, its combination with testing will show much more accurate results. Now, let’s talk about accessibility testing in detail.

Part 2. Recruiting Users

There are many types of disabilities and, consequently, various assistive technologies that help people browse the web. Without a deep dive into theory, let’s just recap the variety of disabilities:

  • Depending on the senses involved or the affected area of life: visual (blindness, color deficiency, low vision), physical (cerebral palsy, amputation, arthritis), cognitive (dyslexia, Down syndrome, autism), auditory (deafness, hearing loss), and so on.
  • By severity: permanent (for example, an amputated leg or some innate condition), temporary (a broken arm or, let’s say, blurred vision right after using eye drops), and situational (for instance, a noisy room or carrying a child).

Note: You can find more information on various types of disabilities on the Microsoft Inclusive Design hub.

For the sake of simplicity, we’ll focus on the case applicable to most digital products: when a site or app mostly relies on vision. In this case, visual assistive technologies offer users an alternative way to work with content online. The most common technologies are:

  • Screen readers: software that converts text into speech and has numerous handy shortcuts to navigate efficiently. (We’ll talk about it in detail in the next chapters.)
  • Refreshable Braille displays: devices able to show a line of tactile Braille text. Round-tipped pins are raised through holes in a surface and refresh as a user moves their cursor on the screen. Such displays are vital for deaf-blind people.
  • Virtual assistants (Amazon Alexa, Apple Siri, Google Assistant, and others): an excellent example of universal design that serves the needs of both people with disabilities and non-disabled people. Assistants interpret human speech and respond via synthesized voices.
  • High-contrast displays or special modes: for people with low vision. Some users combine a high-contrast mode with a screen reader.

2.1. Who To Involve

Debates around the optimal number of testing participants are never-ending. But we are talking here about a particular case — organizing accessibility testing for the first time, hence the following recommendation:

  • Invite 3–6 users with blindness and low vision who either browse the web by means of screen readers or use a special mode (for example, extra zoom or increased contrast).
  • If your product has rich data visualization (charts, graphs, dashboards, or maps), involve several people with color blindness.

In any case, it’s better to conduct even one or two high-quality sessions than a dozen poorly prepared ones.

2.2. Where To Find People

It is not as hard to find people for testing as it seems at first glance. If you are working on a mass product for thousands of users, participants won’t need any special knowledge apart from proficiency with their assistive technology. Here are three sources we recommend checking:

  • Specialized platforms for recruiting users according to your parameters (for example, Access Works or UserTesting). This method is the fastest but not the cheapest one because platforms take their commission on top of user compensation.
  • Social media communities of people with disabilities. Try searching by keywords like “people with disabilities,” “PWD,” “support group,” “visually impaired,” “partially sighted,” or “blind people.” Ask the admins’ permission before posting your research announcement so that it doesn’t get rejected.
  • Social enterprises and non-profits that work in the area of inclusion, employment, and support for people with disabilities (for example, Inclusive IT in Ukraine or The Federation of the Blind and Partially Sighted in Germany). Drop them an email with your request.

We realize the last two options might sound like a way to get participants for free, but keep in mind that not everyone has the opportunity to volunteer.

When we organized accessibility testing sessions last year, three people agreed to take part pro bono because it was a university course, and we didn’t make any profit from it. Otherwise, be ready to compensate participants for their time (in my experience, around €15–30). It can be an Amazon gift card or a coupon for something useful in a particular country (just ensure it’s accessible).

Digital product companies that test accessibility regularly hire people with disabilities so that they have access to in-progress software and can check it iteratively before the official launch.

Part 3. Preparing For The Session

Now that you’ve recruited participants, it’s time to discuss things to prepare before the sessions. And the first question is:

3.1. Online Or Offline?

There are basically two ways to conduct testing sessions: remotely or face-to-face. While we usually prefer the first one, both techniques have pros and cons, so let’s talk about them.

Benefits of online:

  • Native environment.
    Participants can use familiar home equipment, like a desktop computer or laptop, with nicely tuned assistive technology (plugins, modes, settings).
  • Cost and time efficiency.
    No need to reimburse expenses for traveling to your office. It might be quite costly if a participant arrives with an accompanying person or needs special accessible transport.
  • Easier recruitment.
    It’s more likely you’ll find a participant that meets your criteria around the world instead of searching in your city (and again, zero travel expenses).

Benefits of offline:

  • Testing products in development.
    If you have a product that isn’t public yet, participants won’t be able to easily install it or open it in a browser. So, you’ll have to invite participants to your office, where they will probably need to bring a portable version of their assistive technology (for example, on a USB drive).
  • Testing mobile apps.
    If a person brings a personal phone, you’ll see not only the interaction with your product but also how the device is set up and what gestures and shortcuts a person uses.
  • Helping inexperienced users.
    Using assistive technology is a skill, and you may involve someone who is not yet proficient with it. So, the offline setting is more convenient when participants get stuck and you help them find the way out.

As you can see, online testing has more universal advantages, whereas the offline format rather suits niche cases.

3.2. Communication Tools

Once you decide to test online, a logical question is what tool to choose for the session. Basically, there are two options:

Specialized testing tools (for instance, UserTesting, Lookback, UserZoom, Hotjar, Useberry):

  • Apart from basic conferencing functionality, they support advanced note-taking, automatic transcription, click heatmaps, dashboards with testing results, and other features.
  • They are quite costly. Besides, trial versions may be too limited for even a single real session.
  • Participants may get stuck with an unfamiliar tool that they’ve never used before.

Popular video conferencing tools (for example, Google Meet, Zoom, Microsoft Teams, Skype, Webex):

  • Support all the minimally required functionality, such as video calls, screen-sharing, and call recording.
  • They are usually free.
  • There is a high chance that participants know how to use them. (Note: even in this case, people may still experience trouble launching screen-sharing).

Since we are talking about your first accessibility testing, it’s much safer and easier to use a good old video conferencing tool, namely one that your participants have experience with. For example, when we organized educational testing sessions for the Ukrainian Catholic University, we used Skype, and at the HTW University in Berlin, we chose Zoom.

Regardless of the tool choice, learn in advance how screen-sharing works in it. You’ll likely need to explain it to some of the participants using suitable (non-visual) language. As a result, the intro to accessibility testing sessions may take longer compared to usability testing.

3.3. Tasks

As we figured out before, accessibility testing requires a working piece of software (let’s say, an alpha or beta version); it’s harder to build, but it opens vast research opportunities. Instead of asking a participant to imagine something, you can actually observe them ordering a pizza, booking a ticket, or filling in a web form.

Recommendations for accessibility testing tasks aren’t much different from the ones in usability testing. Tasks should be real-life and formulated in a way people naturally think. Instead of referring to an interface (what button a person is supposed to click), you should describe a situation that could happen in reality.

Start a session with a mini-interview to learn about participants’ relevant experiences. For example, if you are going to test an air travel service, ask people if they travel frequently and what their desired destinations are. Based on these details, customize the tasks — booking a ticket to the place of the participant’s choice, not a generic location suggested by you.

Examples of realistic, broad tasks:

  • Testing a consumer product: bicycle online store.
    You want to buy a gift card for your colleague George who enjoys bikepacking. Choose the card value, customize other preferences, and select how George will receive the gift. (This task implies that you learned about a real George who likes cycling during a mini-interview.)
  • Testing a professional product: customer support tool.
    Your manager asked you to take a look at several critical issues that haven’t been answered for a week. Find those tickets and find out how to react to them. (This task implies that you invited a participant who worked as a customer support agent or in a similar role.)

Examples of leading UI-based tasks:

  • Consumer product
    “Open the main menu and find the ‘Other’ category. Choose a €50 gift card. In the ‘For whom’ input field enter ‘John Doe’… Select ‘Visa/Mastercard’ as a paying method…”
  • Professional product
    “Navigate to the dashboard. Choose the ‘Last week’ option in the ‘Status’ filter and look at the list of tickets. Apply the filter ‘Sort by date’ and tell me what the top-most item is…”

A testing session is 50% preparation and 50% human conversation. It’s not enough to give even a well-formulated task and silently wait.

An open-ended initial task reveals which path to the goal a participant finds most intuitive. When a person gets stuck, you can give hints, but they shouldn’t sound like “click the XYZ button”; instead, let them explore further. Something like the following:

— No worries. So, the search doesn’t give the expected result. What else can you do here?
— Hmm, I don’t know. Maybe filtering it somehow…
— OK, please try that.

3.4. Wording

Your communication style impacts participants’ way of thinking and the level of bias. Even a huge article won’t cover all the nitty-gritty, but here are several frequent mistakes.

Beware of the following:

  • Leading tasks: “Go to the ‘Dashboard’ section and find the frequency chart” or “Scroll to the bottom to see advanced options.”
    Such hints totally ruin the session, and you will never know how a person would act in reality.
  • Selling language: “Check our purchase in one click” or “Try the ‘Smart filtering’ feature.”
    It makes people feel as if they have to praise your product, not share what they really think.
  • Humorous tasks: “Create a profile for Johnny Cash” or, for example, “Request Christmas tree delivery to Lapland.”
    Jokes distract participants and decrease session realism.
  • IT terminology: “On the dashboard, find toggle switch” or “Go to the block with dropdowns and radio buttons.”
    It’s bad for two reasons: you may confuse people with words they don’t understand; it can be a sign that you give leading tasks and excessive UI hints.

Nielsen Norman Group also offers helpful further reading on this topic.

Part 4. Session Facilitation

As agreed before, your first accessibility testing session will probably involve a blind person or a person with low vision who uses a screen reader to browse the web. So, let’s cover the two main aspects you have to know before starting a session.

4.1. Screen Readers

A screen reader is an assistive software that transforms visual information (text and images) into speech. When a visually impaired person navigates through a site or app using a keyboard or touchscreen, the software “reads” the text and other elements out loud.

Screen readers rely on the source code but interpret it in a special way. They skip code responsible for visual effects (like colors or fonts) and take into account meaningful parts, such as heading tags, text descriptions for pictures, and labels of interactive elements (whether it’s a button, an input field, or a checkbox). The better the code is written, the easier it is for users to comprehend the content.
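To make this concrete, here is a small hypothetical fragment showing the kinds of meaningful parts a screen reader announces:

```html
<h2>Order summary</h2>                           <!-- heading: used for quick navigation -->
<img src="chart.png" alt="Sales grew 12% in Q3"> <!-- text alternative for the image -->
<label for="email">Email address</label>         <!-- label announced together with the field -->
<input id="email" type="email">
<button type="button">Save draft</button>        <!-- role "button" plus an accessible name -->
```

If the alt text, label, or button name were missing, a screen reader user would hear only something generic like “image” or “button”, with no clue what it does.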

Now that you know how screen readers function, it’s time to experience them firsthand. Depending on the operating system, you’ll have a standard embedded screen reader already available on your device:

  • VoiceOver: Mac and iOS;
  • Narrator: Windows;
  • TalkBack: Android.

During one of our training courses, we learned from blind users that the screen reader on iPhone is more comfortable and flexible than the Android one. Interestingly, people don’t like the standard desktop screen readers on either Mac or Windows and usually install one of the advanced third-party readers, for instance:

  • JAWS (Job Access With Speech): Windows, paid, the most popular screen reader worldwide;
  • NVDA (Non-Visual Desktop Access): Windows, free of charge.

4.2. Navigation

Visually impaired people usually navigate apps and sites using a keyboard or touchscreen. And while sighted people scan a page and jump from one part to another, screen reader users can keep only one element in focus at a time, be it a paragraph of text or, let’s say, an input field.

Participants of your accessibility testing will likely run into an unpassable obstacle at some point in the session, and you’ll give them hints on how to find the way out and proceed with the next task. In this case, you’ll need a special non-visual language that makes sense.

Not helpful hints:

  • “Click the cross icon in the upper right corner.”
  • “Scroll to the bottom of the modal window and find the button there.”
  • “Look at the table in the center of the page.”

Helpful hints:

  • “Please, navigate to the next/previous item.”
  • “Go to the second element in the list.”
  • “Select the last heading/link/button.”

Note: UI hints above are suggested for cases when a user is completely stuck in the flow and cannot proceed, for example, when an element is not navigable via a keyboard or, let’s say, an interactive element doesn’t have a proper label or name.

Summary

Once all the testing sessions have been completed, you can analyze the collected feedback, determine priorities, and develop an action plan. This process could be the subject of a separate guideline, but let’s cover the three key principles right away:

  • Capturing information
    Testing produces tons of data, so you should be prepared to capture it; otherwise, it will be lost or obscured by your imperfect human memory. Don’t rely on a recording alone. Take notes during the process or ask an assistant to do that. Notes are easier to analyze and make it easier to spot repeating observations across sessions. Besides, they ensure you’ll have data if the recording fails.
  • Raw data vs. insights
    Not everything you observe in testing sessions should be perceived as a call to action. Raw data shows what happened, while insights explain reasons, motivations, and ways of thinking. For example, you see that people use search instead of filters, but the insight may be that typing a search request needs less effort than going through the filter menu.
  • Criticality and impact
    Not all observations are significant. If five users struggle to proceed because the shopping cart isn’t keyboard-navigable, it’s a major barrier both for them and the business. But if one out of five participants didn’t like the button name, it isn’t critical. Take into account the following:
    • How many participants encountered a problem;
    • How much a problem impacts reaching the goal: booking a ticket, ordering pizza, or sending a document.
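A purely illustrative (hypothetical) sketch of this weighting: rank observations by how many participants hit them and how badly they block the goal. The scale and weights here are made up; the point is that frequency and impact multiply:

```javascript
// Rank observations: participants affected × impact on reaching the goal.
function prioritize(observations) {
  return [...observations].sort(
    (a, b) => b.participants * b.impact - a.participants * a.impact
  );
}

const ranked = prioritize([
  { issue: 'Shopping cart not keyboard-navigable', participants: 5, impact: 3 },
  { issue: 'Disliked button name', participants: 1, impact: 1 },
]);
// The keyboard-navigation barrier (score 15) outranks the naming nitpick (score 1).
```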

Once the information has been collected and processed, it is essential to share it with the team: designers, engineers, product managers, quality assurance folks, and so on. The more interactive this sharing is, the better. Let people participate in the discussion, ask questions, and see what the findings mean for their area of responsibility.

As you gain more experience in conducting testing sessions, invite team members to watch the live stream (for instance, via Google Meet) or broadcast the session to a meeting room with observers, but make sure they stay silent and don’t intrude.

Further Reading

Designing A Better Design Handoff File In Figma

Creating an effective handoff process from design to development is a critical step in any product development cycle. However, as any designer knows, it can be a nerve-wracking experience to send your carefully crafted design off to the dev team. It’s like waiting for a cake to bake — you can’t help but wonder how it will evolve in the oven and how it will taste when you take it out of the oven.

The relationship between designers and developers has always been a little rocky. Despite tools like Figma’s Inspect feature (which allows developers to inspect designs and potentially convert them to code in a more streamlined way), there are still many barriers between the two roles. Often, design details are hidden within even more detailed parts, making it difficult for developers to accurately interpret the designer’s intentions.

For instance, when designing an image, a designer might import an image, adjust its style, and call it done. More sophisticated designers might also wrap the image in a frame or auto layout so it better matches how developers will later convert it to code. But even then, many details could still be missing. The main problem here is that designers typically create their designs within a finite workspace (a frame with a specific width). In reality, however, the design elements will need to adapt to a variety of different environments, such as varying device sizes, window widths, screen resolutions, and other factors that can influence how the design is displayed. Therefore, developers will always come back with the following questions:

  • What should be the minimum/maximum width/height of the image?
  • What is its content style?
  • What effects need to be added?

In reality, these are exactly the details that need to be addressed.
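One way to answer those questions up front is to spell out the constraints directly in the handoff file, in the terms developers will implement them in. The class name and values below are illustrative, not from any real project:

```css
/* Hypothetical answers to the developers' questions above,
   written out as the CSS the design implies: */
.hero-image {
  width: 100%;          /* fluid within its container */
  min-width: 320px;     /* smallest supported viewport */
  max-width: 1200px;    /* cap on large screens */
  height: auto;
  aspect-ratio: 16 / 9; /* keep proportions as the width changes */
  object-fit: cover;    /* crop rather than distort the content */
  border-radius: 8px;   /* example of an effect worth documenting */
}
```

Even a short sketch like this removes a whole round of back-and-forth, because the minimum/maximum sizes, content style, and effects are no longer implied by a fixed-width frame.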

Designers, let’s face the truth: there’s no perfect handoff.

Every developer works, thinks, and writes code differently, which means there is no such thing as the ideal handoff document. Instead, our focus should be on creating a non-perfect but still effective and usable handoff process.

In this article, we will explore how to create a design handoff document that attempts to strike the right balance between providing developers with the information they need while still allowing them the flexibility to bring the design to life in their own way.

How Can The Handoff Files Be Improved?

1. Talk To Developers More Often

Design is often marked as complete once the design handoff file is created and the developers start transforming it into code. However, in reality, the design is only complete when the user finds the experience pleasant.

Therefore, crafting the design handoff file and having the developer help bring your design to the user is essentially another case study on top of the one you have already worked on. To make it perfect, just as you would talk to users, you also need to communicate with engineers — to better understand their needs, how they read your file, and perhaps even teach them a few key things about using Figma (if Figma is your primary design tool).

Here are a few tips you can teach your developers to make their lives easier when working with Figma:

Show Developers The Superpower Of The Inspect Panel

Figma’s Inspect feature allows developers to see the precise design style that you’ve used, which can greatly simplify the development process. Additionally, if you have a design library in place, Inspect will display the name of each component and style that you’ve used. This can be incredibly helpful for developers, especially if they’re working with a style guide, as they can use the component or style directly to match your design with ease.

In addition to encouraging developers to take advantage of the Inspect panel, it’s sometimes helpful to review your own design in a read-only view. This allows you to see precisely what the developers will see in the Inspect panel and ensures that components are named accurately, colors are properly linked to the design system, and other vital details are structured correctly.

Share With Developers The Way To Export Images/Icons

Handling image assets, including icons, illustrations, and images, is also an essential part of the handoff process, as the wrong format might result in a poor presentation in the production environment.

Be sure to align with your developers on how they would like you to handle icons and images. They might prefer that you export all images and icons in a single ZIP file and share it with them, or they might prefer to export the assets themselves. If it’s the latter, explain the correct export settings in detail so that they can handle the export process on their own!

Encourage Them To Use Figma’s Commenting Feature

It’s common for developers to have questions about the design during the handoff process. To make it easier for everyone involved, consider teaching them to leave comments directly in Figma instead of sending you a message. This way, the comments are visible to everyone and provide context for the issue at hand. Additional features, such as comment reactions and the “mark as resolved” button, further enable better interaction between team members and help everyone keep track of whether an issue has been addressed or not.

Leverage Cursor Chat

If you and the developers are both working within the same Figma file, you can also make use of the cursor chat feature to clarify any questions or issues that arise. This can be a fun and useful way to collaborate and ensure that everyone is on the same page.

Use Figma Audio Chat

If you need to discuss a complex issue in more detail, consider using Figma’s audio chat feature. This can be a quick and efficient way to clarify any questions or concerns arising during the development process.

It’s important to keep in mind that effective collaboration relies on good communication. Therefore, it’s crucial to talk to your developers regularly and understand their approach to reading and interpreting your designs, especially when you first start working with them. This sets the foundation for a productive and successful partnership.

2. Documenting Design Decisions For You And Developers

Let’s be honest: building our design portfolios often takes so long because we don’t document design decisions along the way, and so we have to reconstruct the case studies later, doing our best to track down the design files and everything else we need.

I find it useful to document my decisions in Figma: not only the designs themselves but, where appropriate, also competitor analysis, problem statements, and user journeys, and to link these pages from the handoff file as well. The developers might not read them, but I often hear from the developers on my team that they appreciate being able to dig into what the designers were thinking while working on the design, and they can pick up product-building tips from us as well.

3. Don’t Just Leave The Design There. Add The Details

When it comes to design, details matter — just leaving the design “as is” won’t cut it. Adding details not only helps developers better understand the design, but it can also make their lives easier. Here are some tips for adding those crucial design details to your handoff.

Number The Frame/Flow If Possible

I really like the Figma handoff template that Luis Ouriach (Medium, Twitter) created. The numbering and title pattern makes it easy for developers to understand which screen belongs to which flow immediately. However, it can be complicated to update the design later as the numbering and title need to be manually updated.

Note: While there are plugins available (like, for example, Renamed), which can help with renaming multiple frames and layers all at once, this workflow can still be inconvenient when dealing with more complicated naming patterns. For instance, updating “1. Welcome → 2. Onboarding → 3. Homepage” into “1. Welcome → 2. Onboarding → 3. Sign up → 4. Homepage” can become quite a hassle. Therefore, one alternative approach is to break down the screens into different tickets or user journeys and assign a number that matches each ticket/user journey.

Name The Layers If Possible

We talked about numbering/naming the frames, but naming the layers is equally important! Imagine trying to navigate a Figma file cluttered with layers and labels like “Frame 3123,” “Rectangle 8,” and “Circle 35.” This can be confusing and time-consuming for both designers and developers, as they need to sift through numerous unnamed layers to identify the correct one.

Well-named layers facilitate better collaboration, as team members can quickly locate and comprehend the purpose of each element. This also helps ensure consistency and accuracy when translating designs into code.

If you search around in Figma, you will find a number of plugins that can help you with naming the layers in a more systematic way.

Add The Details For Interaction: Make Use Of Figma’s Section Feature

This might seem trivial, but I consider it important. Design details shouldn’t be something like “This design does X, and if you press that, it will do Y.” Instead, it’s crucial to include details like the hover state, initial state, max width/height, and the outcome of different use cases.

For this reason, I appreciate the new section feature that Figma has released. It allows me to keep the full design at the top so that developers can see everything at once and then look into the sections below for the design and interaction details.

Make Use Of The Interactive Prototype And FigJam Features To Show The User Flow

Additionally, try to share with the developers how the design screens connect to one another. You can use the interactive prototype feature within Figma to connect the screens and make them move so that developers can understand the logic. Alternatively, you can use FigJam to connect the screens, allowing developers to see how everything is connected at a glance.

4. The Secret Weapon Is Adding Loom Video

Loom video is a lifesaver for us. You only need to record it once, and then you can share it with anyone interested in the details of your design. Therefore, I highly recommend making use of Loom! For every design handoff file, I always record a video to walk through the design. For more complicated designs, I will record a separate video specifically describing the details so that I don’t need to waste other people’s time if they’re not interested.

To attach the Loom video, I use the Loom plugin and place the video right beside the handoff file. Developers can replay it as many times as needed without disturbing you with more questions.

→ Download the Loom Embed Figma plugin

5. The Biggest Fear: Version Control

In an ideal world, the design would be completely finalized before developers start coding. But in reality, design is always subject to adjustments, even after development has already begun. That’s why version control is such an important topic.

Although Figma has a branching feature for enterprise customers to create new designs in a separate branch, I find it helpful to keep a few extra things in your design file.

Have A Single Source Of Truth

Always ensure that the developer handoff file you share with your team is the single source of truth for the latest design. If you make any changes, update the file directly, and keep the original as a duplicate for reference. This will prevent confusion and avoid pointing developers to different pages in Figma.

If you have access to the branching feature in Figma, it can be highly beneficial to utilize it to further streamline your workflow. When I need to update a handoff file that I have already shared with the developers, my typical process is to create a new branch in Figma first. Then I update the developer handoff file in that branch, send it to the relevant stakeholders for review, and finally merge it back into the original developer handoff file once everything is confirmed. This ensures that the link to the developer handoff file remains unchanged for the developers.

Changelogs/Future Plan

Include a changelog in the handoff file to help developers understand the latest changes made to the design.

Similarly to changelogs, if you already know of future plans to adjust the design, write them down somewhere in Figma so that the developers can understand what changes are to be expected.

6. Make Use Of Plugins

There are also a number of plugins to help you with creating your handoff:

  • EightShapes Specs
    EightShapes Specs creates specs for your design automatically with just one click.
    → Download the EightShapes Spec Figma plugin
  • Autoflow
    Autoflow allows you to connect the screens visually without using FigJam.
    → Download the Autoflow Figma plugin
  • Style Organizer
    Style Organizer lets you make sure all of your styles are linked to a component/style so that developers never have to read raw hex codes.
    → Download the Style Organizer Figma plugin

7. The Ultimate Goal Is To Have A Design System

If you want to take things a step or two further, consider pushing your team to adopt a design system. This will enable the designs created in Figma to be more closely aligned with what developers expect in the code. You can match token names and name your layers/frames to align with how developers name their containers and match them in your design system.

Here are some of the benefits of using a design system:

  • Consistency
    A design system ensures a unified visual language across different platforms, resulting in a more consistent user experience.
  • Efficiency
    With a design system in place, designers and developers can reuse components and patterns, reducing the time spent on creating and updating individual elements.
  • Collaboration
    A design system facilitates better communication between designers and developers by establishing a shared language and understanding of components and their usage.

Note: If you would like to dig deeper into the topic of design systems, I recommend reading some of the Smashing Magazine articles on this topic.

Conclusion: Keep Improving The Non-perfect

Ultimately, as I mentioned at the beginning, there’s no one-size-fits-all approach to developer handoff, as it depends on various factors such as product design and the engineers we work with. However, what we can do is work closely with our engineers, communicate with them regularly, and collaborate to find solutions that make everyone’s lives easier. Just like our designs, the key to successful developer handoff is prioritizing good communication and collaboration.

Further Reading

  • “Design Handoffs,” Interactive Design Foundation
    Design handoff is the process of handing over a finished design for implementation. It involves transferring a designer’s intent, knowledge, and specifications for a design and can include visual elements, user flows, interaction, animation, copy, responsive breakpoints, accessibility, and data validations.
  • “A Comprehensive Guide to Executing The Perfect Design-to-Development Handoff,” Phase Mag
  • “Design Handoff 101: How to handoff designs to developers,” Zeplin Blog
    Before we had tools like Figma, design handoff was a file-sharing nightmare for designers. When UI designs were ready for developers to start building, nothing could begin until designers manually added redlines to their latest local design file, saved it as a locked Sketch or Photoshop file or a PDF, and made sure developers were working on the correct file after every update. But those design tools completely changed the way teams collaborate around UI design — including the way design handoff happens. We’ve seen this in Zeplin’s own design handoff workflow and among thousands of our users, as those same top design tools help designers generate specs and share designs by simply sending a link.
  • “How to communicate design to developers (checklist),” Nick Babich
  • “A Front-End Developer’s Ode To Specifications,” Dmitriy Fabrikant, Smashing Magazine
    In the physical world, no one builds anything without detailed blueprints because people’s lives are on the line. In the digital world, the stakes just aren’t as high. It’s called “software” for a reason: when it hits you in the face, it doesn’t hurt as much. But, while the users’ lives might not be on the line, design blueprints (also called design specifications or specs) could mean the difference between a correctly implemented design that improves the user experience and satisfies customers and a confusing and inconsistent design that corrupts the user experience and displeases customers. (Editor’s Note: Before tools like Figma were on the rise, it was even more difficult for designers and developers to communicate and so tools such as Specctr — which this article mentions — were much needed. As of today, this article from 2014 is a bit of a trip into history, but it will also give you a fairly good idea of what design blueprints are and why they are so important in the designer-developer handoff process.)
  • “Everything Developers Need To Know About Figma,” Jurn van Wissen, Smashing Magazine
    Unlike most design software, Figma is free and browser-based, so developers can easily access the full design files making the developer handoff process significantly smoother. This article teaches developers who have nothing but a basic understanding of design tools everything they need to know to work with Figma.
  • “Penpot, An Open-Source Design Platform Made For Designers And Developers Alike,” Mikołaj Dobrucki, Smashing Magazine
    In the ever-evolving design tools landscape, it can be difficult to keep up with the latest and greatest. In this article, we’ll take a closer look at Penpot, the first design and prototyping tool that’s fully open-source and based on open web standards, making it an ideal choice for both designers and developers. (Editor’s Note: Today, it’s not always “There’s only Figma.” There are alternatives, and this article takes a good look at one of them — Penpot.)
  • “The Best Handoff Is No Handoff,” Vitaly Friedman, Smashing Magazine
    Design handoffs are inefficient and painful. They cause frustration, friction, and a lot of back and forth. Can we avoid them altogether? Of course, we can! Let’s see how to do just that.

JavaFX Gets Video Capabilities

Get ready for high-quality video on the screens of your life. Sun has entered into a multi-year agreement with On2 Technologies to provide immersive media and content for your JavaFX applications.

"The JavaFX runtime environment is designed from the ground up to support high fidelity media, empowering content authors to deliver media-rich content and applications across all the screens of your life. On2 shares Sun's vision of driving video convergence across desktops and mobile devices and we look forward to working with On2 to deliver this capability as part of the JavaFX family of products," said Rich Green, executive vice president, Software at Sun.

RIAs written in JavaFX will be able to use the On2 video codecs from Fall 2008, at the same time as the 1.0 release of JavaFX Desktop (an early access release is expected in July). We’ll need to wait until Spring 2009 for JavaFX Mobile and JavaFX TV. The same high-resolution video will run across all of these platforms.

Designing Sticky Menus: UX Guidelines

We often rely on sticky headers to direct users’ attention to critical features or calls to action. Think of sidebar navigation, CTAs, sticky headers and footers, “fixed” rows or columns in tables, and floating buttons. We’ve already looked into mobile navigation patterns in Smart Interface Design Patterns, but sticky menus deserve a closer look.

As users scroll, a sticky menu always stays in sight. And typically, it’s considered to be a good feature, especially if the menus are frequently used and especially if we want to speed up navigation.

However, sticky menus also come with a few disadvantages. In his recent article on Sticky Menus Are Problematic, And What To Do Instead, Adam Silver argues about some common usability issues of sticky menus — and how to solve them. Let’s take a closer look.

When Sticky Menus Are Useful

How do we decide if a menu should be sticky or not? This depends on the primary job of a page. If it’s designed to primarily convey information and we don’t expect a lot of navigation, then sticky menus aren’t very helpful.

However, if we expect users to navigate between different views on a page a lot and stay on the page while doing so — as it often is on long landing pages, product pages, and filters — then having access to navigation, A-Z or tabs can be very helpful.

Also, when users compare features in a data table, sticky headers help them verify that they always look at the right piece of data. That’s where sticky headers or columns can help and aid understanding. That’s why sticky bars are so frequently used in eCommerce, and in my experience, they improve the discoverability of content and speed of interaction.

Keep Sticky Headers Small, But Large Enough To Avoid Rage Taps

The downside of sticky menus is that they typically make it more difficult for users to explore the page as they obscure content. Full-width bars on mobile and desktop are common, but they need to be compact, especially on narrow screens. And they need to accommodate for accessible tap sizes to prevent rage taps and rage clicks.

Typically, that means we can’t have more than five items in the sticky bar navigation. The choice of items displayed in the sticky menu should be informed by the most important tasks users need to perform on the website. If you have more than five items, you might need to look into some sort of overflow menu, as demonstrated by Samsung.

Whenever users have to deal with forms on a page on mobile, consider replacing sticky menus with accordions. Virtual keyboards typically take up to 60% of the screen, and with a sticky bar in view, filling in a form quickly becomes nothing short of impossible.

Accessibility Issues of Sticky Menus

By their nature, sticky menus always live on top of the content and often cause accessibility issues. They break when users zoom in. They often block the content for keyboard users who tab through the content. They obscure links and other focusable elements. And there is often not enough contrast between the menu and the content area.

Whenever we implement a sticky menu, we need to make sure that focusable elements are still visible with a sticky menu in action. And this also goes for internal page anchors that need to account for the sticky bar with the scroll-padding property in CSS.
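A minimal sketch of that CSS fix might look like the following; the 64px offset is an assumed header height, not a value from the article:

```css
/* Assuming a sticky header roughly 64px tall: */
html {
  scroll-padding-top: 64px; /* in-page anchor jumps stop below the header */
}

.site-header {
  position: sticky;
  top: 0;
  z-index: 10; /* keep the menu above the content it overlaps */
}
```

Without `scroll-padding-top`, following an internal anchor scrolls the target heading underneath the sticky bar, hiding exactly the content the user asked for.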

Avoid Multiple Scrollbars Of Long Sticky Menus

When sticky menus become lengthy, the last items on the list become difficult to access. We could make them visible with some sort of an overflow menu, but often they appear as scrollable panes, causing multiple scroll bars.

Not only does this behavior cause discoverability issues, but it’s also often a cause for mistakes and repetitive actions on a page. Ideally, we would prevent it by keeping the number of items short, but often it’s not possible or can’t be managed properly.

A way out is to show the menu as an accordion instead in situations when the space is limited, especially on mobile devices. That’s what we do at Smashing Magazine in the checkout, with a button that reveals and hides the contents of the cart when needed.

Partially Persistent Menus

Because sticky menus often take up too much space, we could reveal them when needed and hide them when a user is focused on the content. That’s the idea behind partially persistent headers: as a user starts scrolling down, the menu disappears, but then any scrolling up prompts the menu to appear again.

The issue with this pattern is that sometimes users just want to jump back to a previous section of the page or double-check some details in a previous paragraph, and the menu often gets in the way. Page Laubheimer from NN/Group recommends using a slide-in animation that is roughly 300–400ms long and will preserve the natural feel without being distracting.
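The hide-on-scroll-down, show-on-scroll-up behavior above can be sketched in a few lines of JavaScript. The `.site-header`/`.is-hidden` class names and the 12px jitter threshold are assumptions for illustration; the slide itself would be the 300–400ms CSS transition mentioned above:

```javascript
// Decide what a partially persistent header should do, given the
// previous and current scroll positions. Small movements are ignored
// so the header doesn't flicker on tiny scroll jitter.
function headerAction(previousY, currentY, threshold = 12) {
  const delta = currentY - previousY;
  if (Math.abs(delta) < threshold) return "keep";
  return delta > 0 ? "hide" : "show"; // down hides, up reveals
}

// Browser wiring (guarded so the logic above stays testable anywhere):
if (typeof window !== "undefined") {
  let lastY = window.scrollY;
  window.addEventListener("scroll", () => {
    const action = headerAction(lastY, window.scrollY);
    if (action !== "keep") {
      document.querySelector(".site-header")
        ?.classList.toggle("is-hidden", action === "hide");
      lastY = window.scrollY;
    }
  }, { passive: true });
}
```

Keeping the decision in a pure function makes it easy to tune the threshold or invert the behavior without touching the event wiring.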

Alternatives To Sticky Menus

In some situations, we might not need a sticky menu after all. We can avoid their downsides with shorter pages, or with lengthy pages that repeat relevant calls to action or navigation within the page.

We could display a table of contents on the top of the page and bring the user’s attention to the table of contents with a back-to-top link at the bottom of the page.

Wrapping Up

Whenever the job of the page is to help users act, save, and compare, or we expect users to rely on navigation a lot, we might consider displaying sticky navigation. They are most harmful when there isn’t enough space anyway, as it often is with forms on mobile devices.

Sticky menus do come at a cost, as we need to account for usability and accessibility issues, especially for zooming, keyboard navigation, and anchor jumps. Add them if you need them, but be careful in plugging them in by default.

We need to prioritize what matters and remove what doesn’t. And too often, the focus should lie entirely on content and not navigation.

You can find more details on navigation UX in the video library on Smart Interface Design Patterns 🍣 — with a live UX training that’s coming up in September this year.

Further Resources

Of course, the techniques listed above barely scratch the surface. Here are wonderful articles around sticky headers, from design considerations to technical implementations:

Accessible Target Sizes Cheatsheet

Rage taps are annoying and frustrating. Those wonderful moments in our interfaces when we need to tap twice, or sometimes three times, to continue our journey. Of course, sometimes they happen because the website is too slow, but sometimes the target size of interactive elements is the culprit.

So how big should our interactive elements be these days? What would be a reliable size for icons, links and buttons — in navigation and on mobile? How do we make it more difficult for our users to make mistakes? Let’s take a look.

Note: You can find a whole video chapter on designing for touch in Smart Interface Design Patterns as well — along with 30 other chapters all around UX and design patterns.

Target Sizes Cheatsheet

One of the common recommendations for target sizes on mobile is 44×44px. This is a little misleading because device-independent pixels (dips) are scaled to a multiple of the display resolution. So pixels differ between screens, and when we talk about sizes, we should probably be speaking of dips rather than pixels.

Depending on where an element appears on the screen, it needs more or less padding. In general, we are very precise in our input in the center of the screen, but we are least precise on the edges of the screen (both on the top and at the bottom).

According to Steven Hoober’s research in his book Touch Design For Mobile Interfaces, to minimize rage taps, we need to aim for 11mm (or 31pt / 42px) at the top of the screen and 12mm (or 34pt / 46px) at the bottom of the screen. In the center, though, we could potentially go as low as 7mm (or 20pt / 27px). This includes both the width and the padding of an interactive element.

How do point units translate to CSS pixels or Android/iOS units? Fortunately, Steven Hoober provides a helpful conversion table to help you translate from points to px and em units, Android SPs or DPs, iOS points and Windows DIP or px.
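As a rough rule of thumb, CSS defines 1in = 96px and 1in = 25.4mm, so 1mm ≈ 3.78 CSS pixels, and you can approximate the conversion yourself. The helper below is a sketch, not part of any library, and its rounding differs by a pixel or so from Hoober’s published tables:

```javascript
// Convert a physical size in millimeters to CSS reference pixels,
// using the CSS definitions 1in = 96px and 1in = 25.4mm.
function mmToCssPx(mm) {
  return Math.round((mm * 96) / 25.4);
}

mmToCssPx(11); // ≈ 42px, recommended at the top of the screen
mmToCssPx(12); // ≈ 45px, at the bottom of the screen
mmToCssPx(7);  // ≈ 26px, in the center of the screen
```

Actual device pixels still vary with the display’s scaling factor, which is exactly why the published tables are worth consulting for platform-specific units.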

Not All Pixels Are The Same

As we’ve seen above, target sizes change depending on where components appear on the screen. It’s worth noting that according to the WCAG 2.1 AAA level requirements, all targets should measure at least 44 by 44px, except if the target is in a sentence or block of text. For such exceptions, we could be using 27px as a goal, but in general, the larger, the better.

For sticky menus at the top or bottom of the screen, we should probably aim for around 44–46px boxes, or preferably even more. However, for links that appear on the screen as the user scrolls down the page, we probably will be able to avoid most issues with smaller components.

This is also why we can probably place at most five items in the bottom tab bar on a mobile phone. Beyond that, we might need to use a bottom sheet that slides up from the bottom of the screen as an overlay.

Prefer An “Actions” Button To Single Icons For Data Tables

Complex tables typically have hover actions that appear once a user starts hovering over a particular row. They typically include everything from highlight and export to move and delete.

In testing, showing icons on hover produces too many mistakes: users often accidentally jump to the wrong row as they navigate horizontally toward the icons, and they also make mistakes by clicking on the wrong spot and having to start all over again.

To avoid rage clicks, it might be a good idea to test how well an “Actions” button or a split button would perform instead. Such a button could live on every row, open on tap/click, and wouldn’t close automatically. It might not be ideal for every use case, but it definitely gives users more sense of control when they need to take action on a row.

Provide An Assistant For Complex Manipulations

With complex manipulation, such as rotation of an image, or selection of a small part of a larger area, we often rely on pinch and zoom or zoom in/out buttons. These options, of course, work, but they easily become a bit tedious to use for very precise manipulations — especially if used for a while.

Instead, we can attach a little handle to allow users to move their selection within the object faster and with more precision. This is how Tylko allows users to customize their shelves on mobile. Zooming is supported as well, but it’s not necessary to select one of the areas.

When Multiple Taps Are Better Than One

But what do we do if some tap areas have to be small? Perhaps we can’t reserve 27×27px for each icon — for example, when we suggest a color selection in an eCommerce site? Well, in that case, one option to consider would be to prompt a “proper” selection of colors with one additional tap. This might be a bit slower in interaction, but way more accurate.

Fewer rage clicks: Grønland Color Picker Microinteraction, designed by Mykolas Puodžiūnas. (Large preview)

Always Maximize Clickable Area

Whenever possible, encapsulate the entire element, along with enough padding to ensure that you hit the magical 42–46px size to prevent rage taps for good. This typically means adding enough padding for icons but also preferring full-width or full-height bars for accordions and navigation.
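A common way to do this is an invisible hit area around a small icon button; the sizes below are illustrative:

```css
/* Extend the clickable area of a 24px icon button to ~44x44px
   without changing its visual size: */
.icon-button {
  position: relative;
  width: 24px;
  height: 24px;
}

.icon-button::before {
  content: "";
  position: absolute;
  inset: -10px; /* 24px + 2 * 10px = 44px hit area */
}

/* Accordions and navigation: make the whole row tappable */
.nav-item a {
  display: block;
  padding: 12px 16px;
}
```

Clicks on the pseudo-element land on the button itself, so the target grows while the icon stays visually small.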

Ahmad Shadeed presents a few useful examples of using spacing to increase clickable areas and prevent rage clicks. Any Lupe provides even more suggestions in her article on accessible target sizes.

Wrapping Up

When designing for touch today, we need to use at least 27×27px for small links or icons in the content area and at least 44×44px for icons at the top and at the bottom of the page.

Personally, I would always go up to 30×30px and 48×48px to make sure mistakes are really difficult to make. And, of course, always use the full width or full height for clickable areas. Hopefully, this will help us remove rage taps from our websites altogether, and many of your users will sincerely appreciate it.

You can find more details on navigation UX in the video library on Smart Interface Design Patterns 🍣 — with a live UX training that’s coming up in September this year.

Useful Resources

There are a few wonderful resources on accessible target sizes that might be helpful if you’d like to dive deeper into the topic:

How to Embed a YouTube Live Stream in WordPress

Do you want to embed a YouTube live stream on your WordPress website?

Embedding YouTube live streams on your WordPress site can increase engagement by allowing visitors to interact with you and your content in real time.

In this article, we will show you how to easily embed a YouTube live stream in WordPress.

Why Embed YouTube Live Streams in WordPress

Live streaming allows you to broadcast live video or audio content over the internet, enabling users to watch it in real time.

YouTube live stream

Embedding a YouTube live stream on your WordPress website is an excellent way to connect with your audience and reach more users.

It can also increase user engagement by allowing you to interact with site visitors.

A YouTube live stream can also help boost website SEO and attract more traffic. Research has shown that blog posts with at least one video get around 83% more visitors than content without any.

That being said, let’s see how you can easily embed a YouTube live stream in WordPress.

How to Embed a YouTube Live Stream in WordPress

The easiest way to embed a YouTube live stream is by using Smash Balloon’s YouTube Feed Pro plugin.

It is the best WordPress YouTube feed plugin that allows you to embed YouTube videos and live streams on your website.

First, you need to install and activate the YouTube Feed Pro plugin. For more instructions, please see our beginner’s guide on how to install a WordPress plugin.

Note: YouTube Feed also has a free version. However, it does not support the Live Stream feature.

Once the plugin has been activated, you need to visit the YouTube Feed » Settings page from the admin sidebar. Here, you need to enter the license key and click on the ‘Activate’ button.

You can get the license key from your Accounts page on the Smash Balloon website.

Activate your smash balloon license key

Next, you need to visit the YouTube Feeds » All Feeds page from the WordPress admin sidebar.

From here, simply click on the ‘Add New’ button at the top.

Click the Add New button to add the YouTube feed

This will open up the ‘Select Feed Type’ prompt.

Now, you need to choose the ‘Live Streams’ option and then click on the ‘Next’ button to continue.

Choose live stream as feed type

In the next step, you need to connect YouTube Feed Pro with your YouTube account. You will be asked to provide your YouTube API key.

If you already have an API key, simply copy and paste it into the ‘Enter API Key’ box and click on the ‘Add’ button.

Add YouTube API key

Create a YouTube API Key

If you don’t have an API key yet, then you need to go to the Google Cloud Console and sign in using your Google account.

Once you are logged in, click on the ‘Select a project’ button at the top.

Click Select Project button

This will open a popup window that will display all the projects that you have created.

Next, simply click on the ‘New Project’ button at the top.

Click the New Project button

This will take you to the ‘New Project’ page, where you can start by typing in a name for your project. This can be anything that will help you easily identify it.

Next, you can also select an ‘Organization’ and its location from the dropdown menu, or simply choose ‘No Organization.’ Then, click the ‘Create’ button to continue.

Choose a project name and its organization

Once the project has been created, you will be taken to the project dashboard.

From here, you need to click on the ‘+ Enable APIs And Services’ button in the top menu.

Click on the + ENABLE APIS AND SERVICES button

This will take you to the API Library page. It shows the different Google services that you can create APIs for and use in your projects.

Now go ahead and search for ‘YouTube Data API v3’ in the search box.

Search for the YouTube data API v3 option

Once the ‘YouTube Data API v3’ result shows up, just click on it.

This will take you to a new page where you need to click on the ‘Enable’ button to activate the YouTube API key.

Enable the YouTube API

You’ll now be taken to the ‘API/Service Details’ page.

From here, simply click on the ‘Create Credentials’ button at the top.

Click the Create Credentials button

Next, you’ll be directed to a new page where you must check the box next to the ‘Public Data’ option.

After that, click on the ‘Next’ button to create your API.

Check the Public data box and click on the Next button

Your API Key will now be created and displayed on the page.

Simply copy the API key and click on the ‘Done’ button.

Copy the YouTube API key

Next, it is time to head back to the WordPress dashboard.

Go ahead and paste the API key into the API Key Required box. Then, click on the ‘Add’ button to continue.

Add YouTube API key
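
Under the hood, the plugin uses this key to query the YouTube Data API v3 on your behalf. For the curious, a live-stream lookup request looks roughly like this; the endpoint and parameter names come from Google's public API, while the helper name and placeholder values are ours.

```javascript
// Build a YouTube Data API v3 search URL that returns a channel's
// currently live videos. Endpoint and parameters are from the public
// API; the channel ID and key passed in would be your own.
function buildLiveSearchUrl(channelId, apiKey) {
  const params = new URLSearchParams({
    part: 'snippet',
    channelId: channelId,
    eventType: 'live',
    type: 'video',
    key: apiKey,
  });
  return 'https://www.googleapis.com/youtube/v3/search?' + params.toString();
}
```

You never have to issue this request yourself when using the plugin; this is only to show what the API key is actually for.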

Add the YouTube Live Stream to Your WordPress Website

Once you’ve added your YouTube API key, you will be redirected to the ‘Select Feed Type’ page.

From here, you need to click on the ‘Live Stream’ option again, followed by the ‘Next’ button.

This will open the ‘Add Channel ID For Live Stream’ page.

Visit the Add channel ID for live stream page

Now, you need to visit the YouTube channel that contains your live-stream videos.

From here, go ahead and copy the text that comes after ‘/channel/’ or ‘/user/’ in the URL at the top.

Copy the code after channel or user in the URL

Next, switch back to the WordPress dashboard and paste the code into the ‘Add Channel ID for Livestream’ box.

After that, click on the ‘Connect’ button to connect your YouTube channel with WordPress.

Once the channel is connected, you need to click on the ‘Next’ button to continue.

Add code and click the connect button
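
The copy-from-the-URL step above amounts to taking whatever follows ‘/channel/’ or ‘/user/’. A small helper (ours, with hypothetical example URLs) captures the same rule, in case you ever want to automate it:

```javascript
// Extract the channel or user identifier from a YouTube URL,
// i.e. the text that comes after "/channel/" or "/user/".
// Returns null when the URL contains neither segment.
function extractChannelId(url) {
  const match = url.match(/\/(?:channel|user)\/([^\/?#]+)/);
  return match ? match[1] : null;
}
```

For instance, a (hypothetical) URL like `https://www.youtube.com/channel/UCxyz123` yields `UCxyz123`, exactly the text you would copy by hand.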

Customize Your YouTube Feed

Now that your YouTube live stream feed has been created, you can customize it. Smash Balloon’s YouTube Feed Pro offers many different display options.

First, you will need to choose a template on the ‘Start with a template’ page. You can choose from Default, Carousel, Cards, List, Gallery, and Grid layouts.

Once you have made your choice, simply click on the ‘Next’ button.

Choose a template for your Live YouTube feed

After you have chosen a template, an editing screen will open up that displays a preview of your YouTube feed to the right and customization settings in the left menu column.

Here, you can start by expanding the ‘Feed Layout’ panel.

YouTube Feed editor

On this screen, you can switch between the layouts.

You may also be able to configure additional settings depending on your chosen layout.

Customize the YouTube feed layout

Next, you need to click on the ‘Color Scheme’ panel.

By default, YouTube Feed Pro uses the same color scheme as your WordPress theme. However, you can also use a ‘Light’ or ‘Dark’ color scheme for the video feed.

You can also design your own color scheme by clicking on the ‘Custom’ option and then using controls to change the background, text, and link colors.

Customize feed color scheme

To add a header to your YouTube feed, you need to visit the ‘Header’ panel. From here, simply toggle the ‘Enable’ switch to activate the header.

You can also use the controls to switch between standard and text header styles. Choosing the ‘Text’ option will allow you to change the text size and color.

Customize YouTube feed header

You can also customize the appearance of the video player by going to the ‘Videos’ panel.

Here, you will see a list of options.

Videos panel option

To customize the video layout and individual properties, you need to visit the ‘Video Style’ settings panel.

Here, you can select the video layout, background color, and border.

Customize video style

After that, open the ‘Edit Individual Elements’ panel. Here, simply check the boxes next to the elements you want to display along with the YouTube live stream videos.

You can show or hide the Play icon, video title, live stream countdown, descriptions, and more.

Edit the individual elements you want to display along with the video

Next, you need to head over to the ‘Hover State’ setting. Here, you can choose the individual elements that will display when the user hovers their mouse over the YouTube video.

You can pick many elements, including video title, description, date, views, and more.

Customize hover state

After that, you need to visit the ‘Video Player Experience’ panel.

From here, you can change the video player’s aspect ratio. You can also choose whether the video will start playing automatically or wait until the visitor clicks the play button.

Customize video player experience

After customizing the individual video elements, switch to the ‘Load More Button’ panel.

Here, under the ‘Load More Button’ setting, you can switch the toggle to ‘Enable.’ This will display more video suggestions after the live stream.

You can also choose the background color, hover state, and text from the settings in the left panel.

Customize the Load More button

After that, switch to the ‘Subscribe Button’ panel and toggle the switch to ‘Enable’ if you want to activate the YouTube subscribe button.

You can also change the button’s color, text, and hover state in the settings.

Customize Subscribe button

Once you have customized the YouTube live feed, you can preview how it will look on desktop computers, tablets, and smartphones. Simply click on the different buttons in the upper-right corner to preview the feed on different devices.

Finally, don’t forget to click the ‘Save’ button at the top to save your changes.

Preview and save feed

Embed the YouTube Live Stream on a WordPress Page

The next step is to embed your YouTube live feed on a WordPress page. To do this, you must first click on the ‘Embed’ button at the top.

This will open up the ‘Embed Feed’ prompt. Here, click on the ‘Add to a Page’ button to continue.

Click Add to a page button to embed YouTube feed

The popup will now show a list of all the WordPress pages on your website.

Simply choose the page where you want to embed the YouTube live stream and click the ‘Add’ button.

Choose a page where you want to embed the feed and click on the Add button

The page you selected will now open up in the block editor.

From here, you need to click the ‘Add Block’ (+) button in the top left corner and search for the ‘Feeds for YouTube’ block.

Once you have found it, add the block to your page by clicking on it.

Embed YouTube Feed on a page

Don’t forget to click on the ‘Update’ or ‘Publish’ button to save your changes or make them live.

This is how the YouTube live feed looks on our demo website.

YouTube Feed Page preview
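
As an aside, if all you need is a single player with none of the feed features, YouTube has offered a channel-level `live_stream` embed URL that works without any plugin. Treat this as an assumption to verify against YouTube's current embed documentation; the helper and the channel ID below are illustrative.

```javascript
// Build a plain <iframe> embed for a channel's current live stream,
// with no plugin involved. The live_stream embed URL is a YouTube
// feature (verify against current docs); the channel ID is hypothetical.
function buildLiveEmbedIframe(channelId) {
  const src =
    'https://www.youtube.com/embed/live_stream?channel=' +
    encodeURIComponent(channelId);
  return (
    '<iframe width="560" height="315" src="' + src +
    '" frameborder="0" allowfullscreen></iframe>'
  );
}
```

The plugin approach remains the better option when you want customization, countdowns, and suggested videos around the stream.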

Add a YouTube Live Feed as a Widget

You can also add a YouTube live feed to the WordPress sidebar as a widget.

First, you will need to visit the Appearance » Widgets page from the admin sidebar.

From here, click on the ‘Add Block’ (+) button in the top left corner of the screen and locate the ‘Feeds for YouTube’ block.

Next, click on the block to add it to the widget area.

Add YouTube Feed as a widget

Don’t forget to click on the ‘Update’ button to save your changes.

This is how the YouTube live feed looks in the sidebar on our demo website.

Widget Preview of YouTube feed

Add a YouTube Live Stream in the Full Site Editor

If you are using a block-based theme, then this method is for you.

First, go to the Appearance » Editor page from the admin sidebar to launch the site editor.

From here, click on the ‘Add Block (+)’ button at the top and look for the ‘Feeds for YouTube’ block.

Next, you can drag and drop the block to wherever you want to display the YouTube feed on your page.

Add the YouTube feed in FSE

Once you are done, don’t forget to click on the ‘Save’ button to apply the changes.

Here is a preview of the live streams on our demo website.

FSE preview of YouTube live feed

We hope this article helped you learn how to embed YouTube live streams in WordPress. You may also want to read our ultimate WordPress SEO guide or check out our top picks for the best social media plugins to grow your site.

If you liked this article, then please subscribe to our YouTube Channel for WordPress video tutorials. You can also find us on Twitter and Facebook.

The post How to Embed a YouTube Live Stream in WordPress first appeared on WPBeginner.

From Concept to Launch: The Ultimate Guide for Successful Client Briefings

Would you like to move qualified prospects through your web dev sales process more successfully, deliver consistently better results, and send your sales closing rates soaring? Of course you would, right?!

Well, good news – you’re in the right place to learn how! This no-hype guide to running a hyper successful client briefing session will show you how to boost sales of your web development services.

Your Client Briefing Secret Weapon

Q: Which of the following is an absolutely essential “must-have” to conduct a highly successful client briefing session?

A) A fancy office on the top floor of a skyscraper overlooking one of the 7 wonders of the world.

B) Sending out a stretch limo to pick up your prospective clients and drive them back after the briefing.

C) Serving clients chilled champagne, canapes, and caviar as soon as they arrive.

D) Having an impeccable sense of dress matching your suit with your hairstyle and the office decor.

Answer: None of the above.

To conduct a successful client briefing session, you need only two ears and …

A Needs Assessment Questionnaire

A Needs Assessment Questionnaire (NAQ) is an essential tool for your WordPress web development services business.

It’s a crucial part of an effective sales process as it helps you to:

  • Understand your client’s needs, preferences, and goals so you can provide them with the right solution for their needs.
  • Ask the right questions and gather the necessary information about the project’s scope, timeline, and budget to provide a realistic plan for the project and an accurate estimate of the project’s costs.
  • Identify any potential issues or concerns early in the sales process.
  • Manage the client’s expectations.
  • Qualify your prospect as being either a good fit for your services or not (yes, sometimes it’s better to let them go) and move them successfully through your sales process.
  • Establish a strong relationship with the client based on trust and communication.

Your questionnaire should be carefully crafted to glean the necessary information from the client while being concise and easy to understand.

It should also be customized to the client’s specific needs and provide clear instructions on how to complete it correctly, so that anyone in your business can conduct a client briefing session successfully.

By demonstrating a deep understanding of the client’s needs and goals, you can create a website or deliver a project that will hopefully exceed your client’s expectations. This, in turn, can lead to satisfied clients who are more likely to recommend your services to others.

The NAQ, then, is not just any old “questionnaire.” It’s an integral and valuable part of your sales process.

So, before we look at how to develop an effective Needs Assessment Questionnaire that will help you get better results in your business, let’s briefly go over the different stages of an effective sales system so we can have a clear understanding of where the Needs Assessment Questionnaire fits in.

The 7 Stages of an Effective Sales Process

An effective sales process typically consists of the following stages:

  • Stage 1: Initial Contact – This is the first stage of the sales process, where your potential client becomes aware of your service. They may visit your website, receive an email, phone call or recommendation, or see an advertisement, directory listing, etc.
  • Stage 2: Needs Assessment – In this stage, you (or your sales rep) ask questions to understand the client’s needs, challenges, and goals. The aim of this stage is to gather information about the client’s business, industry, and competition and qualify them as a potential client.
  • Stage 3: Presentation – In this stage, you present a solution to the client’s problem or need. Your presentation may include a demonstration, samples of previous work, or a proposal.
  • Stage 4: Objections – In this stage, the client may raise objections or concerns about your proposed solution. You (or your sales rep) then address these objections and provide additional information or clarification.
  • Stage 5: Closing – In this stage, you (or your sales rep) ask for a decision. This may involve negotiating the price, terms, or delivery of the service.
  • Stage 6: Follow-up – After the sale, your business follows up with the client to provide onboarding (e.g. training), ensure satisfaction with your service, and to address any issues that may arise. You may also look for opportunities to cross-sell or upsell other services.
  • Stage 7: Referral – The final stage is when your satisfied client refers your business to others who may benefit from your services. This can be a powerful source of new business and growth for your company.

The sales process described above emphasizes the importance of understanding your client’s needs and providing a solution that meets those needs. It also highlights the need for ongoing customer engagement and relationship-building to drive long-term business success.

Your NAQ is vitally important to completing Stage 2 (Needs Assessment) of your sales process successfully.

Chart - 7 Stages of Sales Process
Assessing your clients’ needs effectively will help you deliver a better solution.

This article focuses on the Needs Assessment stage of the sales process, so let’s take an in-depth look at the role your Needs Assessment Questionnaire plays in it.

The Needs Analysis Presentation

All you need to run an effective sales presentation is an effective script and an effective sales tool.

To illustrate this, let’s say that you are asked to give a slide presentation to an audience about a subject you know little to nothing about.

If you design your slide presentation well using the right content and the right slide sequence, all you would have to do is show a slide, read the words on the slide, show the next slide and repeat the process, and you could run a successful presentation.

More importantly, anyone in your business could consistently and repeatedly deliver a successful presentation by simply following the same process. Even if you went a little off-topic and ad-libbed every now and then, the tool (i.e. the slides) and its built-in script (i.e. the words on each slide) would still guide the presenter successfully through the entire process.

This is essentially what we are aiming to achieve in “Stage 2” of the sales system… an effective and repeatable presentation that delivers consistent results and moves your client successfully to the next stage.

Stage 2, then, is your Needs Analysis Presentation and consists of two main elements:

  1. The presentation script
  2. The Needs Assessment Questionnaire

The “presentation script” is what you say and do during your client briefing session to create the best user experience possible for your client.

This includes how you greet your potential client, what you do to make them feel comfortable (e.g. offer water, tea, or coffee), the words you use to start the briefing session, the questions you ask them during the briefing, how you structure the entire meeting so clients feel relaxed and open to share information that will allow you to assess their needs and qualify them as prospects, the words you use to end the meeting and set up the next stage of the process, and so on.

For example, the “opening script” for your Needs Analysis Presentation might go something like this:

“[Client name], as I mentioned to you when setting the appointment, the purpose of today’s meeting is for us to get a better idea of your business, what it does, what problems you need help solving, what kind of results you expect from your website, and so on.

I’ve done some research on your business and there are some questions I’d like to ask so we can get the full picture of what you need and how we can help you. This will probably take about 30 minutes or so.

I will then review the information carefully with my team and come back to you with a customized solution that will best suit your needs and your budget.

And if it turns out that we are not a perfect fit for working with each other, I’ll let you know and recommend a more suitable solution.

Are you ok for us to get started?”

***

After delivering the opening script above, you then complete the Needs Assessment Questionnaire with your client. This is the tool that will guide you successfully through your Needs Analysis Presentation.

After completing your NAQ, you then deliver the “closing script,” which could be something like this:

“[Client name], thank you… I really appreciate you taking the time to answer all of these questions. This gives me everything I need.

As I mentioned at the start of the meeting, give me a day or so to review this with my team. We’ll put together the solution we think will best deliver what you’re looking for and then we’ll meet again and go through everything in detail and answer any other questions you have.

Are you happy for us to set up the next meeting now?”

The above is Stage 2 in a nutshell. Its purpose is to help you set up the next appointment, where you deliver your solution and hopefully get the client’s business.

The more attention you put into designing and structuring your Needs Assessment Questionnaire, the better the client’s experience will be and the more smoothly, consistently, and effectively your client meetings will run.

Even better, if you plan to scale your business, you will be able to train anyone to run client briefings competently. All they will need to do is learn the opening and closing scripts and use the Needs Assessment Questionnaire to complete this stage.

Now that we understand what the Needs Assessment Questionnaire’s purpose is and where it fits into the sales process, let’s start building an effective NAQ for your business.

Designing Your Needs Assessment Questionnaire

Since there is no “one size fits all” way to build a web development business, this section will provide a general framework to help you design a Needs Assessment Questionnaire customized to suit your specific needs, with a list of sections and suggested questions you can include in your NAQ.

We’ll begin by looking at the steps involved in creating a NAQ.

How To Create An Effective NAQ For Your WordPress Web Development Business

Here are the steps involved in creating an effective Needs Assessment Questionnaire that will enable you to gather the critical information needed to deliver successful WordPress web development services to your clients:

  1. Identify the key areas of information you’ll require: Begin by outlining the main areas of information you need to gather from the client, such as their business goals, target audience, website functionality, content needs, marketing strategies, budget, and timeline expectations.
  2. Determine the types of questions to ask: Once you have identified the main areas of information you need to gather, determine the types of questions to ask. Open-ended questions are ideal as they encourage clients to provide detailed information, allowing you to better understand their needs and preferences.
  3. Develop specific questions: Put together key questions for each area of information to gather more detailed insights. For example, to understand the client’s business goals and challenges, you could ask “What are your top business goals, and what challenges are you facing in achieving them?”
  4. Organize the questionnaire: Ensure that the questions flow logically and are easy for clients to understand. Group similar questions together, and consider using subheadings to organize the questionnaire by topic.
  5. Include instructions and explanations: Provide context for each question by explaining why you are asking it and how the answer will help you develop a customized solution for the client. The best way to do this is to turn this explanation into a “script” and write it into your questionnaire after each of the section headings and subheadings (e.g. “Now, I’d like to ask you questions about your current marketing efforts. This will help us understand what you are currently doing to generate new leads and drive traffic to your site, how these activities are performing, and if there are any issues that we would need to look at or improve…”). Including clear instructions and explanations will help clients understand the purpose of the questionnaire and what to expect in the web development process, and help you to fill it out.
  6. Test the questionnaire: Try out your newly created questionnaire on a few clients to ensure the questions are clear, relevant, and useful. Make any necessary adjustments to ensure the questionnaire effectively gathers the information needed for successful web development projects.
  7. Continuously review and refine: The questionnaire is not set in stone, so adjust and improve it over time based on feedback from clients and your team members. As your business evolves and new trends emerge, make sure that the questionnaire remains up-to-date and relevant.

So that’s the outline of the process. Now, let’s start putting a Needs Assessment Questionnaire together.

1) Decide What Information You Need

As mentioned above, the first step is to identify the key areas of information you need to gather from clients.

Mind-mapping the process at this stage can be useful for brainstorming ideas and organizing your thoughts.

Needs Assessment Questionnaire - Mind map
A mind map is a useful tool for planning your NAQ.

2) Define Your NAQ Categories

Once you have a clear idea of what information you need from your client, the next step is to organize this information into question categories. These will form the main sections of your NAQ.

Needs Assessment Questionnaire categories
Define the categories you will add to your Needs Assessment Questionnaire.

Think about the logical flow of your questionnaire’s sections, especially when planning subcategories, such as hosting and domains, design, functionality, and content for the website, or marketing-related questions.

For example, when discussing your client’s website needs, should you start by asking questions about hosting and domains and then follow with questions about design, functionality, and content, or is there a better sequence that would make the discussion flow more smoothly?

Also, consider things like:

  • Which areas are absolutely essential to get information from the client? Where should you insert this into your NAQ so you can make sure it gets covered in case the meeting is cut short or goes off on a tangent, or the client starts to feel overwhelmed?
  • Which areas of discussion could potentially blow out and take up a big chunk of the meeting? How can you design the process to quickly rein the client back into focus if this happens?

All of these details are very important when building a process flow for your NAQ’s design.

3) Decide on the Format

How are you going to run your Needs Analysis Presentation and record the client’s answers?

Will your client briefing sessions be done face-to-face, over the phone, online via video conferencing, or a combination of different styles?

Will your NAQ be printed with answers recorded as handwritten notes, in an electronic document, or a custom form application running from a phone, tablet, or laptop?

Probably the easiest and most effective way to start is with pen and paper. A printed questionnaire can serve as your prototype. This will allow you to review, tweak, test, and improve your sections, questions, question flow, accompanying instructions, fields for entering answers, etc., after every client briefing session.

Once you have a NAQ that delivers you consistent results, you can then turn your prototype into a format better suited for your business, like an electronic questionnaire or even an app. Or, just keep using a printed questionnaire if it works for you. Why complicate something when the simplest approach works?
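
If you do eventually move from paper to an electronic format, one way to make the questionnaire concrete is to sketch it as plain data: sections containing questions, with a flag for the ones that must never be skipped. The section names below follow this article; the structure and helper are our illustration, not a prescribed schema.

```javascript
// An NAQ sketched as plain data: sections of questions, each marked
// required or optional. Section names follow the article; the rest
// is illustrative.
const naq = [
  {
    section: "Client's Business",
    questions: [
      { text: 'What is your business and what does it do?', required: true },
      { text: 'Who is your target audience?', required: true },
      { text: 'Is your business seasonal?', required: false },
    ],
  },
  {
    section: "Client's Website",
    questions: [
      { text: 'Do you have an existing website?', required: true },
    ],
  },
];

// List every required question that has no recorded answer, so a
// briefing cannot end with essential information missing.
function missingRequired(naq, answers) {
  return naq
    .flatMap((s) => s.questions)
    .filter((q) => q.required && !answers[q.text])
    .map((q) => q.text);
}
```

A check like this directly supports the earlier point about making sure essential areas get covered even if the meeting is cut short.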

4) Add Questions to Your NAQ Sections

Now that you have planned everything out, the next step is to add questions to each section of your NAQ.

Note: You don’t have to add every suggested question below to your NAQ. Just pick out the ones you need. Also, keep in mind that some questions may overlap for different sections, so include them where you think it would make the most sense for you to ask.

Let’s go over the main sections we suggest you consider including in your NAQ:

1) Overview

Your NAQ is an internal business document; it’s not something you will leave with the client. So, it’s probably a good idea to add an Overview section. This could include a checklist of everything you need to cover during the session, such as documents or information the client needs to provide, instructions for completing certain sections, and even your opening script.

2) Client’s Business

As a website developer, it’s important to understand the client’s business goals and challenges to create a website that meets their specific needs. During the client briefing session, it’s essential to ask the right questions to identify the client’s goals, target audience, unique selling points, and competition.

Questions about the client’s goals can include inquiries about what they hope to achieve with their website, whether they are looking to increase sales, generate leads, or increase brand awareness. Knowing the client’s goals will help you tailor your approach to meet these objectives.

Target audience questions should delve into the demographics of the client’s customers, their interests and behaviors, and what they are looking for in a website. By understanding the target audience, you can create a website that appeals to that audience’s needs and preferences.

Unique selling point questions can help you understand what sets the client’s business apart from the competition. This information will help you highlight these unique selling points on the website and create a competitive advantage for the client.

Finally, questions about the competition can help you understand what other businesses are offering and how the client’s website can differentiate itself. This information will help you create a website that stands out from the competition and attracts more customers to the client’s business.

Here is a list of questions you can include in this section of your NAQ:

Business Details

Prefill some of these details before your client briefing and ask the client to confirm these:

  • Company name: The legal name of the client’s business entity.
  • Contact person name: The name of the individual representing the client, such as the CEO or a manager.
  • Address: The physical address of the client’s business, including the street address, city, state/province, and zip/postal code.
  • Phone number: The primary phone number for the client’s business.
  • Email address: The email address of the client’s business or the contact person.
  • Website URL: The website address of the client’s business (if they have one).
  • Social media handles: The client’s social media handles (if applicable), such as Twitter, Facebook, Instagram, etc.
  • Industry: The industry that the client’s business operates in, such as finance, healthcare, technology, etc.
  • Legal status: The legal status of the client’s business, such as LLC, corporation, sole proprietorship, etc.
  • Revenue: The annual revenue of the client’s business.
  • Number of employees: The number of employees working for the client’s business.
  • Tax ID: The client’s tax identification number (if applicable).
  • Payment information: The payment information that the client uses to pay for goods or services, such as a credit card, bank account, or payment service.
  • Additional notes: Any additional notes or comments about the client that may be helpful for future reference.

Note: Some of this information may need to be requested or obtained at a later stage of the sales process, if applicable (e.g. Revenue, Tax ID, Payment information).
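
If you manage your NAQ digitally rather than on paper, the prefilled business details above can be captured as a simple structured record. The following Python sketch (field names are illustrative, not part of any prescribed NAQ format) shows one way to model the record and flag which details still need to be asked for or confirmed during the briefing:

```python
from dataclasses import dataclass, fields

@dataclass
class BusinessDetails:
    """Prefilled client details to confirm during the briefing.

    Field names here are illustrative; adapt them to your own NAQ.
    """
    company_name: str = ""
    contact_person: str = ""
    address: str = ""
    phone: str = ""
    email: str = ""
    website_url: str = ""
    industry: str = ""
    notes: str = ""

    def missing_fields(self) -> list[str]:
        """Return the names of fields still left blank, so you know
        what to prefill or confirm before the briefing session."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]
```

For example, `BusinessDetails(company_name="Acme LLC").missing_fields()` lists every field except `company_name`, giving you a quick pre-briefing checklist.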

About Your Business
  • What is your business and what does your business do?
  • What are your unique selling points (USPs)?
  • Who is your target audience?
  • What are the demographics of your target audience?
  • What are the interests and behavior patterns of your target audience?
  • What markets do you sell your products and services in? (Local, Regional, National, Global)
  • Is your business seasonal?
Your Business Goals
  • What are your primary business goals and objectives?
  • What difficulties are you currently experiencing in achieving them?
  • How do you envision an agency like ours will help you address these challenges?
Your Competition
  • Who are your main competitors?
  • What makes your business unique compared to your competitors?
  • What are the strengths and weaknesses of your competitors’ websites?
  • What do you like and dislike about your competitors’ websites?

3) Client’s Website

Your Needs Assessment Questionnaire should take into account that a potential client may or may not already have a website. If they do, it is essential to conduct a thorough assessment of the existing site. This will help you understand their website, identify any issues that need to be addressed, and ensure that the end product is tailored to their specific needs and goals.

Here is a list of questions to ask a potential client during the client briefing session about their website to help you gain a comprehensive understanding of their needs and requirements in terms of functionality, design, content, and performance:

Hosting & Domains
  • What are your requirements for website hosting and maintenance?
  • Do you need help with website hosting or domain registration?
  • Do you have any registered domains?
  • Have you purchased web hosting for your site?

For existing websites, include the following questions:

  • Do you have any additional domains?
  • Do you have any big changes (like a migration) planned within the next 12 months?
General
  • What is the purpose of your website?
  • What are your primary business goals for this website?
  • What is the estimated size of your website (number of pages)?
  • Are there any legal or regulatory requirements that need to be considered for your website?

For existing websites, include the following questions:

  • Is your website achieving your primary business goals?
  • What are the current issues or challenges you are experiencing with your website?
Design
  • Do you have any specific design preferences or requirements for your website?
  • Do you have any specific branding or visual identity guidelines that need to be followed?
  • What is your preferred color scheme?
  • Do you have any existing design elements that you would like us to incorporate?
  • What is your preferred tone of voice for your website?
Functionality
  • What features and functionalities do you want your website to have (e.g. eCommerce, contact forms, appointment scheduling, user registration, etc)?
  • Do you require any special integrations (e.g. social media sharing, Google Analytics, email marketing software, etc)?
  • What are your expectations for website performance (e.g. load time, speed, mobile responsiveness)?
  • Do you have any specific security requirements for your website?
  • Do you have a plan in place for website backups and security?

For existing websites, include the following questions:

  • Is your website mobile-friendly and responsive?
  • How does your website perform in terms of loading speed?
  • Is your website optimized for search engines?
  • Do you have any analytics or tracking tools installed on your website?
  • Has your website ever been negatively impacted by any core algorithm updates?
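
If you want a quick first-pass answer to the loading-speed question before the client responds, a small script can time a full page fetch. This is a rough sketch only, not a substitute for dedicated auditing tools, and the `opener` parameter is simply a hook so the function can be exercised without a live network connection:

```python
import time
import urllib.request

def measure_load_time(url: str, opener=urllib.request.urlopen) -> float:
    """Return the seconds taken to download the full page body.

    This measures raw HTML transfer only; it ignores images, scripts,
    and rendering, so treat it as a rough baseline, not a full audit.
    """
    start = time.perf_counter()
    with opener(url) as response:
        response.read()  # force the full body download
    return time.perf_counter() - start
```

For example, `measure_load_time("https://example.com")` times a single fetch; averaging several runs gives a steadier baseline to discuss with the client.
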
Content
  • How will you be creating and managing content for your website?
  • What type of media will you be using (e.g. images, videos, audio)?
  • Will you be updating the website content yourself or do you need ongoing maintenance and updates?
  • Do you need any help creating new content for your website?

For existing websites, include the following questions:

  • What content management system (CMS) are you currently using?
  • How frequently do you update your website’s content?
  • Do you have any existing website content that you would like to migrate to the new website?
  • Do you have any existing content that you would like us to use?

Also…

If content services are part of your offering, see the additional “Content Marketing” section below for more questions you can ask.

4) Client’s Marketing Efforts

By understanding your client’s marketing efforts, you can ensure that the website you create for them will be optimized for success.

For example, you can ask about the client’s SEO efforts, including any past keyword research or optimization. It is also important to understand any PPC campaigns the client has run, as well as their social media presence and email marketing efforts. Additionally, you can inquire about any PR campaigns the client has been a part of, including media outlets they have been featured in and soundbites from their representatives.

Here is a list of questions you could ask a potential client during the client briefing session to identify their marketing efforts related to SEO, PPC, social media, email marketing, PR, etc:

Marketing Goals

  • What are your primary marketing objectives, and how do you plan to achieve these?
  • Do you have a marketing plan in place for your website?
  • Have you done any marketing research to identify your target audience’s needs, preferences, pain points, and online behavior?
  • Have you done any competitive research to understand the strategies they are using to attract and retain customers?
  • Do you have a content marketing strategy in place? If so, what types of content have you found to be most effective in engaging your target audience?
  • What are your expectations for the role of your website in your overall marketing strategy, and how do you see it contributing to your business objectives?
  • Do you have any particular marketing challenges or pain points that you would like us to address through the website development process?
  • What increase in organic traffic (numbers or percentage) are you aiming for in the next six to twelve months?
  • How many conversions (leads and sales) would you like to get in the next six to twelve months?
  • Can you list any freelancers or agencies you have previously worked with? If so, what processes did you have in place with them that you would like us to continue, and what would you like to change?

Marketing Channels

  • How do you plan to promote your content to attract visitors to your website?
  • Have you ever invested in search engine optimization (SEO) services for your website? If so, what were the results?
  • Do you currently use pay-per-click (PPC) advertising to drive traffic to your website? If so, what platforms do you use, and what has been your experience with them?
  • Have you established a presence on social media? If so, which platforms do you use, and how frequently do you post updates?
  • Have you used email marketing to promote your business or website? If so, what has been your experience with it?
  • Have you invested in public relations (PR) services to increase brand awareness or promote your products/services? If so, what has been the outcome? Can you provide us with the media outlets you have been featured in and existing soundbites from your representatives?
  • Are there any specific keywords or phrases that you would like your website to rank for in search engine results pages (SERPs)?
  • How do you plan to allocate your marketing budget across different channels, and what portion of it are you willing to invest in website development and maintenance?
  • Do you require any specific SEO (Search Engine Optimization) features or services?
  • Do you need assistance with setting up and integrating social media accounts?
  • What’s your top acquisition channel?

Marketing Performance

  • How do you plan to measure the success of your website?
  • How do you currently measure the success of your marketing efforts, and what metrics do you track?
  • Are you currently doing anything to acquire links? Do you have a list of websites you’d like us to start with?
  • Have you ever purchased any paid links or been part of any link schemes?
  • Has your website experienced any issues with link penalties?
  • What are the primary calls to action for your website?

Also…

Access to platforms:

  • Do you have Google Analytics set up on your website? If so, please share access with [your email]
  • Do you have Google Search Console set up on your website? If so, please share access with [your email]
  • Do you have Google Ads set up on your website? If so, please share access with [your email]

Access to documents:

  • We may need to review some existing documents to help us align our campaign with those already running. Can you give us access to these?
  • Can you provide us with keyword research done by previous agencies/staff?
  • Can you provide us with reports/work done by the previous agency?

5) Content Marketing

The success of a WordPress website depends heavily on the quality and relevance of its content, so it’s important to explore the client’s content needs and preferences during the needs analysis. Understanding those preferences helps you create a website that aligns with the client’s brand identity and resonates with the target audience.

In addition to the information you have gathered about the client’s marketing efforts and goals across channels like paid advertising and social media, ask the client about the types of content they want to create and publish on their website, such as blog posts, videos, and infographics. Also inquire about the topics they want to cover, how frequently they plan to publish, and the overall tone and voice they want to convey.

Here are some questions you can ask during the client briefing session to gain a better understanding of the client’s content marketing needs and preferences and create a website that supports those goals:

Content Creation
  • What are the main topics that your audience is interested in?
  • What topics do you want to cover in your content?
  • What type of content do you plan on publishing on your website?
  • What types of media do you plan on incorporating into your content, such as images, videos, or infographics?
  • How often do you plan on publishing new content?
  • Who will be responsible for creating content for your website?
  • What tone and voice do you want your content to convey?
  • Have you identified any gaps in your content that need to be addressed?
  • Do you have any existing content that can be repurposed, updated, or optimized for SEO on your new website?
  • Are there any particular examples of content that you like or dislike?
  • Will you need assistance creating content?
Content Management
  • How do you plan to manage your content?

6) Client’s Budget and Timeline

Before starting any project, it is crucial to set clear expectations for the budget and timeline.

Asking the right questions about the client’s budget and their timeline expectations during the briefing session will help you and your client understand the scope of the project and plan accordingly to ensure the success of the web development project.

Here are some questions you can ask a potential client to gain a better understanding of their budget constraints, project scope, and timeline expectations to create a proposal tailored to their needs and budget:

Timeline
  • What is the scope of the project?
  • What is the timeline for completing this project?
  • Are there any important deadlines that we should be aware of or strict deadlines that must be met?
  • Are there any specific project milestones that you would like to achieve?
  • How flexible are you with the project timeline?
Budget
  • What is the budget you have allocated for this project? (Ideal, minimum, maximum)
  • Have you worked with a website developer before? If so, what was your budget for that project?
  • Are you looking for a developer to work on a fixed budget or hourly rate?
  • Are there any additional services or features that you would like to include in the project?
  • Are there any budget constraints that we should be aware of?
  • Do you have a preferred payment schedule or milestone-based payment plan?
  • Is there any flexibility in the project scope, budget, or timeline?

7) Additional Notes

Create a space in your questionnaire for additional notes. Use this space to record your own thoughts, observations, contact names, things your client says that you can quote, etc.

What to Do Before and After Your Client Briefing Session

The Needs Analysis Presentation is an integral part of your overall sales process. Getting your presentation scripts and Needs Assessment Questionnaire right is vitally important.

But so is what you do before and after this stage.

Let’s look at what you can do to maximize the results from your client briefing sessions.

Before The Client Briefing Session

Here are the steps you should take before conducting your client briefing session to ensure that you are well-prepared and can conduct a successful needs analysis that will lead to a customized solution for your client’s website and marketing needs:

  • Research the client’s business: Before meeting with the client, research their business and industry to understand their target audience, competitors, and market trends.
  • Identify the client’s pain points: Determine the client’s pain points by reviewing their existing website, marketing materials, and customer feedback.
  • Customize the questionnaire: Depending on the format of your NAQ, you may be able to customize the questionnaire for each client based on their specific business, website, and marketing needs. If not, a simple approach is to create your ideal NAQ, cross off any questions you don’t need for a particular client during the client briefing session, and add any client-specific questions to the “Additional Notes” section of the questionnaire.
  • Set clear objectives for the meeting: Determine the objectives for the meeting with the potential client, such as understanding their goals, identifying their website requirements, and discussing their budget.
  • Schedule the meeting: Schedule the client briefing meeting at a time that is convenient for both parties, and make sure the meeting is held in a distraction-free environment.
  • Rehearse the presentation: Practice your presentation, review your scripts, and visualize how your client briefing meeting will run to create a positive and successful client experience.
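
If your NAQ lives in a structured format rather than on paper, the questionnaire-customization step above can be automated by tagging each question with the client situations it applies to and filtering the master list per client. A minimal sketch, where the tags and the handful of questions shown are purely illustrative:

```python
# Master list of NAQ questions, each tagged with the client
# situations it applies to. Tags and questions are illustrative.
MASTER_NAQ = [
    ("General", "What is the purpose of your website?", {"all"}),
    ("General", "What are the current issues with your website?", {"existing_site"}),
    ("Content", "Do you need help creating new content?", {"all"}),
    ("Content", "What CMS are you currently using?", {"existing_site"}),
    ("Marketing", "Have you invested in SEO services?", {"marketing"}),
]

def customize_naq(client_tags: set[str]) -> list[tuple[str, str]]:
    """Return the (section, question) pairs relevant to this client.

    Questions tagged "all" are always kept; others are kept only
    when the client profile includes at least one matching tag.
    """
    return [
        (section, question)
        for section, question, tags in MASTER_NAQ
        if "all" in tags or tags & client_tags
    ]
```

For a new client with no existing website, `customize_naq({"marketing"})` drops the two `existing_site` questions and keeps the rest.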

After The Client Briefing Session

After conducting your needs analysis presentation with a potential client, make sure to complete the following steps to maximize your results:

  • Analyze the information: Review and compile all the information gathered during the needs analysis session. This includes the client’s business goals, website requirements, marketing efforts, and budget. If your analysis qualifies the potential client as a prospect for your business, continue with the steps below. If not, proceed no further with this process. Instead, reach out to the client and explain why you don’t think you will be the best fit for their needs.
  • Develop a proposal: Develop a comprehensive proposal that outlines your website development process, timeline, deliverables, and costs. The proposal should address the specific needs and goals of the client and should highlight how your WordPress web development services will help the client achieve their objectives.
  • Customize the proposal: Once the proposal is developed, customize it to address any specific concerns or questions the client raised during the needs analysis session. Ensure that the proposal reflects the client’s unique requirements and preferences.
  • Provide a clear quote: Your quote should clearly outline the costs associated with your services, be transparent and easy to understand, and reflect the services outlined in the proposal.
  • Provide a timeline: Give the client a detailed timeline for the WordPress web development project that outlines key milestones and deliverables. The timeline should be realistic and achievable, and should reflect the client’s timeline expectations.
  • Schedule the next meeting: Book a meeting at a time that is convenient for both parties, in a distraction-free environment, where you will present your solutions and recommendations to the client.

Depending on how you structure your sales process, you may also want to:

  • Schedule a follow-up call or meeting with the client to answer any outstanding questions or clarify any concerns or misunderstandings they may have about the proposal, quote, or timeline.
  • Provide additional information or clarification as needed to ensure the client is fully informed and comfortable moving forward with the proposal, including project scope, timeline, and cost.
  • Finalize the proposal, quote, and timeline with the client, confirm the client’s agreement and obtain any necessary signatures or approvals to move forward with the WordPress web development project.

Finally, you have asked the client lots of questions about their business, so be prepared in case they have some questions about yours.

If Questions Arise, Systematize

As a WordPress web developer, one of the most important steps you can take to ensure the success of your projects is to conduct a thorough needs analysis with your clients. This will help you understand your client’s business, goals, existing website, marketing efforts, content needs, budget, and timeline.

Asking the right questions during the client briefing process is crucial for delivering the best solution that will not only meet their needs and budget, but hopefully also exceed their expectations.

Using a needs analysis tool like a Needs Assessment Questionnaire can save you valuable time during the client briefing and in the process of qualifying prospects for your business.

Additionally, it can help your business to:

  • Identify potential roadblocks and challenges upfront, allowing you to develop a strategy that addresses these before they become a problem.
  • Keep your project on track, on budget, and on time.
  • Create customized WordPress solutions tailored to your clients’ unique needs, goals, and challenges.
  • Establish a strong relationship with your client that can lead to repeat business, referrals, and long-term partnerships.

We hope you have found this information useful. Apply it to your business and watch your sales results improve!