6 Ways Cloud Computing and Virtualization Energize Utility IT Operations With Scalability and Flexibility

Amid rapid advancements in the utility and energy industry, where demands continually escalate, the role of IT operations has grown significantly, requiring enhanced capabilities to ensure seamless operations. The global IT operations and service management market is expected to grow by 7.5% by 2025, and the IT infrastructure and services market is projected to reach $35.98 billion in the same period. To address this need, the integration of cloud computing and virtualization has emerged as a groundbreaking solution: these technologies offer scalability and flexibility that are transforming the operational landscape. This article discusses the profound influence these elements have on IT operations within the utility and energy sector, providing a robust and adaptive infrastructure for the future. Before moving on to the main article, here is a pertinent case study that demonstrates the importance of such digital transformation: the journey of Enel, a multinational energy company and one of the world's leading integrated electricity and gas operators.

Case Study: Enel's Digital Transformation

Headquartered in Italy, Enel operates in more than 30 countries across four continents, supplying energy to around 70 million end users. With such a vast scale of operations, Enel needed to revolutionize its traditional IT structure to meet the growing demands of digitalization and data management. To tackle this challenge, Enel devised a digital transformation strategy with a strong emphasis on cloud computing and virtualization.

Enel migrated its legacy IT systems to a hybrid cloud model. Partnering with leading technology providers, it transitioned 70% of its workloads to the cloud, gaining the ability to scale operations in line with growing computational and data storage needs. The cloud-based model also cut costs significantly, since it operates on a pay-per-use basis, saving on infrastructure maintenance and energy.

Alongside the transition to the cloud, Enel embraced virtualization to maximize the utilization of its IT resources. The company was able to run multiple virtual machines on single physical servers, reducing hardware needs while maintaining or enhancing system performance. Enel reported improved system reliability and security after the transition: the cloud's redundant systems and disaster recovery mechanisms provided the robust support its critical operations require, and advanced security protocols, including encryption, multi-factor authentication, and routine security audits, further bolstered the protection of its vast digital infrastructure.

What Makes A Great Toggle Button? (Case Study, Part 2)

In the first article of this two-parter, we covered a crucial yet unresolved problem in UI design concerning toggle buttons: getting across which option is active isn’t easy. There are many types of visual cues to choose from: font style, colors, and outlines, just to name a few. To assess which visual cues most effectively communicate which option is toggled on, we conducted a thorough case study with over 100 real users and 27 visual cues. Read on to learn about our findings and their implications, which you can take away when designing toggle buttons of your own.

Case Study Results

Let’s see what we found out about effective ways to emphasize a button to make it clear that it’s active. But first, a quick summary of our participants.

Participant Review

After data collection was completed, we first reviewed the quality of the participants in our study. This review led to the disqualification of some participants, mainly those who showed signs of choosing answers at random, a clear sign of not making a genuine effort to complete the tasks. After we removed these offenders, we were left with the following numbers of participants per study:

  • 5-Second Test, Group 1: 28 participants
  • 5-Second Test, Group 2: 29 participants
  • 20-Second Test, Group 1: 30 participants
  • 20-Second Test, Group 2: 27 participants

Note: These numbers are still higher than the minimum number of results we set out to collect, since we accounted for a dropout rate of up to 16% when launching our recruitment online.

Metric For Comparing Utility Of Visual Cues

We conducted four studies with the Five Second Test tool: two with a 5-second time limit and two with a 20-second limit. We needed a metric that could objectively compare toggles with each other and show how a specific toggle fared across the 5-second and 20-second test variants.

We created a weighted metric, which we named the Success-Confidence score. The Success-Confidence score is derived from the number of correct answers (according to expectations) combined with the Likert scale answers to the question: “How sure do you feel about your answer?”

First, we calculate the average confidence for correct and incorrect answers separately and for every toggle. Average confidence can range from 0 to 1 based on how participants answered the Likert scale question. For example, if every respondent who chose the correct toggle side were to respond with “Absolutely sure” on the Likert, the average confidence for the correct answers for the given toggle would be 1.

We then used the calculated average confidence for correct and incorrect answers and calculated the Success-Confidence score of the toggle by using the following formula:

Success-Confidence score = (correct_num × correct_conf) − (incorrect_num × incorrect_conf)

  • correct_num: the number of correct answers for the toggle
  • incorrect_num: the number of incorrect answers for the toggle
  • correct_conf: the average confidence of correct answers
  • incorrect_conf: the average confidence of incorrect answers

Since we had a different number of participants available for each test, we normalized the Success-Confidence score by dividing it by the total number of participants for the given test, resulting in the following formula:

Normalized Success-Confidence score = Success-Confidence score / number of participants

The normalized Success-Confidence score ranges from -1 to 1. A score of -1 designates a toggle where all participants give wrong answers with high confidence, and 1 designates a toggle where all respondents answer correctly with high confidence.
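To make the calculation concrete, here is a minimal TypeScript sketch of the metric as we’ve described it. This is an illustration rather than the study’s actual tooling, and it assumes (our assumption, not stated above) that the 5-point Likert answers are mapped linearly onto the 0–1 range, with “Not sure at all” as 0 and “Absolutely sure” as 1.

```typescript
// Illustrative sketch of the Success-Confidence score, not the study's code.
// Assumption: 5-point Likert answers map linearly onto [0, 1].
interface Answer {
  correct: boolean;           // did the participant pick the expected side?
  likert: 1 | 2 | 3 | 4 | 5;  // "How sure do you feel about your answer?"
}

// "Not sure at all" (1) -> 0, "Absolutely sure" (5) -> 1.
const confidence = (likert: number): number => (likert - 1) / 4;

function normalizedSuccessConfidence(answers: Answer[]): number {
  const correct = answers.filter((a) => a.correct);
  const incorrect = answers.filter((a) => !a.correct);

  // Average confidence of a group of answers (0 when the group is empty).
  const avgConf = (group: Answer[]): number =>
    group.length === 0
      ? 0
      : group.reduce((sum, a) => sum + confidence(a.likert), 0) / group.length;

  // (correct_num × correct_conf) − (incorrect_num × incorrect_conf)
  const score =
    correct.length * avgConf(correct) - incorrect.length * avgConf(incorrect);

  // Dividing by the total number of participants keeps tests of different
  // sizes comparable; the result always lies between -1 and 1.
  return score / answers.length;
}
```

For example, 9 correct answers with an average confidence of 0.8 against 1 incorrect answer with a confidence of 0.5 would yield (9 × 0.8 − 1 × 0.5) / 10 = 0.67.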

Evaluation Of Research Questions

RQ 1: Bold text

A low error rate of 1.7% and a high Success-Confidence score of 0.86 confirmed our expectation that emboldened text in a toggle button causes that option to be perceived as active compared to regular text. This version of the toggle even performed well enough to earn the third-best average rank among all the evaluated toggles. Based on this result, we can safely pronounce bold text in the active button a functional yet simple solution for communicating which toggle option is selected. This knowledge should be particularly useful if all your toggle buttons use fonts of equal weight, as is often the case.

RQ 2: Text size

We tested four toggles with varying size differences between the text in the active and inactive buttons. As predicted, the toggle where the font size difference was barely noticeable performed the worst, with an error rate of almost 15% and a confidence score of only 0.63. In comparison, the toggle with the greatest difference in font size achieved an error rate of only 4.4% and a confidence score of 0.81, both significant improvements over the smallest difference. The performance of the two middle toggles fell between these two extremes. Unexpectedly, the toggle with the second-smallest difference slightly outperformed the toggle with the second-greatest difference. However, this irregularity is small enough to be explained by noise in the data.

Since performance generally improved with the font size difference, our expectation that a greater size difference makes a better visual cue for toggles was confirmed. However, since using a significantly bigger font to represent the active state can prove visually unappealing, we suggest using bold text instead. Bold text is not only easier to include in your design, but it also performs better.

RQ 3: Contrast of inverted colors in text labels

The black & white and orange & blue inverted color combinations proved to be some of the worst-performing toggles we tested, with error rates of 19.3% and 23.7% and confidence of only 0.56 and 0.41, respectively. The low confidence levels suggest that even the respondents who managed to select the correct answer weren’t at all sure about it. Our prediction that the darker colors would be perceived as active was confirmed by an error rate below 0.5 in both cases; however, the low confidence undercuts the strength of those lower error rates. This means our hypothesis that inverted font colors are an ineffective visual cue was confirmed. Avoid using colors of the same visual importance, as also seen in research question 8, which concerns toggle backgrounds.

RQ 4: Cultural perception of red vs. green in text labels

A seemingly surprising (although not completely unexpected) result came from the toggle with red and green text. Its error rate of 32.5% and confidence of only 0.32 make it one of the worst-performing toggles overall, with an average rank of 24.67. This result suggests that the red/green combination not only fails to improve on other color pairs but actually performs worse. A possible explanation is that the green color was perceived as the command to switch rather than as a sign of an active state. Red-green color blindness is also the most common type of color vision deficiency, which is reason enough not to use this visual cue, as the wrong answers in our experiment also reflect.

RQ 5: Color vs. black/white in text labels

The combination of colored and white labels performed well (an average rank of 9.33). The surprisingly problematic toggle was the combination of color and black: with an error rate of 14% and confidence of only 0.59, it shows that participants weren’t able to pick the active side reliably. We suspect this was caused by the visual strength of black text compared to colored text, regardless of hue. Therefore, simply turning inactive black text colorful to distinguish active from inactive isn’t recommended. For better color-based approaches, continue reading our findings for the next research questions.

RQ 6: Primary color vs. neutral color (shades of gray) in text labels

Compared to the toggles from the immediately preceding research question, this toggle represents a middle ground between the white and black inactive options with its gray color. This was reflected in the resulting average rank of 12, which is better than the color vs. black option, but worse than color vs. white.

RQ 7: Different saturation of the same color in text labels

The last text color variant we tested also confirmed our theory. The difference in saturation was a strong enough cue to secure satisfying results (an error rate of 8.7% with a confidence of 0.77), suggesting that respondents reliably selected the correct option. Note that while the error rate was comparable to primary vs. neutral color, different saturations of the same color inspired higher confidence. The preferable option, therefore, is to use a lower saturation of the same color instead of grayscale for inactive toggle buttons.

RQ 8: Contrast of inverted colors in the background and RQ 9: Cultural perception of red and green in the background

The toggles defined in these hypotheses were counterparts to the toggles from hypotheses 3 and 4. We kept the color pairs the same, but this time we filled the toggles’ backgrounds with the color instead of coloring the text. The results followed the same pattern as with the text: the black & white combination performed best, orange & blue came second, and red & green took last place. However, the filled variants performed better than their text alternatives (an error rate improvement of 5-8%).

What may seem counterintuitive at first is that although a black & white filling has a stronger potential to cause confusion due to dark/light mode settings, it still performed better than the black & white text alternative or inverted colors with hue. How a light/dark mode setting would affect the results for this specific toggle could bear further investigation. For building an optimal toggle, however, it might be unnecessary, considering the overall better results achieved by other types of toggle backgrounds.

RQ 10: Different saturations of the same color in the background

Different shades of orange achieved an error rate of 9.7% and a normalized Success-Confidence score of 0.72, while different shades of gray had an error rate of 15% and a normalized Success-Confidence score of 0.63; both are decent overall scores that prove these visual cues usable. The improvement of orange over the grayscale variant was significant (an average rank of 13.67 compared to 18). It is important to note that even though the orange variant performed better than the gray one, the performance of both was average at best. If background colors are used in this form, we recommend combining them with another visual cue.

RQ 11: Saturated colors and grayscale colors in the background

As expected, the version where the inactive button was a lighter shade of gray performed better (a 6.1% error rate and 0.79 confidence) than the darker gray version (a 12.3% error rate and 0.66 confidence). It also outperformed the orange version from hypothesis 10 and performed well overall, earning an average rank of 6.67 (sixth best). The more saturated version placed in the bottom half but still managed to outperform the grayscale version from hypothesis 10 (an average rank of 15 compared to 18). The results of these two hypotheses suggest that a saturated color fill denoting activity is best coupled with a low-saturation gray.

RQ 12: Inverted design of buttons

We believed that the inversion of design would be more confusing to users than the saturation cues described in hypotheses 10 and 11. With a 6.1% error rate and a 0.78 Success-Confidence score, this toggle ranks just below the best saturation variant (saturated color and less saturated gray), scoring seventh place overall with an average rank of 7.33. However, it is important to note that this toggle performed significantly worse in the 20-second test than in the 5-second test (a drop of 9 ranks). This can be explained by the fact that the half with the filled background (the correct one to pick) draws the user’s attention very quickly, resulting in better performance on the 5-second test. When given longer to observe the toggle, however, users start to question their instincts, resulting in a more than doubled error rate (from 3.5% to 8.8%). Therefore, we recommend avoiding inverted toggle buttons in favor of visual cues that avoid potential confusion and don’t highlight the inactive button in any way.

RQ 13: Highlighted outline of the active button

As expected, the highlighted outline provided a reliable cue for respondents (an 8.8% error rate and a 0.76 Success-Confidence score). The average rank of 10 puts this toggle in the top half of toggles performance-wise. Since it was outperformed by other visual cues, however, we recommend combining it with another cue for better visual clarity.

RQ 14: Inactive button coincides with the background

Another exciting result: although we suspected that respondents could have problems perceiving the inactive button as a button at all, this toggle achieved stellar results. With an error rate of only 0.9% and confidence of more than 0.89, it ranked first overall with an average rank of 1.33, an improvement over the simple saturated color vs. grayscale toggle seen in RQ11. This means that making the inactive button the same color as its surroundings is a supreme way to communicate selection in a toggle button.

RQ 15: Embossed vs. debossed button

The error rate for both embossed toggles was 83.3%, and their normalized Success-Confidence score was likewise identical at -0.58. This means that chasing skeuomorphism isn’t always the right solution, at least when it comes to toggles.

We suspect this result is due to the common use of embossing effects in digital interfaces to bestow more visual weight on elements; a toggle with more visual weight is then perceived as active.

RQ 16: Check sign

As expected from its straightforward nature, a check sign icon added to the active button performed very well, achieving the second-best average rank of 2.33, with only a 5% error rate and a Success-Confidence score of 0.86. The only problems we see in choosing this toggle are its potentially cumbersome inclusion in a website’s design and the unwanted associations it may induce with checkboxes.

RQ 17: Radio button

Even though the radio button toggle is similar in nature to the check sign design, its meaning as an icon is less explicit. This was confirmed by a worse average rank of 5.67, a higher error rate of 9%, and a lower Success-Confidence score of only 0.8. Despite the rather good performance of this visual cue, using radio buttons as toggles doesn’t align with their semantics: radio buttons are meant to be used in forms, while toggles are meant to signify an immediate change of state.

Ranking The Visual Cues

We ranked the visual cues represented by the toggles separately by the results they achieved in the 5-second tests, in the 20-second tests, and in the two combined, resulting in three separate rankings. We then calculated the average rank for every toggle and determined the three worst and three best toggles.

Worst Toggles

Third-to-last place — Toggle #9 — Red & Green Text Labels

  • Average rank: 24.67
  • 5-second test rank: 25
  • 20-second test rank: 24
  • Combined rank: 25

Second-to-last place — Toggle #22 — Embossed button (no shadow version)

  • Average rank: 26.33
  • 5-second test rank: 27
  • 20-second test rank: 26
  • Combined rank: 26

Last place — Toggle #27 — Embossed button (shadow version)

  • Average rank: 26.67
  • 5-second test rank: 26
  • 20-second test rank: 27
  • Combined rank: 27

Best Toggles

Third place winner — Toggle #2 — Bold text

  • Average rank: 2.67
  • 5-second test rank: 4
  • 20-second test rank: 2
  • Combined rank: 2

Second place winner — Toggle #24 — Check sign

  • Average rank: 2.33
  • 5-second test rank: 1
  • 20-second test rank: 3
  • Combined rank: 3

First place winner — Toggle #26 — Inactive button coincides with the background

  • Average rank: 1.33
  • 5-second test rank: 2
  • 20-second test rank: 1
  • Combined rank: 1

Difference Between The 5-Second And 20-Second Tests

Our secondary goal was to learn how the perception of toggles differs based on the time respondents had to observe them before deciding on an answer. Our expectation was that the results of the 20-second tests would be better overall (a lower error rate and a higher confidence score) than those of the 5-second tests, since participants would have more time to think about the toggles in front of them.

We calculated the average values, with the following results:

  • 5-second test: average error rate 0.1728, average normalized confidence score 0.5749
  • 20-second test: average error rate 0.1670, average normalized confidence score 0.6013

The results confirmed our expectations: the average error rate was lower in the 20-second tests, and the Success-Confidence score was higher. However, these differences were not significant. What interested us was whether any specific toggles showed significant differences between the two test variants. Therefore, we focused on the toggles that showed the biggest improvements or deteriorations between the 5-second and 20-second test results.

Toggles That Performed Better After 20 Seconds

The greatest improvement in ranks between the 5-second and 20-second tests is shared among toggles #4, #11, and #18, listed below in that order. Each gained 6 ranks once participants had more time to observe them, signifying that the clarity of these cues improved with added observation time.

Toggle #4

  • 5-second test rank: 16
  • 20-second test rank: 10
  • Error Rate Difference: -0.0527
  • Normalized Success-Confidence Score Difference: 0.0913

This visual cue had the second-smallest font size difference between the active and inactive states. We believe the change in rank is due to some participants needing time to notice smaller font size differences; the difference was noticeable enough to matter once additional time was added to the test.

The next two toggles have enough in common for us to analyze them together.

Toggle #11

  • 5-second test rank: 12
  • 20-second test rank: 6
  • Error Rate Difference: -0.0526
  • Normalized Success-Confidence Score Difference: 0.0912

Toggle #18

  • 5-second test rank: 17
  • 20-second test rank: 11
  • Error Rate Difference: -0.0526
  • Normalized Success-Confidence Score Difference: 0.0772

Both of these cues were designed so that the more pronounced, saturated color denotes the active option, while the inactive option is portrayed by a lighter color. The difference in results shows that a certain percentage of users initially view the lighter color as the more pronounced one; that percentage decreases when users spend more time thinking about the toggle. To make a toggle that is easy to comprehend right away, an interface designer should probably look at the other visual cues.

Toggles That Performed Worse After 20 Seconds

  • Toggle #15: 5-second test rank 11, 20-second test rank 19, error rate difference 0.0526, normalized confidence score difference -0.1018
  • Toggle #17: 5-second test rank 15, 20-second test rank 21, error rate difference 0.0877, normalized confidence score difference -0.1299

Toggle 15 showed the biggest drop in rank, while toggle 17 suffered the most significant negative changes in error rate and confidence score.

We explain the drop in these two by their similarity: both have a dark and a light half, which means they would be perceived differently depending on, for example, a light mode versus dark mode setting on a mobile device. While the user’s instinctive reaction may be to pick the darker color as active, given some time, more people begin to second-guess themselves. Instead of the darker color capturing their gaze, they may start overthinking that the brighter color is highlighted against the dark. A good toggle shouldn’t encourage such doubts.

Potential For Future Research

All the cues we tested in our study were simple, standalone cues. From here, the natural next step for research would be to go deeper, with a study that evaluates our findings in more detail: Can a bold font be used in the inactive toggle button if the active button’s font is even bolder? Will a combination of visual cues perform better than either cue individually? While the answers may seem intuitive, research data may prove otherwise, as our study has shown.

Another next step would be testing the effect of color alterations. Would the saturation of green work just as well as the saturation of orange?

Testing the performance of visual cues in prototypes of website navigation using different color schemes is another ambitious area for continued research. We tested our toggles in isolation, but it’s possible that their performance would vary depending on the visual context.

Conclusion

In this article, we described our research analyzing a comprehensive list of visual cues that toggle buttons use to communicate which of their options is active. By testing our research questions with real users, we collected a respectable amount of data, enough to make reliable statements about the effectiveness of the visual cues.

Here are some of the main points we arrived at that you should keep in mind when designing your next toggle button:

  • If you choose to use color as the main cue, we suggest a combination of a saturated, lively color (ideally corresponding with your CTA color scheme) and a light grayscale neutral color. Using the colors in the toggle’s background fill is preferable to using colored text. If the color of the inactive button is the same as the surrounding background, this will further improve the button’s comprehensibility.
  • Contrasting colors of similar visual weight should not be used under any circumstances. The cultural perceptions of red and green won’t help you communicate what’s selected; there are much better ways to go about this. Be wary of the black and white combination as well: toggles that use this color scheme are the ones most prone to confusion rooted in dark/light mode settings.
  • You may choose a minimalistic path and use the font itself to show the difference between button states. The bold-thin combination is the go-to solution, but you may also use different font sizes. Just make sure to differentiate the font sizes well enough. Using font-weight or size is recommended to support other visual cues as well since it’s very flexible.
  • If you decide to use embossment as the main cue: don’t. It proved unreliable at communicating the active state of a toggle; even a simple border was more effective. If you decide to use embossed toggles for their visual appeal, we suggest combining embossment with a primary visual cue, such as bold text or a color fill.
  • There’s no shame in using designs that you are sure will work. A tick and a radio-button icon both performed very well. The evident drawback of choosing them is their cumbersome inclusion in your website’s design, since radio buttons as UI elements serve a different function from toggles, and ticks could be perceived as outdated (akin to a physical form more than a website). As for radio button icons, you might as well use a radio button instead.

Follow these tips, and your toggle button designs will no longer cause users to hesitate about what’s selected at the moment.
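To tie these tips together, below is a small TypeScript (React) sketch of a two-option toggle button that combines the study’s strongest cues: a saturated fill and a bold label on the active button, with an inactive button that coincides with the surrounding background. The component, its props, and the colors are our own illustration, not a design prescribed by the study.

```tsx
// Illustrative two-option toggle combining the study's top cues:
// saturated fill + bold label on the active button, and an inactive
// button that blends into the surrounding background.
import React, { useState } from "react";

interface ToggleButtonProps {
  options: [string, string];             // exactly two mutually exclusive options
  onChange?: (selected: string) => void;
}

export function ToggleButton({ options, onChange }: ToggleButtonProps) {
  // A toggle always has a default value; here it is the first option.
  const [active, setActive] = useState(options[0]);

  return (
    <div style={{ display: "inline-flex" }}>
      {options.map((option) => {
        const isActive = option === active;
        return (
          <button
            key={option}
            onClick={() => {
              setActive(option);
              onChange?.(option); // apply the change immediately, no Save step
            }}
            style={{
              padding: "8px 16px",
              border: "none",
              borderRadius: 8,
              cursor: "pointer",
              // Active: saturated fill, white bold label.
              // Inactive: transparent, so it coincides with the background.
              background: isActive ? "#e8710a" : "transparent",
              color: isActive ? "#ffffff" : "#444444",
              fontWeight: isActive ? 700 : 400,
            }}
          >
            {option}
          </button>
        );
      })}
    </div>
  );
}
```

Because the inactive half stays transparent and regular-weight, all of the visual weight sits on the active half, which is exactly the asymmetry shared by the best-performing toggles in our ranking.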

Resources

What Makes A Great Toggle Button? (Case Study, Part 1)

In this first part of a two-part article, we will analyze the characteristics of effective toggle buttons. These characteristics serve as visual cues for helping users recognize which of the button’s options is actively switched on. We have conducted a comprehensive research study with real users to evaluate the effectiveness of visual cues from a variety of categories. As part of our experiment, we assessed how the perception of visual cues changes if the user has more time to observe them.

In the second part, we discuss our results: which cues work better than others, which are the best, which are the worst, and why. Certain findings challenge some of the traditional beliefs in toggle button design. Finally, we present a list of best practices for how to create optimal toggle buttons based on our discoveries.

The problem of how to design an effective toggle button that clearly shows the selected option is a long-standing open question among UI/UX designers. In this article, we discuss a study that we’ve conducted to find definitive answers to the following questions:

  • What does a good, clear and readable toggle button look like?
  • What visual characteristics of a toggle button make it error-proof and prevent confusion and frustration of users?

First, we’ll talk a bit about toggle buttons themselves, when it’s the right time to use them, and which principles to bear in mind while doing so.

Let’s discuss a common scenario: imagine you’re buying an airline ticket. You pick the date and your destination when you suddenly come across something like this next to your ticket details:

If you’re confused about whether your ticket lets you come back home, don’t worry, you’re not the only one. The marvelous piece of UI seen above is called a toggle button.

What Are Toggles?

A toggle button, as the name suggests, is a control used for switching (or toggling) between two or more states or options. Both its name and function are part of a skeuomorphic metaphor, meaning they’re based on something older and more familiar; in this case, a physical forerunner. To better understand the basis of a digital toggle button, let’s talk about the qualities of a physical toggle: the common light switch.

  1. As you can see, there are two states that a light switch can be in: on or off, with nothing in between. Similarly, a digital toggle is a control with two (or sometimes more) mutually exclusive states with one of them always set as the default value.
  2. You can see the result of interacting with a light switch straight away as the lightbulb will immediately light up or go dark. In the same way, a well-designed toggle should perform a visible change in the system — you should get direct feedback without the need to press another (Save or Submit) button.

When To Use A Toggle Button?

In short, when designing a toggle button, for the sake of your users, it’s good to hold on to these basic principles:

  • Use them only when they have an immediate effect, without any Save or Submit.
  • Apply them when the setting has a default value.

In other cases, a checkbox or a group of radio buttons may be the better option.
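As a quick illustration of these two principles, here is a hypothetical TypeScript (React) switch: the setting has a default value, and flipping it takes effect immediately, with no Save or Submit step. The settings endpoint is invented for the example.

```tsx
// Hypothetical example of the two principles above (endpoint is made up).
import React, { useState } from "react";

export function NotificationsSwitch() {
  // Principle 2: the setting always has a default value.
  const [enabled, setEnabled] = useState(true);

  const toggle = () => {
    const next = !enabled;
    setEnabled(next);
    // Principle 1: the change applies right away; it is persisted
    // immediately instead of waiting for a Save or Submit button.
    fetch("/api/settings/notifications", {
      method: "PUT",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ enabled: next }),
    });
  };

  return (
    <button role="switch" aria-checked={enabled} onClick={toggle}>
      Notifications: {enabled ? "On" : "Off"}
    </button>
  );
}
```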

Toggle Switch vs. Toggle Button

There are two ways in which you can use a toggle-type element. For binary options (mostly on/off, as discussed above), you can go with a toggle switch. It’s very simple. You just activate or deactivate a function.

A toggle button is a suitable solution for switching between opposing (even multiple) options. It’s composed of two or more buttons next to each other. The selected button needs to “highlight” in some manner to signify the toggled state.

The goal is to design toggle buttons distinct enough to signal the difference in visual weight between selected and unselected options. At the same time, the buttons should be alike enough to be perceived as two (or more) parts of the same whole. The challenge is to make evident which button is active.

With toggle switches, it’s relatively simple: with a direct label present (on/off), you can read the toggle state quite easily. However, toggle buttons don’t contain the text “on” or “off.” Their label describes what the state means rather than showing the state itself.

Thus, when reading toggle buttons, users have to rely on other visual cues, which, when not used right, can do more harm than good. One thing is for sure, though: the state that’s currently active should be emphasized, not the possible command for changing it. This brings us to the main focus of this article. There’s a hot question going around among UI and UX designers: how do you make a good toggle button?

When we asked this question ourselves, we found there were no general rules founded on solid user research. This is why we’ve conducted our own case study to remedy this.

Case Study: Data-based Approach To Designing Clear And Effective Toggle Buttons

With enough experience and some usability testing, you can ensure your toggle design isn’t an issue. But what if you could just tell right away what will or won’t work based on comprehensive research and cold hard data? Further, what design features help users distinguish which toggle option is the active one? How do you combine visual cues, such as colors, text size, and frames, to make the state of toggles instantly recognizable? In our study, we’ve explored the field of visual cues and focused on the question of which design characteristics signal that a button in a toggle pair is active or not.

Visual Cues

Visual cues are aspects of elements on a website that draw the user’s attention and provide information about how to use the design. They help users spot clickable features, distinguish between active or inactive states, and introduce the possibilities presented to them by a website. The signal they send should be clear and easily readable.

Research Path To Designing Toggles

Although how to design toggle buttons is a commonly discussed topic among designers, there are (to our knowledge) no clear guidelines on picking the most effective visual cues for toggle buttons. Therefore, we have decided to look closely at specific ways to highlight the active button in a toggle pair.

The anatomy of a toggle button can usually be described as a combination of label, filling, outline, and sometimes a specific icon, though not all of these components have to be present at the same time. Each of them can be emphasized somehow (can stand as a visual cue), and there are many different ways to combine them.

Standalone Visual Cues

To start from the base, we focused on standalone visual cues concerning each of the button’s possible components: label, filling, frame, and icons. Based on our assumptions of how these components influence the perception of an active/inactive button, we have formulated several specific research questions.

To test our assumptions, we designed a set of toggle buttons that individually represent our research questions for the visual cues. We wanted only one visual cue in focus on each toggle button to shape the user’s perception of the button’s active or inactive state. Therefore, the visual cue should be the only thing that differentiates the buttons.

When designing the toggles for the study, we came across one challenge: what labels to choose? Of course, we wanted to replicate the real-life experience on the web, but at the same time, we didn’t want participants to be influenced by a particular verbal meaning. Therefore, we needed to choose the labels carefully.

The first idea was to use “Option A/Option B,” but since letters A and B implicitly imply alphabetical order, this would mean risking having a subconscious effect on the participant’s choice. Similarly, using words with meaning such as “Cat/Dog” could mean an individual preference would play its part — a cat or dog lover’s subconscious may get involved, etc.

To prevent the label’s meaning from affecting the selection, we have finally decided to label the buttons with nonsensical words without any clear associations: “Racted” and “Blison”. This way, only visual characteristics can affect the users’ perception of the button’s active and inactive state. Even if users didn’t determine a toggle’s state, they would need to choose the answers randomly instead of defaulting to another cognitive pattern.

Without further ado, here is the list of visual cues and our research questions related to them.

Label

With the label as a visual cue, we consider its key properties, such as its thickness, text size, and color.

Research question 1: Bold text

We assume that the button with the emboldened label will be perceived as active rather than the one with regular text.

Research question 2: Text size

When two buttons with labels differing in the size of the text inside are next to each other, we expect that the one with the larger label will be perceived as active. We also expect that the larger the difference is, the easier it will be to determine the active state.

Research question 3: Contrast of inverted colors in text labels

Contrasting colors are good for distinguishing between options. However, if you need to emphasize one of them, and therefore you need one of them to have more visual weight, it’s not that convenient. Inverted colors evoke equal options.

We expect the combination of inverted black and white to be perceived as equal options. The same would be the case with other inverted colors, such as blue and orange. Therefore, we assume participants won’t be consistent in determining which button signals the active state.

For this study, we presume the buttons with darker text colors (black and blue) to be considered active. Their darkness could evoke emphasized buttons, but as mentioned, we mainly expect inconsistency.

It needs to be said that for people with color vision deficiency, contrast and colors in general are insufficient cues. We mustn’t forget that, according to the NHS (National Health Service), about 8% of men suffer from daltonism, which means they can’t rely on color cues and will need more than just a color to determine whether the button is active.

Research question 4: Cultural perception of red vs. green in text labels

Even though it raises the same concerns as the research question above, since red and green are contrasting colors, there is a culturally determined consensus about this specific pair. In Western cultures, the color green is associated with the “on” (or active/open) state, while the color red is associated with the “off” (inactive/closed) state. We expect this phenomenon to manifest in the test.

Research question 5: Color vs. black/white in text labels

When combining colors and black/white, we expect the colored label to be perceived as a signal of activation because it carries more visual weight.

Research question 6: Primary color vs. neutral colors (shades of gray) in text labels

The principle is the same as with research question 5. Neutral colors carry less visual weight. Therefore our expectation is that the colored label will signal an active state.

Research question 7: Different saturation of the same color in text labels

Our assumption is that the more saturated the color is, the more visual weight it carries. Therefore, a more intense color is expected to evoke the button’s activation.

Filling

The filling or the background of the toggle is all about color combinations. We assume that many of the same principles seen with label colors also apply here.

Research question 8: Contrast of inverted colors in background

The relationship between the inverted colors of the button filling is analogous to the one between inverted colorations of the toggle text. Since inverted colors carry the same visual weight, we expect the contrasting colors to be confusing: it will not be clear which button is active. Therefore, we expect the responses to be inconsistent.

For this research, we have labeled the buttons with darker fillings as active, since the darker shade could be perceived as the active state (however, we still expect the responses to be mostly inconsistent).

Research question 9: Cultural perception of red vs. green in background

Since red and green are a specific case of contrasting colors (due to our western cultural perception of this pair, rather than color inversion), we assume that the green button will be perceived as the active one.

Research question 10: Different saturations of the same color in background

As we stated with label colorations, color saturation carries visual weight. We assume that the more saturated color will be perceived as the activation signal. In the case of a neutral color such as gray, we expect the higher value (in the HSV color model) of the filling to function as the cue.

We expect this effect to be more evident with more saturated colors (in this case, orange) than neutral colors (gray) since saturated colors carry more visual weight.

Research question 11: Saturated and grayscale colors in background

We expect that since the button filled with the saturated (yellow) color carries more visual weight, it will be perceived as emphasized. Therefore, it will signal an active state when standing opposite a grayscale-colored button.

Additionally, we predict that the distinction will be easier to make with the combination of yellow and less saturated gray because of the more evident contrast.

Research question 12: Inverted design of buttons

A common way to design toggles is to invert the color of the background with the color of the text. The problem with this visual cue is that both buttons in the pair carry a part of it: on the left, there’s the colored filling, and on the right, there’s the colored text. The blue filling might have a stronger visual weight, but we expect the distinction between the active and inactive button to be more difficult for users to interpret consistently than some other visual cues, such as color saturation.

Outline

This category of research questions focuses on the presence/absence of an outline, as well as on its placement. Although outlines can differ in several aspects, such as color or thickness, these are parallel to visual cues, which are already covered in categories Label and Filling. Hence we have dedicated this category to determine the effect of an outline in general, as well as some cues which are outline-specific.

Regarding color, we have discussed its impact in the previous sections, and we expect the effect to be analogous when used on the outline.

Research question 13: Highlighted outline of the active button

We expect the button with a highlighted outline to be perceived as the active one.

This one may seem obvious, but it never does any harm to check up on it or compare its effectiveness to other visual cues.

Research question 14: Inactive button coincides with the background

This is a hybrid between background and border-based visual cues. It attempts to improve a saturated vs. grayscale background color cue, as seen in RQ11, by making it clearer that the gray-colored button is blending into the background while the colored active button is separated from it. Our concern, however, is that when the outline is missing, and the inactive button coincides with the background, it may be harder for participants to decide about the active state, since the inactive button may not be perceived as a button at all.

Research question 15: Embossed vs. debossed button

Embossing is a common way to highlight an element on a website. However, should we follow the guidance of making toggle buttons resemble physical buttons, then, in this case, it should be the other way around: the button that’s pushed in (aka debossed) is active. Therefore, to verify whether this is true, we hypothesize that the button that’s pushed in will be perceived as “on,” while the embossed button, which isn’t pushed in, will be perceived as “off.” This pushed look should be even stronger when supported with a shadow effect.

However, the embossed button appears more “in your face,” which might signal its activation as well. We expect the distinction to be inconsistent.

Icons

Icons provide yet another way to highlight an active button. Can they possibly outweigh all the other visual cues? To confirm the effectiveness of the icons, the designs in this research question category abandon all of the other visual stimuli in favor of including icons.

Research question 16: Check sign

We expect the presence of a check sign on one button to evoke an active state.

Research question 17: Radio button

One possibility of designing toggle buttons is to combine them with radio buttons. Radio buttons usually stand by themselves and are used in different contexts from toggle buttons. However, their signature radio button circles are an easy and clear mechanism for communicating selection, which could be used to emphasize the active side of a toggle.

We expect the button with the filled radio button to appear as active.

Testing The Effectiveness Of Standalone Visual Cues

We verified our assumptions in a quantitative online study. We needed to support our findings with a reliably large amount of hard data, and online research with UXtweak tools was an easy way to get it. We used two versions of UXtweak’s Five Second Test. Since UXtweak allows you to adjust the display time of the Five Second Test to your needs, we ran one test where we showed the toggles to participants for 5 seconds and another where we showed them for 20 seconds. We did this to see whether the results would differ when participants had more time to think about the toggles before making decisions and answering questions.

We designed two variants of every tested toggle button, where either the left button (Racted) or the right button (Blison) was designed as active. We then split these variants randomly into two groups. One-half of the participants completed the research with stimulus group A while the second half was presented with group B. We did this to negate the effect of whether the visual cue was on the left or the right side.
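A rough TypeScript sketch of this counterbalancing idea (our illustration, not the actual UXtweak setup): for each toggle, one group receives a randomly chosen variant and the other group receives its mirror image, so neither side is systematically overrepresented.

```typescript
// Rough sketch of the counterbalancing described above (illustrative only).
type Side = "left" | "right";

interface ToggleStimulus {
  id: number;                      // toggle number, e.g. 1 to 27
  variants: Record<Side, string>;  // stimulus image per active side
}

// Group A gets a random variant of each toggle; group B gets the opposite
// one, so the active side is balanced across the two participant groups.
function buildStimulusGroups(toggles: ToggleStimulus[]) {
  const groupA: string[] = [];
  const groupB: string[] = [];

  for (const toggle of toggles) {
    const sideA: Side = Math.random() < 0.5 ? "left" : "right";
    const sideB: Side = sideA === "left" ? "right" : "left";
    groupA.push(toggle.variants[sideA]);
    groupB.push(toggle.variants[sideB]);
  }

  return { groupA, groupB };
}
```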

Five Second Test

In the Five Second Test, the participants were shown toggle button designs one by one. Their task, given to them before each stimulus, was to first look at a toggle button for 5 seconds, then answer some questions. The same two questions were always asked. This is the first one:

“Which option was turned on?”

The answer was given via a radio button group, the available options being Racted (left) and Blison (right) for easy identification.

During the study’s design, we considered several wordings of this question. Although this might seem marginal, the wrong wording could affect the participant’s understanding of the task and, with it, the whole study. For example, consider one of the other proposals: “Which button was pushed?” When you look at the designs, it’s usually the active button that is more visually expressive, highlighted in some way.

The psychology of perception of 2D pictures, specifically the atmospheric perspective principle, states that we perceive objects that are less saturated and somewhat blurred as being further from us than more visible ones. If we’d asked which button was pushed, which in the physical world means further from us, we could get exactly the opposite answers to the ones we needed.

The second question aimed to verify how certain participants were about which toggle option was currently active:

“How sure do you feel about your answer?”

Participants could answer on a 5-point Likert scale, starting at “Not sure at all” and ending with “Absolutely sure.” This gave us additional information about the clarity of the visual cues. Since an answer to the first question was required to continue, having a secondary means of comparing visual cues was useful in case the number of participants who picked the expected answer was the same for both.

To make sure that the instructions were understood correctly, in the first warm-up task (which wasn’t analyzed later), we asked participants one more question: “Which option have you chosen? The one that was…”, followed by two answer options: “On” and “Off.” This question made the participants think about what they were supposed to do, and it verified whether they would perform the study as they were meant to.

Twenty Second Test

Analogously to the Five Second Test, the participant’s task was to look at the designs for a limited amount of time, although this time the period was 20 seconds. We conducted this variant of the study to check whether performance improves with more time to observe the designs.

The time period was the only thing different from the 5-second variant.

Questionnaire During The Study

At the beginning of the study, each participant was asked to fill in a small survey to help us profile the participants. We were interested in:

  • The age of the participants,
  • Their gender,
  • Their highest achieved level of education,
  • The frequency at which they browse the web,
  • The main purpose for which they browse the web,
  • A self-evaluation of their skills as a web user.

Having this data would let us understand the composition of our user sample but also search for detailed insights, such as whether the frequency of web browsing corresponds to the performance in the test or whether the perception of toggles is age-dependent (younger people who are more active online may have more experience with using them).

After the test was completed, we gave the participants an opportunity to leave us a message in a post-study question.

Participants

In total, we aimed to collect 100 responses to represent the general population. This data would later allow us to generalize our findings when interpreting the results.

For recruitment, we used UXtweak’s User Panel, which was perfect for an online unmoderated study needing a large number of participants. With the User Panel, we could comfortably order the participants and select characteristics they should possess. We involved people from 16 to 75 years old, and we recruited participants from English-speaking countries (Canada, the USA, GB, and Australia), since our study was designed in English.

Results

Stay tuned for Part 2 to learn about our findings and how to make your toggle buttons clear to everyone at first sight!

Related Reading on Smashing Magazine

Growing UX Maturity: Knowledge Sharing And Mentorship (Part 2)

This series of articles presents tactics UX practitioners can use to promote the growth of UX maturity in their organizations or product teams. In the first article of this series, I covered the importance of finding and utilizing UX Champions and showing the ROI/value of UX. Today, I’ll focus on two additional tactics for growing your organization’s UX maturity: knowledge sharing and mentorship.

Chapman and Plewes’ framework describes five steps or stages of organizational UX maturity; these are the stages I’m referencing when I mention UX maturity stages within the tactics I present.

The list below presents the six tactics and their relationship to UX maturity. Note that the tactics don’t build on one another; you can and should implement multiple tactics simultaneously. However, some tactics, such as mentoring, might not be possible in an organization with low UX maturity that lacks the support for a mentoring program.

  1. Finding and utilizing UX Champions (Go to Part 1 →)
    Beginning stages: the UX champion will plant seeds and open doors for growing UX in an organization.
  2. Demonstrating the ROI/value of UX (Go to Part 1 →)
    Beginning stages justify more investment; later stages justify continued investment.
  3. Knowledge Sharing/Documenting what UX work has been done
    Less relevant/possible in the earliest stages of maturity when there is little UX being done. Creates a foundation and then serves to maintain institutional knowledge even when individuals leave or change roles.
  4. Mentoring
    Middle and later stages of maturity. Grows individual skills in both directions, exposes more people to UX, and improves knowledge transfer from more senior UX staff, which should lead to a shared understanding of how UX looks and is implemented in the organization.
  5. Education of UX staff on UX tools and specific areas of UX expertise (coming up in Part 3)
    All stages of maturity require continued education of UX staff.
  6. Education of non-UX staff on UX principles and processes (coming up in Part 3)
    All stages of maturity benefit from the education of non-UX staff.

I’ll focus on two tactics in this article:

  • Tactic #3
    Knowledge sharing/document what’s been done and make it available across the organization;
  • Tactic #4
    Mentorship

These two tactics are particularly applicable for an organization at stage 3 or early stage 4 of Chapman and Plewes’ UX maturity model. These tactics serve to document and build upon existing UX accomplishments, provide UX resources for current and future staff, and create and propagate the specific UX processes and values within your organization.

Tactic 3: Knowledge Sharing/Document What’s Been Done And Make It Available Across The Organization

Organizations with more mature UX have well-documented UX processes, as well as a history of what they have learned through UX research and through design iteration and testing. You can’t create a mature organization without lessons learned. Mature organizations do not reinvent the wheel each time they start a product or project in terms of how UX is integrated. Organizations with more mature UX gain efficiency through documenting the lessons learned from past UX work and through consistency in how UX is practiced and applied across products.

Each organization might have a unique culture of how information is documented and shared. Sometimes intranets and shared internal sites are heavily used and easily searchable for the content you need. Sometimes, not so much. In the latter case, these repositories gather dust, and the knowledge is eventually lost to time or replaced with something flashier or considered more in line with the needs of the company.

You will need to decide what might be the best way to both document and then preserve lessons learned for the needs of your organization. Here are some options:

  • Manual sending/sharing
    Manual sharing includes one-on-one and group conversations about UX research and design with other professionals, both within and outside of UX roles at your organization. This can include e-mailing reports and files as attachments or links for others to access. This is the most time-consuming option and the least impactful in terms of the ability of others to easily find your work: you’re essentially relying on word of mouth and on others to save your work and pass it along to future team members. I still suggest having these conversations as often as possible. There is a lot of value in them when individuals see your passion for UX and for creating great experiences.

  • Informal meetings, one-off presentations, lunch and learns, and cross-project meetings focused on the organization’s UX work
    These are events where you can talk about relevant examples of UX from projects or products within the organization. The great part of this is making connections between the people attending these presentations, who might otherwise not interact with each other during the course of their day-to-day tasks. As with manual sharing, this is time-consuming and relies on getting the right people in the room. You can increase the impact of these events if you record them and share the video link with others who are unable to attend live.

  • Catalogued research and files accessible online
    This can include traditional go-to file repositories: SharePoint sites, OneDrive, Box.com, Dropbox, and Google Drive (whatever platform works for your organization). You might also look toward licensing UX-specific platforms meant for storing and sharing UX research and product information, such as Handrail and Productboard, and other collaboration tools that offer repositories. (Note: I haven’t used nor do I endorse the platforms listed.)
    While these options have the advantage of being accessible to anyone within the organization, each has the drawback that people need to know how to access it and how to use it. Each also needs someone to create and maintain standards like tags and naming conventions if it is to stay manageable and useful. UXPin offers a resource detailing what you can consider documenting as part of your UX documentation, and Nielsen Norman Group offers a guide for setting up a research repository.

  • Systems and guides
    Organizations reaching the highest levels of UX maturity have design standards and design systems in place that include content and code for facilitating UX consistency and standards across the organization. Audrey Hacq provides a thorough guide to what makes up a design system. Citing the words of Jina Anne, Hacq states that design systems consist of “Tools for designers & developers, patterns, components, guidelines” as well as “brand values, shared ways of working, mindset, shared beliefs.”
    The drawback of a design system is the effort you will need to put into creating and maintaining it. If you are in an organization with little UX maturity, you aren’t likely to have the time or the ability to mandate the use of the design system. However, you can set your sights on reaching this level of documentation; as UX becomes more prevalent and resources increase, the value of creating the system will overcome the inertia that might initially surround such a large endeavor.

You might consider a mix of the options above. For example, you should always consider informal and one-off presentation opportunities in conjunction with something more formalized and enduring. However you decide to start documenting your UX, you need a foundation in order to grow and focus energy on other areas of UX. You don’t want to start your process from scratch each time.

If your organization is in the beginning stages of UX, you might find yourself responsible for starting the repository. You might not have control over each area or product where UX work is occurring, or over how documentation occurs. You can attempt to work with others to standardize what is documented and how. You can also use the list from UXPin to begin documenting what you can, and add to it as you get more resources or as other motivated UX practitioners join your organization.

Case Study: Large Pharmaceutical Company With Low UX Maturity

We were tasked with building UX capacity and documenting the accomplishments of specific UX work over the course of eight months, working across product teams at a large pharmaceutical company. We conducted stakeholder and user interviews, redesigned a number of products, and ran usability testing on current and future designs. We documented our processes and accomplishments with interview protocols, sketch files, journey maps, research reports, usability testing findings and recommendation reports, and decision trees to guide the creation of future designs.

We used each of the methods listed above to share the knowledge we’d gained and document this for future staff engaging in UX work at the company.

  • Manual sending/sharing
    We worked directly with members of various teams to provide them an understanding of the research protocols and other outputs our team created. We also shared these files in an editable format for them to repurpose or use as templates for later projects. We used our contacts to identify people who might benefit from having the documents and included them in emails containing the files.

  • Informal meetings, one-off presentations, lunch and learns, and cross-project meetings
    The company was very large, with staff located across the world. We were fortunate to have an effective internal champion who was able to identify critical individuals and teams for us to present our work to. We also spent time onsite at various locations and had one-on-one conversations with key parties we were introduced to while there. Many of these interactions were impromptu and would not have occurred without an on-site presence and an insider advocating for us to share our work. We presented multiple times on the various aspects of the work, tailoring the message to the audience: tactical usability testing findings were presented to product team members, while higher-level overviews and near-final designs were presented to key executive stakeholders.

  • Catalogued research and files accessible online
    The company used a number of common platforms for archiving and storing documents. We created a UX-specific repository and tagged the content with user-friendly tags, using terminology familiar to company staff across the organization. We shared the link to the repository and its documents through as many channels as we could: online forums, emails, and other documents.

  • Systems and guides
    We didn’t create a design system, but we did create a guide for making certain UX decisions for a specific set of the company’s products: essentially, a decision tree for determining whether an element of the design needed updating and, if so, whether existing information from our research and design could inform the new element or whether new research and testing would be required (a simplified sketch follows this list). This document was shared with the appropriate members of the product teams, as well as with managers who might advocate for the creation of similar guides for other products as more UX work was accomplished.
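
To make the decision-tree idea concrete, here is a minimal sketch of the kind of logic such a guide encodes. The question wording and outcomes are hypothetical reconstructions, not the actual client document.

```python
# Hypothetical sketch of the design-change decision tree described above.
# Question wording and outcomes are illustrative, not the actual guide.

def design_change_guidance(needs_update: bool, prior_research_exists: bool) -> str:
    """Walk the two decision points: does the element need updating,
    and do existing research findings already cover it?"""
    if not needs_update:
        return "Keep the existing design element as-is."
    if prior_research_exists:
        return "Reuse existing research and design findings to inform the new element."
    return "Plan new research and usability testing before redesigning."

# Example: the element needs updating, but no prior findings cover it.
print(design_change_guidance(needs_update=True, prior_research_exists=False))
# -> Plan new research and usability testing before redesigning.
```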

While I can’t speak to the long-term impact of our work, we left behind a foundation of UX outputs that were well documented, distributed, and accessible for reference in the future. We completed our time with the client and left them with the framework for how to continue conducting, documenting, and distributing UX work. You can use similar techniques and tailor them to the needs and culture of your organization.

Tactic 4: Mentorship

You, your organization, and your peers all stand to gain from an effective mentorship program. Mentorship, possibly more than any other kind of training or experience, has the potential to grow individuals’ skills, create cohesive teams, and shape the UX philosophy and processes of an organization. Mentorship is a key component of the growth of professionals in many other fields including health care and education.

Effective mentorship can help grow your organization's UX maturity: you use the existing resources of your more experienced UX staff to grow the abilities of less experienced staff, who in turn push the more experienced staff to learn more about their own UX practice. This two-way process of growth compounds the benefit and can lead to larger changes in the products and teams the UX staff work with. You can use mentorship to start a positive reaction that sets the direction of UX growth for the long term. To maximize the benefit, you need to put thought into the design of your mentorship program. Since mentorship is an inherently personal relationship between mentee and mentor, the connection to growing UX maturity needs to be made explicit. You might also expand the influence and understanding of UX by including team members from outside typical UX roles in your mentorship program.

You need to consider the following when designing your mentorship program:

  • What is the goal
    What are you trying to accomplish, and what are the outcomes of your mentorship program? Think about how the program will increase UX maturity at the organizational level and how it will benefit participants, both mentees and mentors.

  • Formal or informal
    Will your program be formal, with guidelines for mentors and mentees to adhere to, or more informal and unstructured, with loosely defined outcomes? The comparison below covers some key factors differentiating formal and informal mentorship programs:

  • Participant Pool
    Formal: Staff in predefined roles and positions are able, or required, to participate.
    Informal: Individuals expressing interest are able to participate.

  • Timeline
    Formal: Set timeline with identified milestones and a predefined end date.
    Informal: Less structured; milestones are flexible, and the mentor and mentee determine the end date.

  • Goals
    Formal: Program managers set generic goals, which the mentor and mentee refine using the existing structure. Most goals relate to the growth and benefit of the organization and the individuals.
    Informal: The mentor and mentee customize goals to the needs of the individuals involved; goals might not tie directly to the organization’s needs. Goals are revisited and updated to reflect how the mentee has progressed and other factors affecting the mentee.

  • Assignment
    Formal: Mentors and mentees are matched through a formalized process, for example a questionnaire that identifies who is most aligned, or matching based on role/job title or team/product.
    Informal: Mentors and mentees determine who they match with, for example because prior interactions suggest the potential for a positive relationship, or after mentees hold brief intro calls with a number of potential mentors.

  • Activities
    Formal: Predefined relationship-building and education opportunities, for example attending networking events, conferences, review sessions, and trainings.
    Informal: Participants choose the activities and their frequency, for example a weekly coffee chat, a monthly review meeting, and informal conversations as needed.

  • Outcomes/Assessment
    Formal: Outcomes and assessment are based on a template and reflect the desired outcomes of the organization. Assessment is formalized and used to determine the effectiveness of the program as part of a final evaluation.
    Informal: Outcomes and assessment reflect the mentee’s needs and goals as they have evolved over the course of the program. Assessment might be informal discussion and reflection.

Whether you choose a formal or an informal mentorship program, you can treat the line between the two as blurry and borrow from either side. For example, why wouldn’t you encourage coffee/tea/water walks and informal conversations as a way to build closer relationships in a formal program? And if an effective assessment exists for measuring your informal mentorship program, why wouldn't you use it?

You should also give careful thought to who participates in your program. Because mentorship benefits mentors as well as mentees, you can use it both to inspire and educate more seasoned staff and to grow newer employees. Reverse mentoring is a potentially powerful idea to explore when you want to maximize a program’s benefit to UX maturity: senior-level staff are paired as the mentees so they gain the perspective of more junior staff. You might find that much of your senior leadership is not very familiar with UX; newer staff can show them its benefits, turning leaders into advocates for UX growth in the organization.

You need to provide training and support to mentors whether your program is formal or informal. You cannot assume someone will make a good mentor simply because they perform their job well; we can all benefit from additional insight into research-backed ways to support mentees. Research on effective mentorship programs also suggests letting mentors and mentees provide input into the matching process.

Case Study: Mentoring A Large Media Company Staff Member Transitioning Into A UX Role

I’ve had the privilege of serving as a mentor to someone transitioning into a UX research and strategy role at their organization. Our relationship started as a formal client-consultant engagement; however, it evolved once we realized that informal mentorship-type activities offered an opportunity for both of us to grow personally and professionally, as well as to grow the role and maturity of UX at the media company.

I’ll describe our mentoring relationship using the factors from the comparison above.

  • Participant Pool
    Our mentorship relationship was highly informal. We were the only participants because we chose to form the relationship after interacting through professional activities and realizing our interests and goals overlapped. We didn’t initiate the relationship as a mentorship; it developed organically.

  • Timeline
    The mentorship lasted approximately 18 months. This is notable because my engagement with the client lasted less than 12 months: we voluntarily continued our mentoring relationship and activities beyond the time I was working with the organization. In that sense, the arrangement was truly voluntary in the end, even though it began as a client-consultant relationship.

  • Goals
    Our goals shifted over time. Initially, the purpose of the mentorship was to develop the UX skills of the mentee, so our goals were broad and high level (for example, learn common UX processes, gain experience with common UX research methods). As we progressed, our goals became more refined, such as presenting findings to product team X or developing a protocol for usability testing. Given our constant contact and check-ins, we were able to set micro-goals and update them frequently. There was an additional benefit in that I was working on the same products and projects as my mentee. I know this isn’t always the situation, but it gave me an understanding of the day-to-day challenges and requests being made of my mentee, and we were able to turn these challenges into the next goals to address.

  • Assignment
    We assigned ourselves to each other, deciding to engage in mentorship after spending weeks working together and realizing it would further both of our goals.

  • Activities
    We were able to collaborate frequently thanks to the working relationship I already had with the organization. I don’t think it would be realistic for most mentor-mentee relationships to include this many activities without such frequent, almost daily, interaction. Our activities included informal calls, formal assignments, attending meetings together, conducting strategy sessions to roadmap goals and related activities, observation of my work, creating and iterating on documents together, collecting and analyzing data together, co-working at each other’s spaces, co-creating reports, attending conferences together, and sharing conversation over coffee or a meal.

  • Outcomes/Assessment
    Our assessment of the mentorship was informal and frequent. We would often discuss if we were still getting what we needed and expected out of the arrangement. Fortunately, the answer was yes. We also spent time reflecting and determining if we wanted to focus more on certain areas.

The final outcomes benefited me, my mentee, and the UX maturity of the organization. I grew as a mentor and as a UX practitioner; throughout the mentorship I was pushed to think more deeply about the things I do and why I do them. My mentee was excellent at asking me to share the logic behind why we use certain methods, why we make certain recommendations, how we present findings to different stakeholders, and what supporting information I could provide to justify my process. I found it challenging and refreshing.

My mentee grew their UX knowledge and skills to the point that they were able to lead the UX work on a number of projects. They achieved the goals we had set out, as well as many of the micro-goals we set along the way.

The organization’s UX maturity benefited equally from the mentorship. The mentee understood when and how to implement UX in their organization, and went on to justify a budget for hiring an additional UX staff member who reported to them (increased resources). This freed the mentee to implement UX processes on other products that had been lacking UX attention (improved timing of UX on a number of products). The mentee also made numerous presentations to leadership and got a number of staff engaged and excited about promoting the growth of UX at the organization (impact on leadership and culture).

Putting These Tactics Into Practice

I’ve covered two additional tactics UX practitioners can use to grow their organization's UX maturity. Neither tactic requires spending money, but both require time and access to tools for storing or sharing information. You will need to decide how best to approach information sharing and how to set up a mentorship program that works for your organization.

Hopefully, I’ve demonstrated that there isn’t a large barrier to entry for either of these tactics. You can engage in knowledge sharing by documenting what you have learned from any UX work (these documents should already exist) and creating an easy-to-find repository in the file storage system your organization already uses. Or you can create a list of relevant people and start sending them UX-related artifacts via email. For mentorship, you don’t need a huge program with complex rules. I was able to engage in an informal relationship mentoring someone with whom I was already working on a daily basis; our key ingredients were a desire to learn from each other and common goals. Your organization might require some level of definition and oversight, but you might begin by looking to your teammates for the seeds of a mentorship program.

You can use the tactics presented here on their own or along with those from the previous article. The third and final article will focus on educating UX staff on UX tools and specific areas of UX expertise, and on educating non-UX staff on UX principles and processes. Stay tuned!

How Cloud is Boosting Global Digital Transformation

Case Study: GE Selects AWS as Its App Cloud Provider

General Electric (GE) has selected AWS to host more than 2,000 apps on the cloud platform, including those used by the GE Power, GE Aviation, GE Healthcare, GE Transportation, and GE Digital divisions. Most of the applications have already been migrated to AWS since GE began its digital transformation in 2014, and more are expected to follow, as AWS is the company’s “preferred cloud provider.”

The company chose AWS because of its industry-leading cloud services, which GE says allow it to push its boundaries, think big, and deliver better outcomes.