How To Build Confidence In Your UX Work, by Vitaly Friedman (2025-03-11)
When I start any UX project, typically, there is very little confidence in the successful outcome of my UX initiatives. In fact, there is quite a lot of reluctance and hesitation, especially from teams that have been burnt by empty promises and poor delivery in the past.
Good UX has a huge impact on business. But often, we need to build up confidence in our upcoming UX projects. For me, an effective way to do that is to address critical bottlenecks and uncover hidden deficiencies — the ones that affect the people I’ll be working with.
Let’s take a closer look at what this can look like.
This article is part of our ongoing series on UX. You can find more details on design patterns and UX strategy in Smart Interface Design Patterns 🍣 — with live UX training coming up soon. Free preview.
Bottlenecks are usually the most disruptive part of any company. Almost every team, every unit, and every department has one. Employees usually know it well and complain about it, but it rarely finds its way to senior management, who are detached from daily operations.
The bottleneck can be the only senior developer on the team, a broken legacy tool, or a confusing flow that throws errors left and right — there’s always a bottleneck, and it’s usually the reason for long waiting times, delayed delivery, and cutting corners in all the wrong places.
We might not be able to fix the bottleneck. But for a smooth flow of work, we need to ensure that non-constraint resources don’t produce more than the constraint can handle. All processes and initiatives must be aligned to support and maximize the efficiency of the constraint.
So before doing any UX work, look out for things that slow down the organization. Show that it isn’t UX work that disrupts the organization; rather, it’s the existing internal disruptions that UX can help resolve. And once you’ve delivered even a tiny bit of value, you might be surprised how quickly people will want to see more of what you have in store for them.
Meetings, reviews, experimentation, pitching, deployment, support, updates, fixes — unplanned work blocks other work from being completed. Exposing the root causes of unplanned work and finding critical bottlenecks that slow down delivery is not only the first step we need to take when we want to improve existing workflows, but it is also a good starting point for showing the value of UX.
To learn more about the points that create friction in people’s day-to-day work, set up 1:1s with the team and ask them what slows them down. Find a problem that affects everyone. Perhaps too much work in progress results in late delivery and low quality? Or lengthy meetings steal precious time?
One frequently overlooked detail is that we can’t manage work that is invisible. That’s why it is so important that we visualize the work first. Once we know the bottleneck, we can suggest ways to improve it. It could be to introduce 20% idle times if the workload is too high, for example, or to make meetings slightly shorter to make room for other work.
The idea that the work is never just “the work” is deeply connected to the Theory of Constraints developed by Dr. Eliyahu M. Goldratt. It showed that any improvements made anywhere other than at the bottleneck are an illusion.
Any improvement after the bottleneck is useless because it will always remain starved, waiting for work from the bottleneck. And any improvements made before the bottleneck result in more work piling up at the bottleneck.
To improve flow, sometimes we need to freeze the work and bring focus to one single project. Just as important as throttling the release of work is managing the handoffs. The wait time for a given resource is the percentage of time that the resource is busy divided by the percentage of time it’s idle. If a resource is 50% utilized, the wait time is 50/50, or 1 unit.
If the resource is 90% utilized, the wait time is 90/10, or 9 times longer. And if it’s utilized 99% of the time, it’s 99/1, so 99 times longer than if that resource were 50% utilized. The critical part is to make wait times visible so you know when your work spends days sitting in someone’s queue.
The exact times don’t matter, but if a resource is busy 99% of the time, the wait time will explode.
Our goal is to maximize flow: that means exploiting the constraint while allowing idle time at non-constraints to optimize overall system performance.
One surprising finding for me was that any attempt to maximize the utilization of all resources — 100% occupation across all departments — can actually be counterproductive. As Goldratt noted, “An hour lost at a bottleneck is an hour out of the entire system. An hour saved at a non-bottleneck is worthless.”
I can only wholeheartedly recommend The Phoenix Project, an absolutely incredible book that goes into all the fine details of the Theory of Constraints described above.
It’s not a design book but a great book for designers who want to be more strategic about their work. It’s a delightful and very real read about the struggles of shipping (albeit on a more technical side).
People don’t like sudden changes and uncertainty, and UX work often disrupts their usual ways of working. Unsurprisingly, most people tend to block it by default. So before we introduce big changes, we need to get their support for our UX initiatives.
We need to build confidence and show them the value that UX work can have — for their day-to-day work. To achieve that, we can work together with them. Listening to the pain points they encounter in their workflows, to the things that slow them down.
Once we’ve uncovered internal disruptions, we can tackle these critical bottlenecks and suggest steps to make existing workflows more efficient. That’s the foundation to gaining their trust and showing them that UX work doesn’t disrupt but that it’s here to solve problems.
Meet Measure UX & Design Impact (8h), a practical guide for designers and UX leads to measure and show your UX impact on business. Watch the free preview or jump to the details.
The Human Element: Using Research And Psychology To Elevate Data Storytelling, by Victor Yocco & Angelica Lo Duca (2025-02-26)
Data storytelling is a powerful communication tool that combines data analysis with narrative techniques to create impactful stories. It goes beyond presenting raw numbers by transforming complex data into meaningful insights that can drive decisions, influence behavior, and spark action.
When done right, data storytelling simplifies complex information, engages the audience, and compels them to act. Effective data storytelling allows UX professionals to effectively communicate the “why” behind their design choices, advocate for user-centered improvements, and ultimately create more impactful and persuasive presentations. This translates to stronger buy-in for research initiatives, increased alignment across teams, and, ultimately, products and experiences that truly meet user needs.
For instance, The New York Times’ Snow Fall data story (Figure 1) used data to immerse readers in the tale of a deadly avalanche through interactive visuals and text, while The Guardian’s The Counted (Figure 2) powerfully illustrated police violence in the U.S. by humanizing data through storytelling. These examples show that effective data storytelling can leave lasting impressions, prompting readers to think differently, act, or make informed decisions.
The importance of data storytelling lies in its ability to simplify complex information, engage the audience, and compel them to act.
While there are numerous models of data storytelling, here are a few high-level areas of focus UX practitioners should have a grasp on:
Narrative Structures: Traditional storytelling models like the hero’s journey (Vogler, 1992) or the Freytag pyramid (Figure 3) provide a backbone for structuring data stories. These models help create a beginning, rising action, climax, falling action, and resolution, keeping the audience engaged.
Data Visualization: Broadly speaking, these are the tools and techniques for visualizing data in our stories. Interactive charts, maps, and infographics (Cairo, 2016) transform raw data into digestible visuals, making complex information easier to understand and remember.
Moving beyond these basic structures, let’s explore how more sophisticated narrative techniques can enhance the impact of data stories:
Example:
Presenting data on declining user engagement could follow the hero’s journey. The “call to adventure” is the declining engagement. The “challenges” are revealed through data points showing where users are dropping off. The “insights” are uncovered through further analysis, revealing the root causes. The “resolution” is the proposed solution, supported by data, that the audience (the hero) can implement.
Many data storytelling models follow a traditional, linear structure: data selection, audience tailoring, storyboarding with visuals, and a call to action. While these models aim to make data more accessible, they often fail to engage the audience on a deeper level, leading to missed opportunities. This happens because they prioritize the presentation of data over the experience of the audience, neglecting how different individuals perceive and process information.
While existing data storytelling models adhere to a structured and technically correct approach to data creation, they often fall short of fully analyzing and understanding their audience. This gap weakens their overall effectiveness and impact.
These shortcomings reveal a critical flaw: while current models successfully follow a structured data creation process, they often neglect the deeper, audience-centered analysis required for actual storytelling effectiveness. To bridge this gap,
Data storytelling must evolve beyond simply presenting information — it should prioritize audience understanding, engagement, and accessibility at every stage.
Traditional models can be improved by focusing more on the following two critical components:
Audience understanding: A greater focus can be concentrated on who the audience is, what they need, and how they perceive information. Traditional models should consider the unique characteristics and needs of specific audiences. This lack of audience understanding can lead to data stories that are irrelevant, confusing, or even misleading.
Effective data storytelling requires a deep understanding of the audience’s demographics, psychographics, and information needs. This includes understanding their level of knowledge about the topic, their prior beliefs and attitudes, and their motivations for seeking information. By tailoring the data story to a specific audience, storytellers can increase engagement, comprehension, and persuasion.
Psychological principles: These models could be improved with insights from psychology that explain how people process information and make decisions. Without these elements, even the most beautifully designed data story may fall flat.
By incorporating audience understanding and psychological principles into their storytelling process, data storytellers can create more effective and engaging narratives that resonate with their audience and drive desired outcomes.
All storytelling involves persuasion. Even if it’s a poorly told story and your audience chooses to ignore your message, you’ve persuaded them to do that. When your audience feels that you understand them, they are more likely to be persuaded by your message. Data-driven stories that speak to their hearts and minds are more likely to drive action. You can frame your message effectively when you have a deeper understanding of your audience.
Humans process information based on psychological cues such as cognitive ease, social proof, and emotional appeal. By incorporating these principles, data storytellers can make their narratives more engaging, memorable, and persuasive.
Psychological principles help data storytellers tap into how people perceive, interpret, and remember information.
The Theory of Planned Behavior
While there is no single truth when it comes to how human behavior is created or changed, it is important for a data storyteller to use a theoretical framework to ensure they address the appropriate psychological factors of their audience. The Theory of Planned Behavior (TPB) is a commonly cited theory of behavior change in academic psychology research and courses. It’s useful for creating a reasonably effective framework to collect audience data and build a data story around it.
The TPB (Ajzen 1991) (Figure 5) aims to predict and explain human behavior. It consists of three key components:
As shown in Figure 5, these three components interact to create behavioral intentions, which are a proxy for actual behaviors that we often don’t have the resources to measure in real-time with research participants (Ajzen, 1991).
UX researchers and data storytellers should develop a working knowledge of the TPB or another suitable psychological theory before moving on to measure the audience’s attitudes, norms, and perceived behavioral control. We have included additional resources to support your learning about the TPB in the references section of this article.
OK, we’ve covered the importance of audience understanding and psychology. These two principles serve as the foundation of the proposed model of storytelling we’re putting forth. Let’s explore how to integrate them into your storytelling process.
At the core of successful data storytelling lies a deep understanding of your audience’s psychology. Here’s a five-step process to integrate UX research and psychological principles effectively into your data stories:
Before diving into data, it’s crucial to establish precisely what you aim to achieve with your story. Do you want to inform, persuade, or inspire action? What specific message do you want your audience to take away?
Why it matters: Defining clear objectives provides a roadmap for your storytelling journey. It ensures that your data, narrative, and visuals are all aligned toward a common goal. Without this clarity, your story risks becoming unfocused and losing its impact.
How to execute Step 1: Start by asking yourself:
Frame your objectives using action verbs and quantifiable outcomes. For example, instead of “raise awareness about climate change,” aim to “persuade 20% of the audience to adopt one sustainable practice.”
Example:
Imagine you’re creating a data story about employee burnout. Your objective might be to convince management to implement new policies that promote work-life balance, with the goal of reducing reported burnout cases by 15% within six months.
This step involves gathering insights about your audience: their demographics, needs, motivations, pain points, and how they prefer to consume information.
Why it matters: Understanding your audience is fundamental to crafting a story that resonates. By knowing their preferences and potential biases, you can tailor your narrative and data presentation to capture their attention and ensure the message is clearly understood.
How to execute Step 2: Employ UX research methods like surveys, interviews, persona development, and testing the message with potential audience members.
Example:
If your data story aims to encourage healthy eating habits among college students, you might survey students to determine their attitudes towards specific types of healthy foods and then apply that knowledge in your data story.
This step bridges the gap between raw data and meaningful insights. It involves exploring your data to identify patterns, trends, and key takeaways that support your objectives and resonate with your audience.
Why it matters: Careful data analysis ensures that your story is grounded in evidence and that you’re using the most impactful data points to support your narrative. This step adds credibility and weight to your story, making it more convincing and persuasive.
How to execute Step 3:
Example:
If your objective is to demonstrate the effectiveness of a new teaching method, you might analyze how open your audience believes their peers are to adopting new methods, how much control they feel they have over the decision to use a new method, and their attitudes towards the effectiveness of their current methods. This allows you to create groups with varying levels of receptivity to trying new methods and later tailor your data story to each group.
In this step, you will see that The Theory of Planned Behavior (TPB) provides a robust framework for understanding the factors that drive human behavior. It posits that our intentions, which are the strongest predictors of our actions, are shaped by three core components: attitudes, subjective norms, and perceived behavioral control. By consciously incorporating these elements into your data story, you can significantly enhance its persuasive power.
Why it matters: The TPB offers valuable insights into how people make decisions. By aligning your narrative with these psychological drivers, you increase the likelihood of influencing your audience’s intentions and, ultimately, their behavior. This step adds a layer of strategic persuasion to your data storytelling, making it more impactful and effective.
How to execute Step 4:
Here’s how to leverage the TPB in your data story:
Influence Attitudes: Present data and evidence that highlight the positive consequences of adopting the desired behavior. Frame the behavior as beneficial, valuable, and aligned with the audience’s values and aspirations.
This is where having a deep knowledge of the audience is helpful. Let’s imagine you are creating a data story on exercise, and your call to action promotes exercising daily. If you know your audience has a highly positive attitude towards exercise, you can capitalize on that and frame your language around the benefits of exercising, increasing exercise, or specific exercises that might be best suited for the audience. It’s about framing exercise not just as a physical benefit but as a holistic improvement to their life. You can also tie it to their identity, positioning exercise as an integral part of living the kind of life they aspire to.
Shape Subjective Norms: Demonstrate that the desired behavior is widely accepted and practiced by others, especially those the audience admires or identifies with. Knowing ahead of time if your audience thinks daily exercise is something their peers approve of or engage in will allow you to shape your messaging accordingly. Highlight testimonials, success stories, or case studies from individuals who mirror the audience’s values.
If you were to find that the audience does not consider exercise to be normative amongst peers, you would look for examples of similar groups of people who do exercise. For example, if your audience is in a certain age group, you might focus on what data you have that supports a large percentage of those in their age group engaging in exercise.
Enhance Perceived Behavioral Control: Address any perceived barriers to adopting the desired behavior and provide practical solutions. For instance, when promoting daily exercise, it’s important to acknowledge the common obstacles people face — lack of time, resources, or physical capability — and demonstrate how these can be overcome.
This is where you synthesize your data, audience insights, psychological principles (including the TPB), and storytelling techniques into a compelling and persuasive narrative. It’s about weaving together the logical and emotional elements of your story to create an experience that resonates with your audience and motivates them to act.
Why it matters: A well-crafted narrative transforms data from dry statistics into a meaningful and memorable experience. It ensures that your audience not only understands the information but also feels connected to it on an emotional level, increasing the likelihood of them internalizing the message and acting upon it.
How to execute Step 5:
Structure your story strategically: Use a clear narrative arc that guides your audience through the information. Begin by establishing the context and introducing the problem, then present your data-driven insights in a way that supports your objectives and addresses the TPB components. Conclude with a compelling call to action that aligns with the attitudes, norms, and perceived control you’ve cultivated throughout the narrative.
Example:
In a data story about promoting exercise, you could:
- Determine what stories might be available using the data you have collected or obtained. In this example, let’s say you work for a city planning office and have data suggesting people aren’t currently biking as frequently as they could, even if they are bike owners.
- Begin with a relatable story about lack of exercise and its impact on people’s lives. Then, present data on the benefits of cycling, highlighting its positive impact on health, socializing, and personal feelings of well-being (attitudes).
- Integrate TPB elements: Showcase stories of people who have successfully incorporated cycling into their daily commute (subjective norms). Provide practical tips on bike safety, route planning, and finding affordable bikes (perceived behavioral control).
- Use infographics to compare commute times and costs between driving and cycling. Show maps of bike-friendly routes and visually appealing images of people enjoying cycling.
- Call to action: Encourage the audience to try cycling for a week and provide links to resources like bike share programs, cycling maps, and local cycling communities.
Our next step is to test our hypothesis that incorporating audience research and psychology into creating a data story will lead to more powerful results. We have conducted preliminary research using messages focused on climate change, and our results suggest some support for our assertion.
We purposely chose a controversial topic because we believe data storytelling can be a powerful tool. If we want to truly realize the benefits of effective data storytelling, we need to focus on topics that matter. We also know that academic research suggests it is more difficult to shift opinions or generate behavior around topics that are polarizing (at least in the US), such as climate change.
We are not ready to share the full results of our study. We will share those in an academic journal and in conference proceedings. Here is a look at how we set up the study and how you might do something similar when either creating a data story using our method or doing your own research to test our model. You will see that it closely aligns with the model itself, with the added steps of testing the message against a control message and taking measurements of the actions the message(s) are likely to generate.
Step 1: We chose our topic and the data set we wanted to explore. As I mentioned, we purposely went with a polarizing topic. My academic background was in messaging around conservation issues, so we explored that. We used data from a publicly available data set that states July 2023 was the hottest month ever recorded.
Step 2: We identified our audience and took basic measurements. We decided our audience would be members of the general public who do not have jobs working directly with climate data or other relevant fields for climate change scientists.
We wanted a diverse range of ages and backgrounds, so we screened for this in our questions on the survey to measure the TPB components as well. We created a survey to measure the elements of the TPB as it relates to climate change and administered the survey via a Google Forms link that we shared directly, on social media posts, and in online message boards related to topics of climate change and survey research.
Step 3: We analyzed our data and broke our audience into groups based on key differences. This part required a bit of statistical know-how. Essentially, we entered all of the responses into a spreadsheet and ran a factor analysis to define groups based on shared attributes. In our case, we found two distinct groups for our respondents. We then looked deeper into the individual differences between the groups, e.g., group 1 had a notably higher level of positive attitude towards taking action to remediate climate change.
Step 4 [remember this happens simultaneously with step 3]: We incorporated aspects of the TPB in how we framed our data analysis. As we created our groups and looked at the responses to the survey, we made sure to note how this might impact the story for our various groups. Using our previous example, a group with a higher positive attitude toward taking action might need less convincing to do something about climate change and more information on what exactly they can do.
Table 1 contains examples of the questions we asked related to the TPB. We used the guidance provided here to generate the survey items to measure the TPB related to climate change activism. Note that even the academic who created the TPB states there are no standardized questions (PDF) validated to measure the concepts for each individual topic.
Item | Measures | Scale
---|---|---
How beneficial do you believe individual actions are compared to systemic changes (e.g., government policies) in tackling climate change? | Attitude | 1 to 5, with 1 being “not beneficial” and 5 being “extremely beneficial”
How much do you think the people you care about (family, friends, community) expect you to take action against climate change? | Subjective Norms | 1 to 5, with 1 being “they do not expect me to take action” and 5 being “they expect me to take action”
How confident are you in your ability to overcome personal barriers when trying to reduce your environmental impact? | Perceived Behavioral Control | 1 to 5, with 1 being “not at all confident” and 5 being “extremely confident”
Table 1: Examples of questions we used to measure the TPB factors. We asked multiple questions for each factor and then generated a combined mean score for each component.
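If you want to reproduce this kind of scoring yourself, here is a minimal, hypothetical sketch in JavaScript. The item names, the 1-to-5 responses, and the simple threshold-based grouping are illustrative assumptions (the study itself derived groups from a factor analysis):

// Survey items grouped by the TPB component they measure (hypothetical ids).
const tpbItems = {
  attitude: ["att1", "att2"],
  subjectiveNorms: ["norm1", "norm2"],
  perceivedControl: ["pbc1", "pbc2"],
};

// Combine the 1–5 answers for each component into a mean score per respondent.
function scoreRespondent(responses) {
  const scores = {};
  for (const [component, items] of Object.entries(tpbItems)) {
    const values = items.map((item) => responses[item]);
    scores[component] = values.reduce((sum, v) => sum + v, 0) / values.length;
  }
  return scores;
}

// Illustrative grouping: a simple cut on the attitude score instead of a factor analysis.
function assignGroup(scores) {
  return scores.attitude >= 4 ? "high-attitude" : "low-attitude";
}

const example = scoreRespondent({ att1: 4, att2: 5, norm1: 2, norm2: 3, pbc1: 4, pbc2: 4 });
console.log(example, assignGroup(example));
// → { attitude: 4.5, subjectiveNorms: 2.5, perceivedControl: 4 } "high-attitude"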
Step 5: We created data stories aligned with the groups and a control story. We created multiple stories to align with the groups we identified in our audience. We also created a control message that lacked substantial framing in any direction. See below for an example of the control data story (Figure 7) and one of the customized data stories (Figure 8) we created.
Step 6: We released the stories and took measurements of the likelihood of acting. Specific to our study, we asked the participants how likely they were to “Click here to LEARN MORE.” Our hypothesis was that individuals would express a notably higher likelihood to want to click to learn more on the data story aligned with their grouping, as compared to the competing group and the control group.
Step 7: We analyzed the differences between the preexisting groups and what they stated was their likelihood of acting. As I mentioned, our findings are still preliminary, and we are looking at ways to increase our response rate so we can present statistically substantiated findings. Our initial findings are that we do see small differences between the responses to the tailored data stories and the control data story. This is directionally what we would be expecting to see. If you are going to conduct a similar study or test out your messages, you would also be looking for results that suggest your ARIDS-derived message is more likely to generate the expected outcome than a control message or a non-tailored message.
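To make the comparison in Steps 6 and 7 concrete, here is a hypothetical sketch of comparing the mean stated likelihood to act per group and per story; the data shape and the numbers are invented purely for illustration, not the study’s actual results:

// Each respondent: their preexisting group, the story they saw, and their 1–5 likelihood rating.
const ratings = [
  { group: "group1", story: "tailored", likelihood: 4 },
  { group: "group1", story: "control", likelihood: 3 },
  { group: "group2", story: "tailored", likelihood: 5 },
  { group: "group2", story: "control", likelihood: 3 },
];

function meanLikelihood(group, story) {
  const subset = ratings.filter((r) => r.group === group && r.story === story);
  return subset.reduce((sum, r) => sum + r.likelihood, 0) / subset.length;
}

// A positive difference suggests the tailored story outperformed the control for that group.
["group1", "group2"].forEach((g) => {
  console.log(g, meanLikelihood(g, "tailored") - meanLikelihood(g, "control"));
});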
Overall, we feel there is an exciting possibility and that future research will help us refine exactly what is critical about generating a message that will have a positive impact on your audience. We also expect there are better models of psychology to use to frame your measurements and message depending on the audience and topic.
For example, you might feel Maslow’s hierarchy of needs is more relevant to your data storytelling. You would want to take measurements related to these needs from your audience and then frame the data story using how a decision might help meet their needs.
Traditional models of data storytelling, while valuable, often fall short of effectively engaging and persuading audiences. This is primarily due to their neglect of crucial aspects such as audience understanding and the application of psychological principles. By incorporating these elements into the data storytelling process, we can create more impactful and persuasive narratives.
The five-step framework proposed in this article — defining clear objectives, conducting UX research, analyzing data, applying psychological principles, and crafting a balanced narrative — provides a roadmap for creating data stories that resonate with audiences on both a cognitive and emotional level. This approach ensures that data is not merely presented but is transformed into a meaningful experience that drives action and fosters change. As data storytellers, embracing this human-centric approach allows us to unlock the full potential of data and create narratives that truly inspire and inform.
Effective data storytelling isn’t a black box. You can test your data stories for effectiveness using the same research process we are using to test our hypothesis as well. While there are additional requirements in terms of time as a resource, you will make this back in the form of a stronger impact on your audience when they encounter your data story if it is shown to be significantly greater than the impact of a control message or other messages you were considering that don’t incorporate the psychological traits of your audience.
Please feel free to use our method and provide any feedback on your experience to the author.
How To Test And Measure Content In UX, by Vitaly Friedman (2025-02-13)
Content testing is a simple way to test the clarity and understanding of the content on a page — be it a paragraph of text, a user flow, a dashboard, or anything in between. Our goal is to understand how well users actually perceive the content that we present to them.
It’s not only about finding pain points and things that cause confusion or hinder users from finding the right answer on a page, but also about whether our content clearly and precisely articulates what we actually want to communicate.
This article is part of our ongoing series on UX. You can find more details on design patterns and UX strategy in Smart Interface Design Patterns 🍣 — with live UX training coming up soon. Free preview.
A great way to test how well your design matches a user’s mental model is Banana Testing. We replace all key actions with the word “Banana,” then ask users to suggest what each action could prompt.
Not only does it tell you whether key actions are understood immediately and placed where users expect them, but also whether your icons are helpful and whether interactive elements such as links or buttons are perceived as such.
One reliable technique to assess content is content heatmapping. The way we would use it is by giving participants a task, then asking them to highlight things that are clear or confusing. We could define other dimensions or lenses as well, e.g., phrases that inspire more or less confidence.
Then we map all highlights into a heatmap to identify patterns and trends. You could run it with print-outs in person, but it could also happen in Figjam or in Miro remotely — as long as your tool of choice has a highlighter feature.
These little techniques above help you discover content issues, but they don’t tell you what is missing in the content and what doubts, concerns, and issues users have with it. For that, we need to uncover user needs in more detail.
Too often, users say that a page is “clear and well-organized,” but when you ask them specific questions, you notice that their understanding is vastly different from what you were trying to bring into the spotlight.
Such insights rarely surface in unmoderated sessions — it’s much more effective to observe behavior and ask questions on the spot, be it in person or remote.
Before testing, we need to know what we want to learn. First, write up a plan with goals, customers, questions, and a script. Don’t test isolated word tweaks alone; testing content in its broader context works better. In the session, avoid having participants read the content aloud, as that’s usually not how people consume it. Ask questions and wait silently.
After the task is completed, ask users to explain the product, flow, and concepts to you. But: don’t ask them what they like, prefer, feel, or think. And whenever possible, avoid the word “content” in testing as users often perceive it differently.
There are plenty of different tests that you could use:
When choosing the right way to test, consider the following guidelines:
In many tasks, there is rarely anything more impactful than the careful selection of words on a page. However, it’s not only about the words themselves but also about the voice and tone you choose to communicate with customers.
Use the techniques above to test and measure how well people perceive content but also check how they perceive the end-to-end experience on the site.
Quite often, the right words used incorrectly on a key page can convey a wrong message or provide a suboptimal experience. Even though the rest of the product might perform remarkably well, if a user is blocked on a critical page, they will be gone before you even blink.
Meet Measure UX & Design Impact (8h), a new practical guide for designers and UX leads to measure and show your UX impact on business. Use the code 🎟 IMPACT to save 20% off today. Jump to the details.
The Role Of Illustration Style In Visual Storytelling, by Thomas Bohm (2025-01-14)
Illustration has been used for tens of thousands of years: one of the first recorded drawings, a hand silhouette found in Spain, is more than 66,000 years old. Fast forward to the introduction of the internet around 1997, and illustration has gradually increased in use. Popular examples of this are Google’s daily doodles and the Red Bull energy drink, both of which use funny cartoon illustrations and animations to great effect.
Traditionally, illustration was done using pencils, chalk, pens, etchings, and paints. But now everything is possible — you can work in analog, digital, or mixed media styles.
As an example, although photography might be the most popular method to communicate visuals, it is not automatically the best default solution. Illustration offers a wider range of styles that help companies engage and communicate with their audience. Good illustrations create a mood and bring to life ideas and concepts from the text. To put it another way, visualisation.
Good illustrations can also help give life to information in a better way than just using text, numbers, or tables.
How do we determine what kind of illustration or style would be best? How should illustration complement or echo your corporate identity? What will your main audience prefer? What about the content, what would suit and highlight the content best, and how would it work for the age range it is primarily for?
Before we dive into the examples, let’s discuss the qualities of good illustration and the importance of understanding your audience. The rubric below will help you make good choices for your audience’s benefit.
Just look at what we are more often than not presented with.
It is really important to know and consider different audiences. We don’t all have the same physical and cognitive abilities, education, or resources. Our writing, designs, and illustrations need to take users’ make-up and capabilities into account.
There are some common categories of audiences:
Below are interesting examples of illustrations, in no particular order, that show how different styles communicate and echo different qualities and affect mood and tone.
Good for formal, classy, and sophisticated imagery that also lends itself to imaginative expression. It is a great example of texture and light that delivers a really humane and personal feel that you would not get automatically by using software.
A great option for highly abstract concepts and compositions with a funny, unusual, and unreal aspect. You can do some really striking and clever stuff with this style to engage readers in your content.
Perfect for abstract hybrid illustration and photo illustration with a surreal fantasy aspect. This is a great example of merging different imagery together to create a really dramatic, scary, and visually arresting new image that fits the musician’s work as well.
Well-suited for showing fun or humorous aspects, creating concepts with loads of wit and cleverness. New messages and forms of communication can be created with this style.
Works well for showing fun, quirky, or humorous aspects and concepts, often with loads of wit and cleverness. The simplicity of style can be quite good for people who struggle with more advanced imagery concepts, making it quite accessible.
Designed for clean and clear illustrations that are all-encompassing and durable. Due to the nature of this illustration style, it works quite well for a wide range of people as it is not overly stylistic in one direction or another.
Best suited for imagining rustic imagery, echoing a vintage feel. This is a great example of how texture and non-cleanliness can create and enhance the feeling of the imagery; it is very Western and old-fashioned, perfect for the core meaning of the illustration.
Highly effective for clean, legible, quickly recognizable imagery and concepts, especially at small sizes. It is no surprise that pictograms are often found in quick-viewing environments such as airports, where the imagery has to work for a wide range of people.
A great option for visually attractive and abstract imagery and concepts. This style lends itself to much customising and experimentation from the illustrator, giving some really cool and visually striking results.
Ideal for imagery that has an old, historic, and traditional feel. Has a great feel achieved through sketchy markings, etchings, and a greyscale colour palette. You would not automatically get this from software, but given the right context or maybe an unusual juxtaposed context (like the clash against a modern, clean, fashionable corporate identity), it could work really well.
It serves as a great choice for highly realistic illustration with a friendly, widely accessible character element. This style is not overly stylistic and lends itself to being accepted by a wider range of people.
It’s especially useful for high-impact, bright, animated, and colourful concepts. Some really cool, almost animated graphic communication can be created with this style, which can also be put to much humorous use. The boldness and in-your-face style promote visual engagement.
Well-suited for bold block-coloured silhouettes and imagery. It is so bold and impactful, and there is still loads of detail there, creating a really cool and sharp illustration. The illustration works well in black and white and would be further enhanced with colour.
Perfect for humane, detailed imagery with plenty of feeling and character. The sketchy style highlights unusual details and lends itself to an imaginative feeling and imagery.
Especially useful for highly imaginative and fantasy imagery. By using gradients and a light-to-dark color palette, the imagery really has depth and says, ‘Take me away on a journey.’
It makes an excellent option for giving illustration a humane and tangible feel, with echoes of old historical illustrations. The murky black-and-white illustration really has an atmosphere to it.
It offers great value for block silhouette imagery that has presence, sharpness, and impact. Is colour even needed? The black against the light background goes a long way to communicating the imagery.
A great option for imagery that has motion and flare to it, with a slight feminine feel. No wonder this style of illustration is used for fashion illustrations, great for expressing lines and colours with motion, and has a real fashion runway flare.
Ideal for humorous imagery and illustration with a graphic edge and clarity. The layering of light and dark elements really creates an illustration with depth, perfect for playing with the detail of the character, not something you would automatically get from a clean vector illustration. It has received more thought and attention than clean vector illustration typically does.
It serves as a great choice for traditional romantic imagery that has loads of detail, texture, and depth of feeling. The rose flowers are a good example of this illustration style because they have so much detail and so many shades of colour.
Well-suited for highly sketchy imagery that presents something as an idea or working concept. The white lines against the black background have an almost animated effect and give the illustrations real movement and life. This style is a good example of using pure lines in illustration, but to great effect.
There are plenty of options, such as using pencils, chalk, pens, etchings, and paints, then possibly scanning in. You can also use software like Illustrator, Photoshop, Procreate, Corel Painter, Sketch, Inkscape, or Figma. But no matter what tools you choose, there’s one essential ingredient you’ll always need, and that is a mind and vision for illustration.
Creating An Effective Multistep Form For Better User Experience, by Amejimaobari Ollornwi (2024-12-03)
For a multistep form, planning involves structuring questions logically across steps, grouping similar questions, and minimizing the number of steps and the amount of required information for each step. Whatever makes each step focused and manageable is what should be aimed for.
In this tutorial, we will create a multistep form for a job application. Here are the details we are going to be requesting from the applicant at each step:
You can think of structuring these questions as a digital way of getting to know somebody. You can’t meet someone for the first time and ask them about their work experience without first asking for their name.
Based on the steps we have above, this is what the body of our HTML with our form should look like. First, the main <form> element:
<form id="jobApplicationForm">
<!-- Step 1: Personal Information -->
<!-- Step 2: Work Experience -->
<!-- Step 3: Skills & Qualifications -->
<!-- Step 4: Review & Submit -->
</form>
Step 1 is for filling in personal information, like the applicant’s name, email address, and phone number:
<form id="jobApplicationForm">
<!-- Step 1: Personal Information -->
<fieldset class="step" id="step-1">
<legend id="step1Label">Step 1: Personal Information</legend>
<label for="name">Full Name</label>
<input type="text" id="name" name="name" required />
<label for="email">Email Address</label>
<input type="email" id="email" name="email" required />
<label for="phone">Phone Number</label>
<input type="tel" id="phone" name="phone" required />
</fieldset>
<!-- Step 2: Work Experience -->
<!-- Step 3: Skills & Qualifications -->
<!-- Step 4: Review & Submit -->
</form>
Once the applicant completes the first step, we’ll navigate them to Step 2, focusing on their work experience so that we can collect information like their most recent company, job title, and years of experience. We’ll tack on a new <fieldset> with those inputs:
<form id="jobApplicationForm">
<!-- Step 1: Personal Information -->
<!-- Step 2: Work Experience -->
<fieldset class="step" id="step-2" hidden>
<legend id="step2Label">Step 2: Work Experience</legend>
<label for="company">Most Recent Company</label>
<input type="text" id="company" name="company" required />
<label for="jobTitle">Job Title</label>
<input type="text" id="jobTitle" name="jobTitle" required />
<label for="yearsExperience">Years of Experience</label>
<input
type="number"
id="yearsExperience"
name="yearsExperience"
min="0"
required
/>
</fieldset>
<!-- Step 3: Skills & Qualifications -->
<!-- Step 4: Review & Submit -->
</form>
Step 3 is all about the applicant listing their skills and qualifications for the job they’re applying for:
<form id="jobApplicationForm">
<!-- Step 1: Personal Information -->
<!-- Step 2: Work Experience -->
<!-- Step 3: Skills & Qualifications -->
<fieldset class="step" id="step-3" hidden>
<legend id="step3Label">Step 3: Skills & Qualifications</legend>
<label for="skills">Skill(s)</label>
<textarea id="skills" name="skills" rows="4" required></textarea>
<label for="highestDegree">Degree Obtained (Highest)</label>
<select id="highestDegree" name="highestDegree" required>
<option value="">Select Degree</option>
<option value="highschool">High School Diploma</option>
<option value="bachelor">Bachelor's Degree</option>
<option value="master">Master's Degree</option>
<option value="phd">Ph.D.</option>
</select>
</fieldset>
<!-- Step 4: Review & Submit -->
<fieldset class="step" id="step-4" hidden>
<legend id="step4Label">Step 4: Review & Submit</legend>
<p>Review your information before submitting the application.</p>
<button type="submit">Submit Application</button>
</fieldset>
</form>
And, finally, we’ll allow the applicant to review their information before submitting it:
<form id="jobApplicationForm">
<!-- Step 1: Personal Information -->
<!-- Step 2: Work Experience -->
<!-- Step 3: Skills & Qualifications -->
<!-- Step 4: Review & Submit -->
<fieldset class="step" id="step-4" hidden>
<legend id="step4Label">Step 4: Review & Submit</legend>
<p>Review your information before submitting the application.</p>
<button type="submit">Submit Application</button>
</fieldset>
</form>
Notice: We’ve added a hidden attribute to every fieldset element but the first one. This ensures that the user sees only the first step. Once they are done with the first step, they can proceed to fill out their work experience on the second step by clicking a navigational button. We’ll add this button later on.
To keep things focused, we’re not going to be emphasizing the styles in this tutorial. What we’ll do to keep things simple is leverage the Simple.css style framework to get the form in good shape for the rest of the tutorial.
If you’re following along, we can include Simple’s styles in the document <head>:
<link rel="stylesheet" href="https://cdn.simplecss.org/simple.min.css" />
And from there, go ahead and create a style.css file with the following styles:
body {
min-height: 100vh;
display: flex;
align-items: center;
justify-content: center;
}
main {
padding: 0 30px;
}
h1 {
font-size: 1.8rem;
text-align: center;
}
.stepper {
display: flex;
justify-content: flex-end;
padding-right: 10px;
}
form {
box-shadow: 0px 0px 6px 2px rgba(0, 0, 0, 0.2);
padding: 12px;
}
input,
textarea,
select {
outline: none;
}
input:valid,
textarea:valid,
select:valid,
input:focus:valid,
textarea:focus:valid,
select:focus:valid {
border-color: green;
}
input:focus:invalid,
textarea:focus:invalid,
select:focus:invalid {
border: 1px solid red;
}
An easy way to ruin the user experience for a multi-step form is to wait until the user gets to the last step in the form before letting them know of any error they made along the way. Each step of the form should be validated for errors before moving on to the next step, and descriptive error messages should be displayed to enable users to understand what is wrong and how to fix it.
Now, the only part of our form that is visible is the first step. To complete the form, users need to be able to navigate to the other steps. We are going to use several buttons to pull this off. The first step is going to have a Next button. The second and third steps are going to have both a Previous and a Next button, and the fourth step is going to have a Previous and a Submit button.
<form id="jobApplicationForm">
<!-- Step 1: Personal Information -->
<fieldset>
<!-- ... -->
<button type="button" class="next" onclick="nextStep()">Next</button>
</fieldset>
<!-- Step 2: Work Experience -->
<fieldset>
<!-- ... -->
<button type="button" class="previous" onclick="previousStep()">Previous</button>
<button type="button" class="next" onclick="nextStep()">Next</button>
</fieldset>
<!-- Step 3: Skills & Qualifications -->
<fieldset>
<!-- ... -->
<button type="button" class="previous" onclick="previousStep()">Previous</button>
<button type="button" class="next" onclick="nextStep()">Next</button>
</fieldset>
<!-- Step 4: Review & Submit -->
<fieldset>
<!-- ... -->
<button type="button" class="previous" onclick="previousStep()">Previous</button>
<button type="submit">Submit Application</button>
</fieldset>
</form>
Notice: We’ve added onclick attributes to the Previous and Next buttons to link them to their respective JavaScript functions: previousStep() and nextStep().
The nextStep() function is linked to the Next button. Whenever the user clicks the Next button, the nextStep() function will first check to ensure that all the fields for whatever step the user is on have been filled out correctly before moving on to the next step. If the fields haven’t been filled correctly, it displays some error messages, letting the user know that they’ve done something wrong and informing them what to do to make the errors go away.
Before we go into the implementation of the nextStep function, there are certain variables we need to define because they will be needed in the function. First, we need the input fields from the DOM so we can run checks on them to make sure they are valid.
// Step 1 fields
const name = document.getElementById("name");
const email = document.getElementById("email");
const phone = document.getElementById("phone");
// Step 2 fields
const company = document.getElementById("company");
const jobTitle = document.getElementById("jobTitle");
const yearsExperience = document.getElementById("yearsExperience");
// Step 3 fields
const skills = document.getElementById("skills");
const highestDegree = document.getElementById("highestDegree");
Then, we’re going to need an array to store our error messages.
let errorMsgs = [];
Also, we need an element in the DOM where we can insert those error messages after they’ve been generated. This element should be placed in the HTML just below the last fieldset closing tag:
<div id="errorMessages" style="color: rgb(253, 67, 67)"></div>
Make the above div available to the JavaScript code using the following line:
const errorMessagesDiv = document.getElementById("errorMessages");
And finally, we need a variable to keep track of the current step.
let currentStep = 1;
Now that we have all our variables in place, here’s the implementation of the nextStep() function:
function nextStep() {
errorMsgs = [];
errorMessagesDiv.innerText = "";
switch (currentStep) {
case 1:
addValidationErrors(name, email, phone);
validateStep(errorMsgs);
break;
case 2:
addValidationErrors(company, jobTitle, yearsExperience);
validateStep(errorMsgs);
break;
case 3:
addValidationErrors(skills, highestDegree);
validateStep(errorMsgs);
break;
}
}
The moment the Next button is pressed, our code first checks which step the user is currently on, and based on this information, it validates the data for that specific step by calling the addValidationErrors() function. If there are errors, we display them. Then, the form calls the validateStep() function to verify that there are no errors before moving on to the next step. If there are errors, it prevents the user from going on to the next step.
Whenever the nextStep() function runs, the error messages are cleared first to avoid appending errors from a different step to existing errors or re-adding existing error messages when the addValidationErrors function runs. The addValidationErrors function is called for each step using the fields for that step as arguments.
Here’s how the addValidationErrors function is implemented:
function addValidationErrors(fieldOne, fieldTwo, fieldThree = undefined) {
if (!fieldOne.checkValidity()) {
const label = document.querySelector(`label[for="${fieldOne.id}"]`);
errorMsgs.push(`Please Enter A Valid ${label.textContent}`);
}
if (!fieldTwo.checkValidity()) {
const label = document.querySelector(`label[for="${fieldTwo.id}"]`);
errorMsgs.push(`Please Enter A Valid ${label.textContent}`);
}
if (fieldThree && !fieldThree.checkValidity()) {
const label = document.querySelector(`label[for="${fieldThree.id}"]`);
errorMsgs.push(`Please Enter A Valid ${label.textContent}`);
}
if (errorMsgs.length > 0) {
errorMessagesDiv.innerText = errorMsgs.join("\n");
}
}
This is how the validateStep() function is defined:
function validateStep(errorMsgs) {
if (errorMsgs.length === 0) {
showStep(currentStep + 1);
}
}
The validateStep()
function checks for errors. If there are none, it proceeds to the next step with the help of the showStep()
function.
function showStep(step) {
steps.forEach((el, index) => {
el.hidden = index + 1 !== step;
});
currentStep = step;
}
The showStep()
function requires the four fieldsets in the DOM. Add the following line to the top of the JavaScript code to make the fieldsets available:
const steps = document.querySelectorAll(".step");
What the showStep()
function does is to go through all the fieldsets
in our form and hide whatever fieldset
is not equal to the one we’re navigating to. Then, it updates the currentStep
variable to be equal to the step we’re navigating to.
The previousStep()
function is linked to the Previous button. Whenever the Previous button is clicked, the error messages are cleared from the page (just as in the nextStep() function), and navigation is handled by the showStep() function.
function previousStep() {
errorMessagesDiv.innerText = "";
showStep(currentStep - 1);
}
Whenever the showStep()
function is called with “currentStep - 1
” as an argument (as in this case), we go back to the previous step, while moving to the next step happens by calling the showStep()
function with “currentStep + 1
” as an argument (as in the case of the validateStep()
function).
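For this navigation to work, the two buttons in the form's markup need to call these functions. One possible wiring is to attach the handlers inline; this is only an assumption about the markup, since the button setup shown earlier in the tutorial may differ:
<button type="button" onclick="previousStep()">Previous</button>
<button type="button" onclick="nextStep()">Next</button>
Using type="button" keeps these controls from accidentally submitting the form while the user is still moving between steps.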
Another way to improve the user experience of a multi-step form is to integrate visual cues that give users feedback on the process they are on. These can include a progress indicator or a stepper that helps users know exactly which step they are on.
To integrate a stepper into our form (sort of like this one from Material Design), the first thing we need to do is add it to the HTML just below the opening <form>
tag.
<form id="jobApplicationForm">
<div class="stepper">
<span><span class="currentStep">1</span>/4</span>
</div>
<!-- ... -->
</form>
Next, we need to query the part of the stepper that will represent the current step. This is the span tag with the class name of currentStep
.
const currentStepDiv = document.querySelector(".currentStep");
Now, we need to update the stepper value whenever the previous or next buttons are clicked. To do this, we need to update the showStep()
function by appending the following line to it:
currentStepDiv.innerText = currentStep;
This line is added to the showStep()
function because the showStep()
function is responsible for navigating between steps and updating the currentStep
variable. So, whenever the currentStep
variable is updated, the currentStepDiv should also be updated to reflect that change.
One major way we can improve the form's user experience is by storing user data in the browser. Multi-step forms are usually long and require users to enter a lot of information about themselves. Imagine a user filling out 95% of a form and then accidentally hitting the F5 key, losing all their progress. That would be a really bad experience for the user.
Using localStorage
, we can store user information as soon as it is entered and retrieve it as soon as the DOM content is loaded, so users can always continue filling out their forms from wherever they left off. To add this feature to our forms, we can begin by saving the user’s information as soon as it is typed. This can be achieved using the input
event.
Before adding the input
event listener, get the form element from the DOM:
const form = document.getElementById("jobApplicationForm");
Now we can add the input
event listener:
// Save data on each input event
form.addEventListener("input", () => {
const formData = {
name: document.getElementById("name").value,
email: document.getElementById("email").value,
phone: document.getElementById("phone").value,
company: document.getElementById("company").value,
jobTitle: document.getElementById("jobTitle").value,
yearsExperience: document.getElementById("yearsExperience").value,
skills: document.getElementById("skills").value,
highestDegree: document.getElementById("highestDegree").value,
};
localStorage.setItem("formData", JSON.stringify(formData));
});
Next, we need to add some code to help us retrieve the user data once the DOM content is loaded.
window.addEventListener("DOMContentLoaded", () => {
const savedData = JSON.parse(localStorage.getItem("formData"));
if (savedData) {
document.getElementById("name").value = savedData.name || "";
document.getElementById("email").value = savedData.email || "";
document.getElementById("phone").value = savedData.phone || "";
document.getElementById("company").value = savedData.company || "";
document.getElementById("jobTitle").value = savedData.jobTitle || "";
document.getElementById("yearsExperience").value = savedData.yearsExperience || "";
document.getElementById("skills").value = savedData.skills || "";
document.getElementById("highestDegree").value = savedData.highestDegree || "";
}
});
Lastly, it is good practice to remove data from localStorage
as soon as it is no longer needed:
// Clear data on form submit
form.addEventListener('submit', () => {
// Clear localStorage once the form is submitted
localStorage.removeItem('formData');
});
The current step should be persisted in localStorage as well. If the user accidentally closes their browser, they should be able to return to wherever they left off, which means the current step value also has to be saved in localStorage.
.
To save this value, append the following line to the showStep()
function:
localStorage.setItem("storedStep", currentStep);
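Putting the pieces together, the showStep() function now ends with three bookkeeping lines. Here is a sketch of the full function as built up from the snippets above:
function showStep(step) {
  steps.forEach((el, index) => {
    el.hidden = index + 1 !== step;
  });
  currentStep = step;
  currentStepDiv.innerText = currentStep;
  localStorage.setItem("storedStep", currentStep);
}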
Now we can retrieve the current step value and return users to wherever they left off whenever the DOM content loads. Add the following code to the DOMContentLoaded
handler to do so:
const storedStep = localStorage.getItem("storedStep");
if (storedStep) {
const storedStepInt = parseInt(storedStep);
steps.forEach((el, index) => {
el.hidden = index + 1 !== storedStepInt;
});
currentStep = storedStepInt;
currentStepDiv.innerText = currentStep;
}
Also, do not forget to clear the current step value from localStorage
when the form is submitted.
localStorage.removeItem("storedStep");
The above line should be added to the submit handler.
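With both cleanup calls in place, the submit handler from earlier looks roughly like this:
// Clear saved data once the form is submitted
form.addEventListener("submit", () => {
  localStorage.removeItem("formData");   // saved field values
  localStorage.removeItem("storedStep"); // saved step position
});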
Creating multi-step forms can help improve user experience for complex data entry. By carefully planning out steps, implementing form validation at each step, and temporarily storing user data in the browser, you make it easier for users to complete long forms.
For the full implementation of this multi-step form, you can access the complete code on GitHub.
As more desktop-based tools and mobile productivity apps shift to the cloud, Cloud-based Integrated Development Environments (IDEs) have become essential for web developers. These cloud IDEs allow you to code, debug, and collaborate directly from your browser, providing a seamless experience for building websites and web applications without the need for local setup.
Popular platforms like GitHub have made it easy to transition to cloud-based coding, and now full-featured Cloud IDEs are the preferred choice for many developers, especially those working in remote and collaborative settings.
In this list, you’ll find 10 of the best Cloud IDEs for web development, each offering unique features to streamline your coding workflow.
CodeSandbox is an excellent choice if you’re into front-end development, especially when you need to prototype quickly. It supports frameworks like React, Vue, and Angular, so it’s easy to start building right away. The intuitive interface makes it straightforward to work with multiple files, and you can see your changes live without a complicated setup.
CodeSandbox also integrates with GitHub, making it simple to pull in code or push changes to your repositories. It’s like having a collaborative code editor and deployment tool in one!
Replit is a versatile, beginner-friendly IDE that supports a wide range of programming languages. It’s designed for collaborative coding, making it ideal for group projects, learning to code, or sharing quick code snippets. Replit’s live multiplayer feature allows you to code in real time with others.
It also comes with an AI-powered assistant and various project templates, helping you get started quickly. Replit’s flexibility makes it a solid choice for developers at any stage, whether you’re just learning or working on full-fledged projects.
GitHub Codespaces takes the popular Visual Studio Code (VSCode) and brings it directly to the cloud. This means you get the full VSCode development experience with add-ons, themes, debugging, command palettes, and even terminal access, all right in your browser.
With Codespaces, you can launch a coding environment from any GitHub repository, making it incredibly convenient for working on projects without the need for local setup. It’s a game-changer for developers who want seamless transitions between devices.
StackBlitz is a cloud IDE tailored for JavaScript and TypeScript development. It’s a fantastic choice if you’re working with frameworks like Angular, React, or Vue, as it’s optimized for them. StackBlitz enables you to start coding instantly with live previews of your work, giving you fast feedback.
One unique feature is offline support, so you can keep working even without an internet connection. It’s a powerful, quick, and reliable choice for front-end developers who want to streamline their workflow.
AWS Cloud9 is Amazon’s cloud-based IDE that offers built-in support for over 40 programming languages. It’s especially valuable if you’re working with serverless applications or AWS services, as it includes a terminal for managing and deploying your cloud resources directly from the editor.
Ideal for both front-end and back-end development, AWS Cloud9 comes with collaborative features, so teams can code together in real-time. With a robust setup for building, running, and debugging code in the cloud, it’s perfect for developers focused on cloud-based applications.
Gitpod automates your dev environment setup, allowing you to start coding instantly by launching pre-configured workspaces. It integrates seamlessly with GitHub, GitLab, and Bitbucket, so you can dive into coding without spending time on setup.
You can customize each project environment using a `.gitpod.yml` file, making it easy to create and share consistent setups across teams. Gitpod’s automation and compatibility with popular repositories make it a great choice for teams and open-source contributors.
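As an illustration, a very small .gitpod.yml might pin a base image and run the project's install and start commands when the workspace opens. The image name and commands below are placeholders, not taken from the article:
image: gitpod/workspace-full
tasks:
  - init: npm install      # runs once when the workspace is first created
    command: npm run dev   # runs every time the workspace starts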
Glitch is a unique cloud IDE focused on building and sharing full-stack apps. It’s popular for its collaborative and community-driven approach, allowing developers to quickly prototype and deploy apps in a fun, supportive environment.
With real-time previews and easy sharing options, Glitch is perfect for creative projects, hackathons, or learning web development. It’s especially well-suited for beginners or anyone looking for a more playful approach to building web applications.
Codeanywhere is a versatile cloud IDE that supports over 75 languages and frameworks, making it a go-to for developers working on diverse projects. It’s packed with features like SSH and FTP support, which means you can connect to remote servers and manage code in multiple environments.
Its various pricing options and flexible setup make Codeanywhere accessible for both solo developers and teams. With a mobile-friendly interface, you can even code on the go, giving you the freedom to work wherever you are.
PaizaCloud is a straightforward, user-friendly cloud IDE that’s ideal for web and server development. Its drag-and-drop interface allows you to set up environments quickly with tools like MySQL, PostgreSQL, and more, so you can dive into coding with ease.
This IDE’s simplicity makes it great for beginners, while its support for multiple web technologies makes it versatile enough for experienced developers. PaizaCloud is perfect for anyone looking for a quick, no-fuss setup to build and test web applications.
Visual Studio Codespaces brings the power of Visual Studio to the cloud, offering a customizable coding experience that’s accessible from anywhere. With the same extensions, themes, and settings available in Visual Studio Code, it feels like working on a local machine.
Perfect for developers who want a seamless, cloud-based experience, Codespaces is scalable with adjustable resources, making it ideal for both personal and collaborative projects. It’s a powerful tool for anyone looking to code, collaborate, and debug without the need for extensive setup.
The post Cloud IDEs For Web Developers – Best Of appeared first on Hongkiat.
Alternatives To Typical Technical Illustrations And Data Visualisations, by Thomas Bohm (2024-11-08)
Good technical illustrations and data visualisations allow users and clients to, in a manner of speaking, take time out, ponder and look at the information in a really accessible and engaging way. It can obviously also help you communicate certain categories of information and data.
The aim of this article is to inspire you and show how, by using different technical illustrations and data visualisations, you can engage and communicate with your users far more effectively and add much more value to the surrounding content.
Below are interesting, less commonly seen examples of technical illustration and data visualisation that show data and information. As you know, the more commonly seen examples are bar graphs and pie charts, but there is so much more than that!
So, keep reading and looking at the following examples, and I will show you some really cool stuff.
Traditionally, technical illustration and data visualisation were done with paper, pens, pencils, compasses, and rulers. Now everything is possible: you can work in analog or digital. Since the mainstream arrival of the internet around 1997, data (text, numbers, symbols) has flourished and become the gold currency of the current day. Anyone with the right software, or who knows a coding language, can learn it, and producing technical illustrations and data visualisations is much easier than in previous years. But that does not always mean that what is done today is better than what was done before.
There are some common categories of audiences:
Sara Dholakia in her article “A Guide To Getting Data Visualization Right” points out the following considerations:
Just look at what we are more often than not presented with.
So, let us dive into some cool examples that you can understand and start using today that will also give your work and content a really cool edge and help it stand out and communicate better.
It provides a great way to show relationships and connections between items and different components, and the 3D style adds a lot to the diagram.
It’s an effective way to highlight and select information or data in relation to its surrounding data and information.
Being great at showing two categories of information and comparing them horizontally, they are an alternative to typical horizontal or vertical bar graphs.
They are an excellent way to enliven overused 2D pie and bar graphs. 3D examples of common graphs give a real sense of quality and depth whilst enhancing the data and information much more than 2D versions.
This diagram is a good way to show the progression and the journey of information and data and how they are connected in relation to their data value. It’s not often seen, but it’s really cool.
A stream graph is a great way to show the data and how it relates to the other data — much more interesting than just using lines as is typically seen.
It provides an excellent way to show a map in a different and more interesting form than the typically seen 2D version. It really gives the map a sense of environment, depth, and atmosphere.
It’s a great way to show the data spatially and how the data value relates, in terms of size, to the rest of the data.
A waterfall chart is helpful in showing the data and how it relates in a vertical manner to the range of data values.
It shows the data against the other data segments and also as a value within a range of data.
A lollipop chart is an excellent method to demonstrate percentage values in a horizontal manner that also integrates the label and data value well.
It’s an effective way to illustrate data values in terms of size and sub-classification in relation to the surrounding data.
There are many options available, including specialized software like Flourish, Tableau, and Klipfolio; familiar tools like Microsoft Word, Excel, or PowerPoint (with redrawing in software like Adobe Illustrator, Affinity Designer, or CorelDRAW); or learning coding languages such as D3, Three.js, P5.js, WebGL, or the Web Audio API, as Frederick O’Brien discusses in his article “Web Design Done Well: Delightful Data Visualization Examples.”
But there is one essential ingredient that you will always need, and that is a mind and vision for technical illustration and data visualisation.
Designing For Gen Z: Expectations And UX Guidelines, by Vitaly Friedman (2024-10-30)
Every generation is different in very unique ways, with different habits, views, standards, and expectations. So when designing for Gen Z, what do we need to keep in mind? Let’s take a closer look at Gen Z, how they use tech, and why it might be a good idea to ignore common design advice and do the opposite of what is usually recommended instead.
This article is part of our ongoing series on UX. You can find more details on design patterns and UX strategy in Smart Interface Design Patterns 🍣 — with live UX training coming up soon. Free preview.
When we talk about Generation Z, we usually refer to people born between 1995 and 2010. Of course, making universal statements about a cohort in which some are adults in their late 20s and others are school students is at best ineffective and at worst wrong, yet there are some attributes that stand out compared to earlier generations.
Gen Z is the most diverse generation in terms of race, ethnicity, and identity. Research shows that young people today are caring and proactive, and far from being “slow, passive and mindless” as they are often described. In fact, they are willing to take a stand and break their habits if they deeply believe in a specific purpose and goal. Surely there are many distractions along that way, but the belief in fairness and sense of purpose has enormous value.
Their values reflect that: accessibility, inclusivity, sustainability, and work/life balance are top priorities for Gen Zs, and they value experiences, principles, and social stand over possessions.
Gen Z grew up with technology, so unsurprisingly digital experiences are very familiar and understood by them. On the other hand, digital experiences are often suboptimal at best — slow, inaccessible, confusing, and frustrating. Plus, the web is filled with exaggerations and generic but fluffy statements. So it’s not a big revelation that Gen Zs are highly skeptical of brands and advertising by default (rightfully so!), and rely almost exclusively on social circles, influencers, and peers as main research channels.
They might sometimes struggle to spot what’s real and what’s not, but they are highly selective about their sources. They are always connected and used to following events live as they unfold, so unsurprisingly, Gen Z tends to have little patience.
And sure enough, Gen Z loves short-form content, but that doesn’t necessarily equate to a short attention span. Attention span is context-dependent, as documentaries and literature are among Gen Z’s favorites.
Most design advice on Gen Z focuses on producing “short form, snackable, bite-sized” content. That content is optimized for very short attention spans, TikTok-alike content consumption, and simplified to the core messaging. I would strongly encourage us to do the opposite.
We shouldn’t discount Gen Z as a generation with poor attention spans and urgent needs for instant gratification. Gen Zs have very strong beliefs and values, but they are also inherently curious and want to reshape the world. We can tell a damn good story. Captivate and engage. Make people think. Many Gen Zs are highly ambitious and motivated, and they want to be challenged and to succeed. So let’s support that. And to do that, we need to remain genuine and authentic.
As Michelle Winchester noted, Gen Zs have very diverse perspectives and opinions, and they possess a discerning ability to detect disingenuous content. That's also where mistrust towards AI comes into play, along with AI fatigue. As Nilay Patel mentioned on the Ezra Klein Show, when somebody says today that something is "AI-generated", it's usually not praise but rather a testament to how poor and untrustworthy it actually is.
Gen Z expects better. Hence brands that value sincerity, honesty, and authenticity are perceived as more trustworthy compared to brands that don’t have an opinion, don’t take a stand, don’t act for their beliefs and principles. For example, the “Keep Beauty Real” campaign by Dove (shown below) showcases the value of genuine human beauty, which is so often missed and so often exaggerated to extremes by AI.
So whenever you can, aim for the opposite of perfect. Say what you think and do what you promise. Reflect the real world with real people using real products, however imperfect they are. That’s how you build a strong relationship and trust with Gen Z.
Because Gen Z are so incredibly diverse, their needs are extremely diverse and demanding as well. This doesn’t necessarily mean customization of features or adapting the layout entirely based on custom settings or preferences. But it does mean providing an accessible experience out of the box.
Simple things matter. High enough color contrast. Links that look like links. Buttons that look like buttons. Forms that are broken down into simple steps to follow. Diverse gender and identity options. Proper tab order. Keyboard accessibility. Reduced motion for people who opt in for reduced motion sickness. Dark mode and light mode.
It’s nothing groundbreaking really. Just basic things that help focus and get things done. In fact, accessibility is better for everyone — not just for Gen Z (who expect and demand it) but also for absolutely everybody around the world.
Many design mock-ups that we are creating today are typically designed and presented on large screens first. However, depending on your user base, a vast majority of users (and that’s especially true for Gen Zs), will use almost exclusively mobile devices to access your products and services. This surely will be different for enterprise software, but consumer products are much less likely to be used on desktop devices by younger Gen Zs.
Get into the habit of presenting your design mock-ups in mobile views only first. Help people read better. Content design has never been more important — especially when designing for mobile screens. Here are a few guidelines to keep in mind:
Many people, and especially Gen Z, turn on closed captioning by default these days. Perhaps the spoken language isn’t their native language, or perhaps they aren’t quite familiar with the accent of some speakers, or maybe they don’t have headphones nearby, don’t want to use them, or can’t use them. In short, closed captions are better for everybody and they increase ROI and audience.
Gareth Ford Williams has put together a visual language of closed captions and has kindly provided a PDF cheatsheet that is commonly used by professional captioners. There are some generally established rules about captioning, and here are some that I found quite useful when working on captioning for my own video course:
On YouTube, users can select a font used for subtitles and choose between monospaced and proportional serif and sans-serif, casual, cursive, and small-caps. But perhaps, in addition to stylistic details, we could provide a careful selection of fonts to help audiences with different needs. This could include a dyslexic font or a hyper-legible font, for example.
Additionally, we could display presets for various high contrast options for subtitles. This gives users a faster selection, requiring less effort to configure just the right combination of colors and transparency. Still, it would be useful to provide more sophisticated options just in case users need them.
On the other hand, in times of instant gratification with likes, reposts, and leaderboards, people often learn that a feeling of achievement comes from extrinsic signals, like reach or attention from other people. That makes it all the more important to support intrinsic motivation.
As Paula Gomes noted, intrinsic motivation is characterized by engaging in behaviors just for their own sake. People do something because they enjoy it. It is when they care deeply for an activity and enjoy it without needing any external rewards or pressure to do it.
Typically this requires 3 components:
In practical terms, that means setting people up for success. Preparing the knowledge and documents and skills they need ahead of time. Building knowledge up without necessarily rewarding them with points. It also means allowing people to have a strong sense of ownership of the decisions and the work they are doing. And adding collaborative goals that would require cooperation with team members and colleagues.
The younger people are, the more difficult it is to distinguish between what’s real and what isn’t. Whenever possible, show sources or at least explain where to find specific details that back up claims that you are making. Encourage people to make up their mind, and design content to support that — with scientific papers, trustworthy reviews, vetted feedback, and diverse opinions.
And: you don’t have to shy away from technical details. Don’t make them mandatory to read and understand, but make them accessible and available in case readers or viewers are interested.
In times where there is so much fake, exaggerated, dishonest, and AI-generated content, it might be just enough to be perceived as authentic, trustworthy, and attention-worthy by the highly selective and very demanding Gen Z.
I keep repeating myself like a broken record, but better accessibility is better for everyone. As you hopefully have noticed, many attributes and expectations that we see in Gen Z are beneficial for all other generations, too. It’s just good, honest, authentic design. And that’s the very heart of good UX.
What I haven’t mentioned is that Gen Z genuinely appreciates feedback and values platforms that listen to their opinions and make changes based on their feedback. So the best thing we can do, as designers, is to actively involve Gen Z in the design process. Designing with them, rather than designing for them.
And, most importantly: with Gen Z, perhaps for the first time ever, inclusion and accessibility are becoming a default expectation for all digital products. With them comes a sense of fairness, diversity, and respect. And, personally, I strongly believe that it's a great thing, and a testament to how remarkable Gen Zs actually are.
I’ve just launched “How To Measure UX and Design Impact” 🚀 (8h), a new practical guide for UX leads to measure UX impact on business. Use the code 🎟 IMPACT
to save 20% off today. And thank you for your kind and ongoing support, everyone! Jump to details.
Managing multiple PHP versions is a common challenge when developing PHP applications, where applications often require different versions due to varying framework dependencies and compatibility requirements. While switching between PHP versions can be daunting, especially at the system level, several tools can streamline this process.
In this article, we’ll explore effective solutions for managing multiple PHP versions, helping you choose the right tool to simplify your development workflow. So, without further ado, let’s get started.
Homebrew, the popular package manager for macOS and Linux, simplifies PHP version management. After installing Homebrew from their official website, follow these steps to set up and switch between PHP versions:
To manage multiple PHP versions with Homebrew, we’ll first tap into Shivam Mathur’s widely-used PHP repository. This repository provides access to various PHP versions that you can install:
brew tap shivammathur/php
Once the repository is tapped, you can install your desired PHP versions. Here’s how to install PHP 7.4, 8.2, and the latest version (currently 8.3):
brew install shivammathur/php/php@7.4
brew install shivammathur/php/php@8.2
brew install shivammathur/php/php
Feel free to install any combination of versions that your projects require. Each version will be stored separately on your system.
While Homebrew allows you to install multiple PHP versions simultaneously, your system can only use one version at a time through its PATH
. Think of it like having multiple PHP versions installed in your toolbox, but only one can be your active tool.
Let’s assume you are currently running PHP 8.3, but now you need to switch to PHP 7.4. First, unlink the current version to “disconnect” the currently active PHP version from PATH
.
brew unlink php
After unlinking the current version, you can link the other version using the brew link
command:
brew link php@7.4
Now, when you run php -v
, it will show the active PHP version as 7.4.
Homebrew makes it easy to use multiple PHP versions on macOS and Linux through the CLI. But it also comes with its own set of pros and cons. So consider the following when deciding if Homebrew is the right choice for you.
PHP Monitor is a lightweight macOS application designed to help developers manage and switch between different PHP versions easily. It offers a familiar and intuitive UI that appears at the top of your screen, allowing you to switch between PHP versions with a single click. This app integrates with Homebrew, making it easier to manage your PHP setup without using the terminal.
Within the app, you can view which PHP versions are installed on your machine and which version is currently active globally, access the PHP configuration file, view the memory limit, and more.
The app also provides a simple way to install and update PHP versions from the Manage PHP Installations… menu.
PHPCTL is a tool designed to help developers easily switch between different PHP versions by leveraging Docker containers. This makes PHPCTL portable and platform-independent, allowing you to manage PHP versions on any operating system that supports Docker. It also provides additional CLI tools, such as phpctl create
for new projects, phpctl repl
for interactive shells, and phpctl init
for configuration setup, among other handy features.
Before getting started, you’ll need Docker installed on your system. Docker Desktop works great, or if you’re on macOS, you might prefer OrbStack.
Once you have Docker installed, you can install PHPCTL using the following command:
/bin/bash -c "$(curl -fsSL https://phpctl.dev/install.sh)"
Or, if you have Homebrew installed, you can run:
brew install opencodeco/phpctl/phpctl
This will download the PHPCTL binary to your system and make it executable, allowing you to use the tool right away. The script automatically installs PHPCTL and sets up the necessary paths, so no manual configuration is required.
After installation, you can check if it was successfully installed by running:
phpctl list
This command will list all the subcommands and other information about the current PHP installation.
You can also run the php
and composer
commands directly.
php -v
composer -v
These two commands will actually run inside a Docker container. PHPCTL will automatically mount the current directory to the container, so you can work on your project as if you were working on your local machine.
Unlike with Homebrew or PHP Monitor, where you need to run a command or click on the UI to switch to the PHP version, with PHPCTL, you will need to create a file .phpctlrc
and specify which PHP version you’d like to run within the given directory.
PHP_VERSION=83
When you run php
or composer
in the directory, PHPCTL will automatically switch to the PHP version specified in the .phpctlrc
file.
That’s all. It’s very convenient and provides a seamless development experience once it is fully configured. However, it also comes with its own set of pros and cons.
PVM simplifies PHP version management on Windows. Similar to Node Version Manager (nvm) but specifically for PHP, PVM eliminates common Windows PATH
variable headaches and streamlines switching between different PHP versions.
Download the latest PVM release from the official GitHub repository. Then, create a folder at C:\Users\YourUsername\.pvm\bin
and place the downloaded pvm.exe
in this folder.
Lastly, add the .pvm\bin
folder to your system’s PATH
variable through System Properties > Environment Variables.
Once installed, you can use PVM to switch between PHP versions quickly and easily. Since it is heavily inspired by nvm, the commands are similar. Here are some commands to get you started:
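For instance, the two commands walked through below follow the familiar nvm-style pattern (a quick sketch):
pvm install 8.2   # download and install PHP 8.2
pvm use 8.2       # switch the active PHP version to 8.2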
PVM makes it easier to install multiple PHP versions on Windows. If you need a version that’s not currently installed on your computer, you can use the install command:
pvm install 8.2
…which will download and install PHP 8.2 on your computer.
If you want to switch to a specific PHP version, use the use command. You must specify at least the major and minor version, and PVM will choose the latest available patch version if it’s not provided.
pvm use 8.2
If you want to switch to a specific patch version, include the patch number as well:
pvm use 8.2.3
That’s all. PVM is a great tool for managing PHP versions on Windows, but it also comes with its own set of pros and cons.
Laravel Valet is a lightweight development environment designed specifically for macOS that makes PHP development a breeze. What makes Valet particularly convenient is its built-in PHP version management that allows you to switch between PHP versions for different projects without complex configurations.
To get started, install Valet using Composer as a global package:
composer global require laravel/valet
After installation, run the Valet installation command:
valet install
Valet makes PHP version switching simple with the valet use php@version
command. For example:
valet use php@8.2
It automatically installs the version via Homebrew if it’s currently missing.
For project-specific PHP versions, you can create a .valetrc
file in your project’s root directory with the line php=php@8.2
. Then, simply run:
valet use
…and Valet will automatically switch to the PHP version specified in the .valetrc
file.
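In other words, the per-project flow described above boils down to two steps. A sketch, run from the project's root directory:
echo "php=php@8.2" > .valetrc   # pin the project to PHP 8.2
valet use                       # Valet reads .valetrc and switches versions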
With the right tools, managing multiple PHP versions becomes effortless across macOS, Linux, or Windows. Hopefully, this article helps you pick the solution that matches your workflow.
The post 5 Ways to Manage Multiple Versions of PHP appeared first on Hongkiat.
In our previous article, we covered how to create simple pages in Flask and use Jinja2 as the templating engine. Now, let's explore how Flask handles requests.
Understanding how HTTP requests work and how to manage them in Flask is key, as this allows you to build more interactive and dynamic web apps, such as building a form, API endpoints, and handling file uploads.
Without further ado, let’s get started.
An HTTP request is a message sent, usually by a browser, to the server asking for data or to perform an action. For example, when you visit a webpage, your browser sends a GET request to the server to retrieve the page’s content.
There are several different types of HTTP requests, and Flask can handle all of them, including GET
to retrieve data, POST
to send data to the server like submitting a form, PUT
to update existing data on the server, and DELETE
to delete data from the server.
Flask makes handling requests straightforward by using routes. In our previous articles, we used routes to create static and dynamic pages. By default, routes only respond to GET
requests, but you can easily handle other HTTP methods by specifying them in the route.
Assuming we have a contact page at /contact
, we probably would want the page to handle both GET
and POST
requests to allow users to load the page, as well as to submit the form. To make the page handle these two HTTP methods, we can pass in the methods
argument, for example:
@app.route('/contact', methods=['GET', 'POST'])
def submit():
    if request.method == 'POST':
        data = request.form['input_data']
        return render_template('contact.html', data=data)
    return render_template('contact.html')
In this example, users can load the /contact
page. When the form is submitted, Flask retrieves the form data and passes it to the contact.html
template. Then, within the template, you can access and process the data using Jinja2 templating.
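For example, a stripped-down contact.html could render a form whose field name matches request.form['input_data'] and echo the submitted value back. This template is a hypothetical sketch, not part of the original series:
<form method="post">
  <input type="text" name="input_data">
  <button type="submit">Send</button>
</form>
{% if data %}
  <p>You sent: {{ data }}</p>
{% endif %}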
Data may be passed to a URL via query parameters. This is commonly found on a search page where the search query is passed as a query parameter. These are the parts of the URL after a ?
, like /search?query=flask
. Flask makes it easy to access query parameters with the request.args
dictionary, for example:
@app.route('/search')
def search():
    query = request.args.get('query')
    if query:
        # Meilisearch
        # See: https://github.com/meilisearch/meilisearch-python
        result = index.search(query)
        return render_template('search.html', result=result)
    return 'No search query provided.'
In this case, when a user visits /search?query=flask
, we take the query and use it to retrieve the search result, which is then passed to the search.html
template for rendering.
When building an API, we often need data delivered in JSON format. Flask provides a simple way to return JSON responses with the jsonify
function. Here’s an example of handling JSON data:
from flask import jsonify, make_response

@app.route('/api/data')
def api_data():
    return make_response(jsonify({"message": "Success"}), 200)
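On the request side, Flask can also parse a JSON body sent by the client; request.get_json() returns it as a Python dict. A small sketch, not part of the original article, with a hypothetical /api/echo endpoint:
from flask import request, jsonify

@app.route('/api/echo', methods=['POST'])
def echo():
    payload = request.get_json()          # parse the JSON request body
    return jsonify({"received": payload}), 200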
Flask also makes handling file uploads easy, using the request.files
object.
@app.route('/upload', methods=['GET', 'POST'])
def upload_file():
    if request.method == 'POST':
        file = request.files['file']
        file.save(f'/uploads/{file.filename}')
        return redirect(url_for('index'))
In this example, when a user submits a file via the form, Flask saves the file to the specified directory and then redirects the user to the homepage.
Sometimes you also need to get headers or cookies from the request in your app, such as for passing authentication or tracking user data. Flask provides easy access to headers through request.headers
and cookies through request.cookies
. Here’s a basic example of how we use it to authenticate for an API endpoint:
@app.route('/api/data')
def check():
    auth = request.headers.get('Authorization')
    nonce = request.cookies.get('nonce')
    # Simple authentication check
    if auth == 'Bearer X' and nonce == 'Y':
        return jsonify({"message": "Authenticated"}), 200
    else:
        return jsonify({"message": "Unauthorized"}), 401
Flask makes handling HTTP requests a breeze. Whether you’re working with basic GET
requests, handling form submissions with POST
, or dealing with more complex scenarios like JSON data and file uploads, it provides the APIs, functions, and tools you need to get the job done. We’ve only scratched the surface of Flask’s request-handling capabilities, but hopefully, this gives you a solid foundation to start building your own Flask apps.
The post How to Handle HTTP Requests in Flask appeared first on Hongkiat.