Special thanks to Karen Kerno, Sneha Dasgupta, Rachel Rosenberg, and Nick Bachan for piloting usability heuristic evaluations, training adjacent groups on how to conduct them, and evolving the framework to be comprehensive at Indeed.
Heuristic evaluation (Nielsen and Molich, 1990; Nielsen, 1994) is a usability engineering method for finding the usability problems in a user interface design so that they can be attended to as part of an iterative design process.
For Accessibility & Inclusivity considerations, use this checklist.
Download the PDF here.
Last updated: Mar 9, 2025
Gain Leadership Alignment: Secure leadership agreement on the evaluation timeline, deliverables, and how findings will drive prioritized improvements.
Build a Cross-Functional Evaluation Team: Assemble a diverse, cross-functional team to conduct the usability evaluation.
Define the Evaluation Scope: Decide if the scope will include single pages, multi-page flows, cross-product workflows, and/or single or multiple devices and experiential touchpoints.
Set Up the Evaluation Template: Customize your evaluation. Start with the core heuristics below, then add relevant considerations for content, AI, performance, and more. Feel free to add or adjust questions as needed.
Document Findings with a Scoring Template: Use a scoring template to rate severity and reach, provide concise issue descriptions, and support findings with available user research or behavioral data.
Develop Prioritized Recommendations: Generate prioritized recommendations based on evaluation findings, summarized in a concise, topline report for the team.
Collaborate on Solutions and Prioritize: Work with the relevant teams to brainstorm solutions and prioritize them using an effort-impact matrix that accounts for severity and reach (see the sketch below).
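If your team tracks findings in a spreadsheet or script, the scoring and prioritization steps above can be expressed directly. The sketch below is one illustrative way to combine severity and reach into an impact score and place each finding in an effort-impact quadrant; the scales, cutoffs, and field names are assumptions for the example, not part of the framework.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """One usability issue recorded in the scoring template."""
    description: str
    severity: int   # 0 (not a problem) to 4 (usability catastrophe), per Nielsen's severity scale
    reach: float    # fraction of users affected, 0.0 to 1.0 (illustrative)
    effort: int     # rough implementation effort, 1 (low) to 5 (high) (illustrative)

def impact(finding: Finding) -> float:
    """Illustrative impact score: severity weighted by how many users hit the issue."""
    return finding.severity * finding.reach

def quadrant(finding: Finding, impact_cutoff: float = 2.0, effort_cutoff: int = 3) -> str:
    """Place a finding in an effort-impact matrix quadrant."""
    high_impact = impact(finding) >= impact_cutoff
    low_effort = finding.effort < effort_cutoff
    if high_impact and low_effort:
        return "quick win"
    if high_impact:
        return "major project"
    if low_effort:
        return "fill-in"
    return "reconsider"

findings = [
    Finding("No loading indicator on search", severity=3, reach=0.8, effort=2),
    Finding("Error message shows a raw error code", severity=2, reach=0.3, effort=1),
    Finding("No undo after bulk delete", severity=4, reach=0.1, effort=4),
]

# Sort so the highest-impact issues surface first in the topline report.
for f in sorted(findings, key=impact, reverse=True):
    print(f"[{quadrant(f)}] impact={impact(f):.1f} effort={f.effort} - {f.description}")
```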
For more information, explore how-to articles from Nielsen Norman Group and Indeed UX.
The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.
Is the system consistently providing feedback to the user about what's happening?
Are loading times, processing, and other system actions clearly communicated?
Does the user understand the current state of the system at all times?
(Content) Is the content's status (e.g., updated, draft) clearly communicated?
(AI & Automation) Does the AI provide clear indications of when it is processing, generating, or retrieving information?
(AI & Automation) If the AI encounters an error or limitation, is this clearly communicated to the user?
(Performance & Responsiveness) Does the product load quickly and efficiently, especially on varying network conditions and devices?
(Performance & Responsiveness) Are animations and transitions smooth and responsive?
(Performance & Responsiveness) Does the product handle large datasets or complex operations without significant delays?
The system should speak the users' language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.
Does the system use language, concepts, and conventions familiar to the user?
Does the system follow real-world conventions and expectations?
Are metaphors and analogies used appropriately to aid understanding?
(Content) Is the content's language and tone appropriate for the target audience?
(AI & Automation) Does the AI's language and responses feel natural and human-like, while maintaining clarity?
(AI & Automation) Does the AI's behavior align with user expectations and real-world scenarios?
(Context of Use) Does the system adapt its presentation and functionality to the user's specific environment and device?
(Context of Use) Does the product account for potential interruptions or distractions in the user's environment?
(Context of Use) Does the product adapt to different user skill levels and technical proficiency?
(Context of Use) Does the product account for different input types, such as touch, keyboard, or voice?
Users often choose system functions by mistake and will need a clearly marked "emergency exit" to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.
Does the system provide clear "undo" and "redo" options?
Can users easily exit unwanted states or actions?
Does the system allow users to recover from errors without unnecessary frustration?
(Content) Can users easily modify or delete their own content contributions?
(AI & Automation) Can users easily interrupt or stop AI processes?
(AI & Automation) Can users provide feedback or corrections to the AI's outputs?
(AI & Automation) Does the user have control over the level of automation?
(Task Flows) Are task flows logical, efficient, and minimize user effort?
(Task Flows) Does the system provide clear and intuitive navigation throughout the task flow?
(Task Flows) Does the system minimize the number of steps required to complete common tasks?
(Task Flows) Does the system provide clear feedback and progress indicators during multi-step task flows?
Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions.
Are consistent terminology, design elements, and functionality used throughout the system?
Does the system adhere to platform conventions and industry standards?
Are similar actions performed in similar ways across the interface?
(Content) Is content formatted consistently (e.g., headings, lists, links)?
(AI & Automation) Does the AI maintain consistent behavior and responses across interactions?
(AI & Automation) Does the AI adhere to established ethical guidelines and standards?
(Interoperability & Integration) Does the product integrate smoothly with other relevant tools and platforms?
(Interoperability & Integration) Are data import and export options provided in common formats?
(Interoperability & Integration) Does the product support APIs or other integration methods?
Even better than good error messages is a careful design which prevents a problem from occurring in the first place. Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action.
Is the system designed to minimize the likelihood of errors?
Are clear and informative error messages provided when errors do occur?
Does the system provide helpful suggestions to prevent errors?
Are there confirmation steps for actions that cannot be easily undone?
(Content) Does the system prevent content errors (e.g., typos, broken links) through validation?
(AI & Automation) Does the AI have safeguards to prevent the generation of harmful or biased content?
(AI & Automation) Does the AI provide warnings or suggestions when it detects potential errors or inconsistencies?
(Security & Privacy) Are sensitive actions (e.g., password changes, financial transactions) protected with appropriate security measures?
Minimize the user's memory load by making objects, actions, and options visible. The user should not have to remember information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate.
Are objects, actions, and options visible or easily retrievable?
Does the system minimize the user's memory load?
Are instructions and help readily available when needed?
(Content) Is key content highlighted and easily scannable?
(AI & Automation) Does the AI provide contextually relevant information and suggestions?
(AI & Automation) Can the AI recall previous interactions and user preferences?
(Onboarding & Learning Curve) Is the onboarding process clear, concise, and engaging?
(Onboarding & Learning Curve) Does the product provide adequate guidance and tutorials for new users?
Accelerators — unseen by the novice user — may often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.
Does the system provide accelerators or shortcuts for experienced users?
Can the system be customized to meet individual user needs?
Does the system allow for efficient task completion by all users?
(Content) Is content easily searchable and filterable?
(AI & Automation) Can the AI adapt to different user skill levels and preferences?
(AI & Automation) Does the system implement an appropriate level of automation or augmentation for each task, maximizing efficiency while maintaining the necessary level of human control?
(Performance & Responsiveness) Does the product consume an excessive amount of device resources?
(Performance & Responsiveness) Is the product optimized for mobile and tablet devices?
(Micro-interactions) Are micro-interactions (e.g., button states, hover effects, animations) clear, responsive, and contribute to a positive user experience?
Dialogues should not contain information which is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.
Is the interface clean, uncluttered, and visually appealing?
Is unnecessary information or functionality avoided?
Does the visual design support the primary tasks and goals of the user?
(Content) Is content concise and focused, avoiding unnecessary jargon or fluff?
(AI & Automation) Are AI responses concise and relevant to the user's query?
(AI & Automation) Is the AI's presence and interactions integrated seamlessly into the user interface?
(Emotional Design) Does the product create a sense of trust and confidence in (and between) its users?
(Emotional Design) Does the product evoke a sense of delight or engagement through its design and interactions?
(Emotional Design) Does the product’s design, content, and interactions reflect the intended brand personality and values?
Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.
Are error messages expressed in plain language (no codes)?
Do error messages precisely indicate the problem?
Do error messages suggest a solution?
(Content) Are content-related error messages clear and actionable?
(AI & Automation) Does the AI provide clear explanations for its actions and decisions?
(AI & Automation) Does the AI offer suggestions for how to correct or improve its outputs?
(AI & Automation) Does the AI offer the user a way to understand why it responded in the way that it did?
(User Feedback & Iteration) Is the company able to effectively discern, prioritize, and respond to critical user feedback across relevant teams?
(User Feedback & Iteration) Is user feedback actively collected, analyzed, and prioritized for product development?
Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user's task, list concrete steps to be carried out, and not be too large.
Is help documentation easy to find and access?
Is help documentation concise, task-oriented, and easy to understand?
Does the system provide context-sensitive help when needed?
Does documentation provide examples of how to complete common tasks?
(Content) Does help documentation address content-related questions and tasks?
(AI & Automation) Does the system provide clear documentation on how the AI works and how to interact with it?
(AI & Automation) Does the AI provide on-demand help and guidance?
(Security & Privacy) Are security and privacy policies easily accessible and understandable?
(Security & Privacy) Does the product clearly communicate how user data is collected, used, and protected?
(User Feedback & Iteration) Does the product provide easy ways for users to provide feedback?
Resources attributed in this checklist: