Analyzing Digital Tool User Acceptance Testing
Summary
Building a digital tool is only half the job. A technically functional system does not automatically mean that end users will adopt it or find it useful. In the previous article, we explored strategies for implementing digital tools for basic users and found that user acceptance is the decisive factor. User Acceptance Testing (UAT) is a systematic way of determining whether a tool meets real user needs in a real-world setting before it is officially released.
What Is User Acceptance Testing?
UAT is the final testing phase of software development, conducted by the actual end users of the system. Otaduy and Diaz (2017) define UAT as a process in which software is validated in a real setting by its intended audience. The goal is not so much to verify formally defined requirements as to ensure that the software satisfies the customer’s actual needs. This distinguishes UAT from technical unit or integration testing: where developers check whether the code works correctly, UAT determines whether the tool works correctly from the user’s perspective.
Poston, Sajja and Calvert (2014) describe UAT as a critical phase that typically occurs after the system has been built but before the software is released. Modern business systems are more complex and decentralized than ever before, which makes UAT both more challenging and more important to carry out effectively.
The UAT Process
User acceptance testing is not a single event but a structured process. Poston et al. (2014) outline the typical steps. First, test scripts are written that describe the scenarios to be tested from the user’s perspective. Users then carry out tests both by following the scripts and by exploring the system freely. Identified issues are reported to a business analyst, who logs the relevant defects for the development team to address. This cycle repeats until users sign off that the system works as needed.
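To make the cycle more concrete, the sketch below models test scripts, user results, and logged defects as simple Python data structures. The class names, fields, and sign-off rule are illustrative assumptions, not part of any specific UAT tool or of the process described by Poston et al. (2014).

```python
from __future__ import annotations

from dataclasses import dataclass, field
from enum import Enum


class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass
class Defect:
    description: str
    severity: Severity
    resolved: bool = False  # flipped once the development team has addressed it


@dataclass
class TestScript:
    name: str
    steps: list[str]
    passed: bool | None = None                    # None until a user has run the script
    defects: list[Defect] = field(default_factory=list)


def ready_for_sign_off(scripts: list[TestScript]) -> bool:
    """One possible sign-off rule: every script passes and no high-severity defect is open."""
    all_passed = all(s.passed for s in scripts)
    no_open_blockers = not any(
        d.severity is Severity.HIGH and not d.resolved
        for s in scripts
        for d in s.defects
    )
    return all_passed and no_open_blockers
```

In practice this information usually lives in a test management or issue-tracking tool; the point is simply that each iteration produces structured data that can later be analysed.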
In agile development, UAT recurs with every sprint. Otaduy and Diaz (2017) point out that the tight iteration cycle places particular demands on UAT: traditional in-person testing sessions do not always scale well when validation is needed every few weeks. This has driven new approaches such as asynchronous testing methods, which allow testers to participate at their own pace without requiring everyone to be available at the same time.
Key Metrics
Clear indicators are needed to measure the success of UAT. Key metrics include task completion rate (how many test scripts were completed successfully), the number and severity of defects identified, and user satisfaction. Bobrova and Perego (2025) developed a design toolkit based on the UTAUT2 model, evaluated using both task completion rates and qualitative user feedback. Their approach is directly applicable to UAT contexts as well.
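As a simple illustration of how these numbers come together, the sketch below computes a task completion rate and a defect tally by severity from one hypothetical round of testing; the scenario names and figures are invented.

```python
from collections import Counter

# Hypothetical UAT round: script name -> (completed successfully?, severities of defects found)
results = {
    "Create shipment":   (True,  []),
    "Log port call":     (True,  ["medium"]),
    "Export CO2 report": (False, ["high", "low"]),
}

completed = sum(1 for done, _ in results.values() if done)
completion_rate = completed / len(results)

defects_by_severity = Counter(
    severity for _, severities in results.values() for severity in severities
)

print(f"Task completion rate: {completion_rate:.0%}")  # Task completion rate: 67%
print(dict(defects_by_severity))                       # {'medium': 1, 'high': 1, 'low': 1}
```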
Davis (1989) identified perceived usefulness and perceived ease of use as the two critical factors in technology acceptance. These dimensions sit at the heart of any UAT measurement framework: testing should answer whether users genuinely find the tool valuable and approachable in the context of their daily work.
Analysis Methods
UAT results can be analysed using both qualitative and quantitative methods. Qualitative analysis involves coding user feedback, defect reports, and session observations into themes: which features caused the most difficulty? Where did users make mistakes? What worked well? This type of analysis surfaces patterns that raw numbers alone cannot reveal.
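Once feedback has been coded by hand, even a simple tally of how often each theme appears helps show where problems cluster. The comments and theme labels below are invented for illustration.

```python
from collections import Counter

# Each piece of feedback has been manually coded with one or more themes.
coded_feedback = [
    {"comment": "Couldn't find the export button", "themes": ["navigation"]},
    {"comment": "Report generation felt slow",     "themes": ["performance"]},
    {"comment": "Menu labels were confusing",      "themes": ["navigation", "terminology"]},
]

theme_counts = Counter(theme for item in coded_feedback for theme in item["themes"])
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")
# navigation: 2
# performance: 1
# terminology: 1
```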
For quantitative analysis, Davis’s (1989) Technology Acceptance Model (TAM) or the broader UTAUT2 framework can serve as theoretical lenses. These models allow researchers to use questionnaires to measure user attitudes and behavioural intentions before and after UAT, and to compare results statistically. This provides evidence-based support for go-live decisions and helps identify which factors most strongly influence whether users will integrate the tool into their daily routines.
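For example, if the same users rate perceived usefulness on a seven-point Likert scale before and after a UAT round, the two sets of scores can be compared with a paired t-test. The sketch below uses SciPy and invented scores; it is not tied to any particular TAM or UTAUT2 questionnaire.

```python
from scipy import stats

# Hypothetical perceived-usefulness scores (1-7 Likert) for the same five users,
# collected before and after the UAT round.
before = [3, 4, 2, 5, 3]
after  = [5, 5, 4, 6, 4]

t_stat, p_value = stats.ttest_rel(after, before)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A small p-value suggests the shift in attitude is unlikely to be due to chance alone.
```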
Best Practices
Research consistently points to several UAT best practices. First, involving real end users is non-negotiable. QA specialists or developers cannot substitute for actual users who bring knowledge of business processes and workflows (Poston et al., 2014). Second, scheduling is critical: UAT sessions should be planned to minimize disruption to users’ normal duties while ensuring they can be fully present and focused when testing. Third, feedback channels must be clear and responsive. Users need an easy way to report issues, and they need to see that their feedback leads to action.
Otaduy and Diaz (2017) identify three root causes of poor customer engagement in UAT: lack of time, lack of motivation, and lack of knowledge. All three can be addressed through thoughtful process design. Clear test scripts lower the knowledge barrier; flexible scheduling helps users find the time; and visibly communicating how feedback has been acted upon builds motivation to keep participating.
Piia Lukkaroinen
Researcher, Maritime Logistics Research Center
Piia Lukkaroinen is a UX/UI Designer and researcher at SAMK Maritime Logistics Research Center, where she specializes in user-centered design for maritime digital solutions. Her work focuses on bridging the gap between technological innovation and practical user adoption.
This article was written as part of the Sustainable Flow project, which is part of the Interreg Central Baltic program. The project is creating an app that reduces carbon dioxide emissions from ports.