One of Deque’s products is a lightweight web app for accessibility testing aimed at development team members who have little to no accessibility experience. The web app includes automated accessibility testing as well as wizard-like tools which guide the user through some basic manual testing.
The second guided tool we wanted to include in the web app was the “forms tool,” which had previously been developed and implemented in a product where it wasn’t a natural fit. My task was to update the forms tool’s logic and questions so we could include an improved version in the new web app. I worked with various team members to redesign the logic and rewrite the questions, then built prototypes to test the clarity of the questions and the accuracy of the logic with end users.
Colleagues I worked with for this project: Harris Schneiderman, Dian Fay, Aaron Pearlman
Honorable mentions to Matt Isner and Dylan Barrell, who worked on prototypes and logic for earlier versions of this tool. Dylan was the architect behind the original proof of concept.
Logic Mapping
The “forms tool” has existed in multiple versions. The main improvements between versions have been to the logic and question text, which we’ve changed as we’ve learned more about how users interpret and answer the questions. The logic is complicated, so the first thing I did when I started work on the new version was map out the logic of the previous version, something that hadn’t been done when that version was finalized.
Brainstorming Session
I reviewed notes from usability sessions on previous versions of the tool, along with outstanding tickets for improvements we hadn’t had time to make. I pulled together a quick list of areas I thought we could improve and brought it to the team for a brainstorming and discussion session. We spent a couple of hours as a group talking through different options, and I came away with a list of logic improvements to try.
Logic Flowchart
As I relearned and refined the logic, I needed a better way to communicate the new version to the developers on the team. Poor documentation had also slowed down work on previous versions of the tool: we had to spend a lot of time relearning the logic before we could actively improve it. I used draw.io to create a digital flowchart, which I could iterate on much more quickly than prototypes. The flowchart also led to better team discussions about the logic and made it easier for me to rewrite the questions.
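To give a sense of what that logic looks like from a developer’s point of view, here is a minimal sketch of how a branching question flow like this could be represented as a decision graph. The questions, IDs, and pass/fail outcomes below are hypothetical and greatly simplified; they are not the actual forms tool content or code.

```ts
// A simplified, hypothetical decision graph for a forms question flow.
interface QuestionNode {
  id: string;
  text: string;
  // Each answer either leads to another question or ends with a result.
  answers: Record<string, { next: string } | { result: "pass" | "fail" }>;
}

const exampleLogic: QuestionNode[] = [
  {
    id: "has-visible-label",
    text: "Does this form field have a visible label?",
    answers: {
      yes: { next: "label-describes-purpose" },
      no: { result: "fail" },
    },
  },
  {
    id: "label-describes-purpose",
    text: "Does the label describe what the field is for?",
    answers: {
      yes: { result: "pass" },
      no: { result: "fail" },
    },
  },
];

// Walk the graph with a set of answers and return the outcome.
function evaluate(
  nodes: QuestionNode[],
  startId: string,
  givenAnswers: Record<string, string>
): "pass" | "fail" | "incomplete" {
  const byId = new Map(nodes.map((n): [string, QuestionNode] => [n.id, n]));
  let current = byId.get(startId);

  while (current) {
    const answer = givenAnswers[current.id];
    if (!answer) return "incomplete";

    const outcome = current.answers[answer];
    if (!outcome) return "incomplete";
    if ("result" in outcome) return outcome.result;
    current = byId.get(outcome.next);
  }
  return "incomplete";
}

// A field with a visible label that doesn't describe its purpose fails.
console.log(
  evaluate(exampleLogic, "has-visible-label", {
    "has-visible-label": "yes",
    "label-describes-purpose": "no",
  })
); // "fail"
```

Even in this toy form, the value of the flowchart is clear: the branching is much easier to discuss as a diagram than as code or as a wall of prose.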
Test Feasibility Mapping
One exercise I wanted to do this time around was to test the logic and the questions with users before any code was written. Previous versions of the tool were coded first and tested later, on the assumption that users wouldn’t be able to understand the tool without interacting with real forms on a real site. I wanted to try making simple interactive prototypes that would let me test the wording of the questions and the logic flow. I started by breaking the logic into chunks, then ranked each chunk by how difficult I thought it would be to prototype and how critical it would be to test.
Paper Prototypes
Based on my feasibility mapping, I chose two chunks of the logic to turn into paper prototypes: both were highly critical to test and no more than medium in prototyping difficulty. At this point, I wasn’t sure whether I would be able to prototype the logic well, so I started with quick paper prototypes. I tested them with coworkers and improved the questions between tests, ending up with multiple versions of each prototype.
Digital Prototypes
The paper prototypes proved to me that my approach would work, so I set about converting them into a digital format that I could test with many more people. I used Adobe XD for the first time on this project, and it worked out well, since I wasn’t modeling any intricate interactions. As with the paper prototypes, I tested the digital prototypes with coworkers to make sure they worked and to catch any glaring mistakes. The prototypes evolved greatly between the first paper versions and the final digital versions.
Unmoderated Testing
Most of the testing I did with coworkers was moderated – that is, I was in the room or on the phone with them as they went through the prototype. One of the primary things I wanted to try with these prototypes was unmoderated testing, which I had never done before. Much of what I wanted to measure was user success – were users able to answer the questions correctly? I felt that with a larger sample size of people using the tool, I would get a better idea of where most people went wrong when answering the questions. I knew the results wouldn’t tell me why participants answered a question incorrectly, but they would tell me which questions were the most problematic. The unmoderated testing was a success: I was able to pinpoint several questions that needed improvement.
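As an illustration of the kind of analysis this approach makes possible, here is a rough sketch of tallying per-question error rates from unmoderated sessions. The data shape and field names are hypothetical, not the export format of any tool I used.

```ts
// Hypothetical shape for one participant's answer to one question.
interface RecordedAnswer {
  questionId: string;
  participantId: string;
  correct: boolean; // did the answer match the expected "right" answer?
}

// Rank questions by how often participants answered them incorrectly.
function errorRates(
  answers: RecordedAnswer[]
): { questionId: string; errorRate: number }[] {
  const totals = new Map<string, { wrong: number; total: number }>();

  for (const a of answers) {
    const t = totals.get(a.questionId) ?? { wrong: 0, total: 0 };
    t.total += 1;
    if (!a.correct) t.wrong += 1;
    totals.set(a.questionId, t);
  }

  return [...totals.entries()]
    .map(([questionId, t]) => ({ questionId, errorRate: t.wrong / t.total }))
    .sort((a, b) => b.errorRate - a.errorRate);
}
```

A ranking like this points to which questions to investigate further; the “why” still has to come from moderated sessions.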
Reporting Findings
I put together a quick-and-dirty slide deck of my findings from the unmoderated testing, mainly reporting on two things: 1) which questions were most problematic and 2) whether any of the tools I’d used in trial mode for my research would be worth purchasing. I used the slide deck to communicate my findings to my team, and to inform future research plans. I find I often refer to older research when making decisions later on, so it was important for me to put all of the findings in a single document that I knew would be easy to skim through later.