Here you will find little stories about a variety of methods I have used. You will also see some lessons learned.
- Getting whooped!
- A failure am not I
- Traditional Usability Testing
- Information Architecture
- Low-Fidelity Paper Prototype Co-creation Sessions
- Conducting Un-moderated Interviews
- Co-participation: 2 for the price of 1… sort of.
Getting whooped!
In order to tell you what I learned at Ascension, I need to tell you this story first. I took second in the state wrestling tournament at my weight my junior year of high school, with a torn ACL. I was able to perform at a high level in the face of adversity. Much of the credit for being able to compete at that level goes to the work I did the summer before: I spent it getting whooped by the Ukrainian national champion at wrestling camp. I performed well that following season because I had been expected to perform well at camp. Couple an expectation of high performance with the training and resources to achieve it, and what comes out is growth and high performance. My time at Ascension was much like my time getting whooped by the Ukrainian. Being held to such a high expectation has felt humblingly uncomfortable. Much like my Ukrainian master, my managers have coached and guided me, while also offering the critique needed to perform. I am a better professional because of them. The lesson I’m drawing here is that although my time at Ascension came to an abrupt end, I am curious what new perspectives I can bring to new problem spaces, challenges, and teams. That perspective is driven by a vision for a finely tuned research practice, much like a wrestler fine-tunes themselves, a vision I learned from my time at Ascension and a value I want to bring into my next role.
A failure am not I
Looking at what I didn’t achieve that I wish I had is not something I like to do much. It’s quite uncomfortable, to say the least. When I state what I didn’t achieve, it sounds like I failed. My app got sunset, I was passed over for promotions, and even some of my smaller, more everyday ideas wouldn’t get traction. Learning to view my failures as outcomes I was trying to avoid (don’t get me started, psych majors), I stopped reading and philosophizing and started getting things into the tangible world… well, at least digitally tangible. Meaning, after being pressed to be concrete with my action plan, rather than settling for lofty, unmeasurable outcomes, the actions to get to the outcome I wanted became clearer. Throw that same action plan against a Gantt chart and we have a bit of capacity planning. In any case, it became clear that I didn’t fail and I didn’t need to let it deter me, but rather could use it as information, like a good scientist, to make a decision. I’m not saying I’m emotionless, but rather that I have a new perspective on what to do when I fail.
Traditional Usability Testing
One of my very first clients at the agency needed some traditional usability testing. They had analytics showing what customers were doing on their site, but they needed to know the why behind the data. They came in with a conversion rate below 1%. Ordinarily, this would be a perfect setting for formative usability testing, iterating after every 5 or so participants, or maybe even RITE testing (Rapid Iterative Testing and Evaluation), iterating after every 1 or 2 participants. But we didn’t have that luxury because time and money were factors. So we determined 30 participants would do the trick. Together, we developed an accurate prompt for participants to set the stage for each session. With the team, we determined the most important questions we needed answered, resulting in a moderator guide that also allowed for organic exploration and probing.
After all the data were collected, we got to work synthesizing the information. We produced a journey map of how participants made decisions, even defining a few possible customer segments based on their real-world settings. The team’s executive praised the journey map: it visualized a complex decision-making process.
The client team went back home and made iterations based on the information we collected together. In the end, they wound up with a conversion rate just above 8%. What a cool thing!
Information Architecture
Information architecture (IA) and navigation can be mistaken for one another… so I would learn. I had a client who asked for an evaluation of their IA. So, naturally, I indexed their website, threw high-level labels into a card sort, and came out on the other side with a new IA for their website. I then tested this new IA using tree testing, finding that findability and discoverability improved from 23% to 78%… pretty close to 80%, I thought. I presented my findings, only to find I had not answered their question. You see, I misinterpreted their question and did not ask for clarity. I was young and dumb. “How can I put all that information in a mega menu?” they asked. “A mega menu?” I thought, “why a mega menu?” Then it hit me. Oh my goodness, I messed up big time. What I did not clarify with the client was what they needed their IA evaluated for. If I had, I would have learned they never even used the term “information architecture”; what they needed was “architecture” for their navigation menu.
I dug in. Article after article, I learned that information architecture is the underlying system on top of which navigation sits. My client had no interest in redesigning their site’s architecture. They needed a navigation menu. So I went back to the drawing board, designed a closed card sort based on their current site’s architecture, and walked away with over 80% success in findability and discoverability after tree testing. Lesson learned.
Low-Fidelity Paper Prototype Co-creation Sessions
Some of the most fun I have had with research has been in workshop settings. I was in my second year at the bank. We had gotten our feet wet with low-fidelity prototyping, paper prototypes to be exact. We learned to love the immediate results we got by iterating in real time with participants. Up to that point, we had only conducted these with one participant at a time. Think traditional usability testing, but in a room, on paper, with a designer, a product manager, and me, the researcher. I’d lead the conversation, and my design and product partners would quietly discuss iterations to make to the prototype to align with the participant’s feedback.
But one day, three product teams came to me requesting research in this manner. “Uh oh,” I thought, “there’s no way I can get all these teams their data in the time they need.” So I decided to run a workshop. “Customer day,” we called it. We assembled the three teams around the prototypes. Each team had a designer or two, a product partner or two, and a developer. Each table was assigned a moderator, a “computer” (someone who acted as a computer with slow loading time, switching out the screens as participants navigated the paper prototype), and a note-taker. So you can imagine three tables, each with 3 to 5 people around it, plus a participant. Yours truly got to be the MC, helping keep the teams on track for time and answering any questions (never miss an opportunity to teach).
The whole event lasted 3 hours: three 40-minute sessions with 15-minute breaks. Participants rotated to a new table after each session, so each table got to see 3 participants that day, then another 3 on the second day, running the same session structure. In 2 days, we got each team 6 participants looking at their prototype, producing tons of insights and iterations. I was hooked. Workshops were efficient, energetic, and so much fun! Not to mention, we got so many of our product, design, and developer partners exposed to customer research! Those customer days became the norm for a bit. We must’ve run one or two a month for about 9 months or so. So cool!
Conducting Un-moderated Interviews
A lot of the time, we are under tight time constraints, so traditional methodologies aren’t suitable. There are a few remote testing tools out there; I had access to one. There is also a plethora of survey tools; I have my favorite because its survey logic is extremely robust. For this story, I want you to imagine you are four days from the end of a two-week sprint. You just found out, through a deep conversation about a potential future state, that your team was lacking a foundational understanding. You are pleased you and the team are thinking holistically, but you are stopped in your tracks because, to get the answer, you’d need deep conversations with at least 12 participants from a representative sample. Well, getting the participants isn’t the issue. It’s the amount of time needed to conduct each interview, analyze the data, draw insights, and deliver recommendations to the team.
That’s the situation I was in. So I decided to see if I could combine two tools. I could use the logic capabilities of my survey tool with the remote testing capabilities of my online testing tool. A test inside a test. Inception-style. But why? It was easy enough to have participants answer open-ended questions, but we had two segments that could start with the same questions. Based on the way participants identified themselves, the survey needed two branches of logic that would present separate sets of follow-up questions. Remote testing tools, at least qualitative ones, do not have branching logic, but survey tools do. So I set up a screener in the remote testing tool and ported participants over to the survey URL, where they recorded themselves answering each question. The logic worked great.
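For the technically curious, here is a minimal sketch of that branching idea in Python. It is purely illustrative: the segment names and questions are hypothetical placeholders, not the actual configuration or API of either tool.

```python
# Conceptual sketch only: hypothetical segments and questions,
# not the configuration of any real survey or remote testing tool.

SHARED_QUESTIONS = [
    "Tell me about the last time you tried to accomplish this task.",
]

FOLLOW_UP_BRANCHES = {
    "segment_a": ["Follow-up question written for segment A..."],
    "segment_b": ["Follow-up question written for segment B..."],
}

def question_flow(self_identified_segment: str) -> list:
    """Return the ordered questions a participant sees: the shared
    opener, then the branch matching how they identified themselves."""
    branch = FOLLOW_UP_BRANCHES.get(self_identified_segment, [])
    return SHARED_QUESTIONS + branch

# Example: a participant who identifies with segment B gets the shared
# opener plus segment B's follow-up questions.
print(question_flow("segment_b"))
```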
In the end, I got my team the data in the time they needed. And I learned to use two completely disparate tools together in a single testing environment. I had a lot of fun with that one.
Co-participation: 2 for the price of 1… sort of.
We were flying fast, pouring out new designs every sprint. But I felt the team was feeling a bit of monotony. The same method, low-fidelity paper prototyping with participant rotation workshops, was being used over and over, and two days of research plus synthesis time was getting to the team. So I looked for a way to break up the monotony. By this point, we had our customer segments pretty well defined, beyond demographics, bordering on true behavioral segmentation. There were three segments. Well, on this day, we had three teams needing research done. That was pretty typical. I decided to try a new method with them, one whose mechanics were very familiar to the teams. You see, the teams were now well versed in how to conduct paper prototype sessions. They were iteration champions, extracting feedback from participants in a workshop like they had been born and raised to do it. So I told them, “What if we could get a different flavor of data without compromising the amount of feedback we feel we need to be confident in the data in the first place?” Head tilts and optimism.
Co-participation. I’d never tried it before. But the idea of having two participants at a table at once seemed exciting to me and my teams. On the day of the workshop, I paired like participants together, randomly assigned each pair to a table, and instructed them to solve the problems as a pair.
Well, best-laid plans, right? We plan for no-shows. But what I didn’t plan for was those no-shows coming from completely different customer segments. I had one congruent pair of participants; the rest were from different segments. Luckily, our knowledge of customer segmentation was multi-dimensional, so I could get close, albeit not perfect, to matched pairs of participants. The results were quite interesting.
Conversations at each table started as normal. But, as participants began to work together, differences began to emerge. They were arguing! How cool! Not arguing in the sense that they were mad; they were engaged in deep conversation about why each would solve the problem differently! There isn’t a method out there that could have gotten us this kind of data, which we got by the pure chance of having the wrong no-shows.
In the end, the teams walked away not only with rich data and iterations to their prototypes, but also with a deeper understanding of the differences between our customer segments. We got to witness those differences with our own eyes.