About Me

I’m a mixed-methods researcher and measurement specialist with over 10 years of experience in industry and academia.


I take a user-centric approach to measurement, incorporating the population of interest into scale construction and validation to ensure that users’ beliefs and behaviors are accurately represented. This gives designers and industry leaders the power to make evidence-based decisions on a large scale, improving the quality and usability of their products.


I am passionate about making quantitative methods accessible and about improving data quality through thoughtful research design and open conversations with stakeholders.


I am comfortable and experienced with both qualitative and quantitative methods, and I always choose the method best suited to the research question.


Quantitative Methods

Linear Regression, Logistic Regression, Structural Equation Modeling, Factor Analysis, Item Response Theory, Mixed Effects Regression, Mixture Modeling, Network Analysis, ANOVA, Causal Inference, Data Visualization, Card Sorting, Benchmarking, A/B Testing, Cluster Analysis

Qualitative Methods

User Interviews, Cognitive Interviews, Focus Groups, Usability Testing, Ethnography, Diary Studies, Concept Testing, Grounded Theory, Brainstorming

Software

R, SPSS, Mplus, Shiny, Excel, Stata, Qualtrics, SurveyMonkey, Google Analytics, Figma, MTurk, UserTesting, Python, HLM, G*Power, GitHub


Study 1: RPE Method

Problem: Surveys are useful only insofar as survey items are understood by users as researchers and designers intended. However, we rarely test item interpretability before administering surveys and analyzing the resulting data.

Solution: To remedy this, I developed an iterative, mixed-methods approach to survey item validation called the Response Process Evaluation (RPE) Method. This approach is quick and cost-effective, and it yields evidence-based item development that helps us understand, on a large scale, (1) what we’re doing well and (2) where we have room to improve.


How it works: The RPE Method turns user interviews into meta-surveys through open-ended meta-questions: users are asked to paraphrase survey items and to explain why they selected a particular response option. Responses are then coded for interpretability, and misunderstood items can be rewritten and tested again in a new sample of users. The process also generates rich qualitative data that can be used to better understand users’ wants and needs.
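
To illustrate the coding step, here is a minimal Python sketch. The item IDs, the "match"/"mismatch" codes, and the 80% threshold are all hypothetical choices for illustration, not specifics of the RPE Method itself.

```python
from collections import defaultdict

# Hypothetical coded interview data: (item_id, code) pairs, where each code
# records whether a user's paraphrase matched the item's intended meaning.
coded_responses = [
    ("item_1", "match"), ("item_1", "match"), ("item_1", "match"),
    ("item_2", "mismatch"), ("item_2", "mismatch"), ("item_2", "match"),
]

def interpretability_rates(responses):
    """Share of users whose paraphrase matched each item's intended meaning."""
    counts = defaultdict(lambda: [0, 0])  # item -> [matches, total]
    for item, code in responses:
        counts[item][1] += 1
        if code == "match":
            counts[item][0] += 1
    return {item: matches / total for item, (matches, total) in counts.items()}

def flag_for_revision(rates, threshold=0.8):
    """Items falling below the threshold are candidates for rewriting."""
    return sorted(item for item, rate in rates.items() if rate < threshold)

rates = interpretability_rates(coded_responses)
print(flag_for_revision(rates))  # -> ['item_2']
```

Flagged items would then be revised and run through another round of interviews with a fresh sample of users.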


Situation: As a Measurement Consultant for the John Templeton Foundation, I guided an interdisciplinary team to develop survey items for English-speaking Americans and Hindi-speaking Indians that were understood cross-culturally.

Had we not used the RPE Method to test the interpretability of our items and revise them based on user feedback, we would have drawn incorrect conclusions from the data.


Dissemination: By request, I have led several instructional workshops on the RPE Method in California, Arizona, London, British Columbia, and the Netherlands, attended by several hundred participants in total.


Study 2: Measuring UX

Study 3: Shiny App

As a quantitative researcher, one of my goals is to make complex statistical models more user-friendly and accessible. To do this, I built a simple point-and-click app that computes custom fit-index cutoffs to help researchers determine whether their model fits their data.


This app takes advantage of modern computing to make Monte Carlo simulations accessible to all researchers. A task that used to require hundreds of lines of code can now be accomplished with a single click. Importantly, this app helps researchers make unbiased, data-driven decisions.
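
As a rough sketch of the general idea (not the app’s actual implementation), the Python snippet below derives an empirical cutoff by simulating a fit statistic under a hypothesized model. The toy Normal model, the discrepancy statistic, and the 95% quantile are illustrative assumptions; a real application would simulate full datasets and compute SEM fit indices such as RMSEA or SRMR.

```python
import random
import statistics

def monte_carlo_cutoff(simulate_stat, n_reps=2000, quantile=0.95, seed=42):
    """Empirical cutoff: simulate the fit statistic under the hypothesized
    model many times, then take the chosen quantile of its distribution."""
    rng = random.Random(seed)
    stats = sorted(simulate_stat(rng) for _ in range(n_reps))
    return stats[int(quantile * (len(stats) - 1))]

# Toy "model" for illustration: data are Normal(MU, SIGMA); the discrepancy
# statistic is the absolute standardized difference between the sample mean
# and the model-implied mean.
MU, SIGMA, N = 0.0, 1.0, 50

def simulate_stat(rng):
    sample = [rng.gauss(MU, SIGMA) for _ in range(N)]
    return abs(statistics.mean(sample) - MU) / (SIGMA / N ** 0.5)

cutoff = monte_carlo_cutoff(simulate_stat)

# Decision rule: an observed statistic above the cutoff suggests the
# hypothesized model does not fit the observed data.
observed = 1.2
print(observed > cutoff)  # -> False (1.2 falls below the simulated cutoff)
```

Because the cutoff is simulated under the researcher’s own model and sample size, it adapts to each analysis instead of relying on one-size-fits-all rules of thumb.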


Reducing Bug Reports: I incorporated user feedback to craft error messages that help users resolve mistakes independently. Since implementing these messages five months ago, I have received only one bug report from more than 1,800 users.


Reacting to User Feedback: I connected with app users to learn how I could make the app easier to use. While users reported that the UI was simple and easy to navigate, I found that some still struggled to understand the methodology deployed on the backend. This led me to create a help-page tutorial (structured in a Q&A format) that I am currently testing with users.


My app has been visited by more than 4,500 users from 79 countries. The corresponding journal article has been viewed more than 5,000 times, cited 50 times, and ranks in the top 5% of all research outputs scored by Altmetric.


Study 4: Think Alouds

UX studies often begin with inclusion criteria questions and important instructions for participants. If users misread these directions, we risk ending up with an inaccurate sample and unusable data, or drawing incorrect conclusions from our research.