Krzysztof Gajos

Information For Harvard Undergraduates Interested in Joining a Research Project

If you are a Harvard undergraduate student interested in doing a research project (or a senior thesis) in Human-Computer Interaction (including topics like intelligent interactive systems, accessibility, crowdsourcing, creativity), please talk to me. I have a number of project ideas that you can contribute to, or you can propose your own.

In general, the best time to join a research group is at the end of your sophomore year or at the beginning of your junior year. By that time, you should have enough technical background to contribute to a project, and still enough time left at Harvard to see the fruits of your labor. Ideal candidates would have taken at least one of CS 179, CS 171, CS 181, or CS 182. If you are a junior or senior and you are serious about pursuing research in HCI, I encourage you to take CS 279, a graduate class that will introduce you to the current research topics and the main research methods in HCI. The final project in CS 279 is often a great first step toward your own independent research project.

Below you can see examples of projects led by undergraduates (some were done as senior theses, others just for fun) and projects to which undergrads made significant contributions:

Adaptive Click and Cross: Adapting to Both Abilities and Task to Improve Performance of Users With Impaired Dexterity

Adaptive Click-and-Cross is an interaction technique for computer users with impaired dexterity. It combines three "adaptive" approaches that have previously appeared separately in the literature: adapting the user to the interface (i.e., modifying the way the cursor works), adapting the user interface to the user's abilities (i.e., enlarging items), and adapting the user interface to the user's task (i.e., moving frequently or recently used items to a convenient location). By combining these three adaptations, Adaptive Click-and-Cross minimizes each approach's shortcomings: it selectively enlarges items predicted to be useful to the user, while a modified cursor keeps the remaining, smaller items accessible.
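To make the idea concrete, here is a minimal sketch in Python of the kind of adaptation policy described above. It is purely illustrative: the scoring function, decay constant, and item names are invented, not taken from the actual system.

```python
# Purely illustrative: score items by decayed click frequency, enlarge the
# top few, and leave the rest to the modified cursor.

def usefulness_scores(click_history, decay=0.9):
    """Score each item by exponentially decayed click frequency, so that
    both recent and frequent clicks raise an item's score."""
    scores = {}
    weight = 1.0
    for item in reversed(click_history):  # walk from most recent click back
        scores[item] = scores.get(item, 0.0) + weight
        weight *= decay
    return scores

def adapt_layout(items, click_history, n_enlarged=3):
    """Partition items: the few top-scoring ones get enlarged targets in a
    convenient location; the rest stay small and are reached with the
    modified (click-and-cross) cursor."""
    scores = usefulness_scores(click_history)
    ranked = sorted(items, key=lambda i: scores.get(i, 0.0), reverse=True)
    enlarged = ranked[:n_enlarged]
    small = [i for i in items if i not in enlarged]
    return enlarged, small

enlarged, small = adapt_layout(
    ["open", "save", "print", "undo", "paste"],
    ["save", "paste", "save", "open", "save"],
)
```

With this invented history, "save" dominates the score and ends up among the enlarged targets, while untouched items like "print" and "undo" stay small.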

Louis Li and Krzysztof Z. Gajos. Adaptive Click-and-Cross: Adapting to Both Abilities and Task Improves Performance of Users With Impaired Dexterity. In Proceedings of IUI 2014, 2014. To appear.
[Abstract, BibTeX, etc.]

Louis Li. Adaptive Click-and-Cross: An Interaction Technique for Users with Impaired Dexterity. In Proceedings of the 15th International ACM SIGACCESS Conference on Computers and Accessibility, ASSETS '13, pages 79:1-79:2, New York, NY, USA, 2013. ACM.
[Abstract, BibTeX, etc.]


Curio: A Platform for Crowdsourcing Research Tasks in the Sciences and Humanities

Curio is a platform, currently in development, for crowdsourcing research tasks in the sciences and humanities. It is designed to let researchers create and launch a new crowdsourcing project within minutes, and to monitor and control the crowdsourcing process with minimal effort. With Curio, we are exploring a new model of citizen science that significantly lowers the barrier to entry for scientists, developing new interfaces and algorithms for supporting mixed-expertise crowdsourcing, and investigating a variety of human computation questions related to task decomposition, incentive design, and quality control.

We expect to launch Curio soon. Sign up if you want to be notified when it comes online.

Edith Law, Conner Dalton, Nick Merrill, Albert Young, and Krzysztof Z. Gajos. Curio: A Platform for Supporting Mixed-Expertise Crowdsourcing. In Proceedings of HCOMP 2013. AAAI Press, 2013. To appear.
[Abstract, BibTeX, etc.]


InProv: A Filesystem Provenance Visualization Tool

InProv is a filesystem provenance visualization tool that displays provenance data with an interactive radial tree layout. It also uses a new time-based hierarchical node grouping method that we developed for filesystem provenance data to match the user's mental model and make data exploration more intuitive. In an experiment comparing InProv to a visualization based on the node-link representation, participants using InProv made more accurate assessments of provenance, and they found InProv to require less mental effort, less physical activity, and less work, and to be less stressful to use.
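As a rough illustration of time-based hierarchical grouping (not InProv's actual algorithm), the sketch below buckets provenance events by day and then by hour, producing the kind of hierarchy a radial tree layout could let users expand interactively. The event data and the day/hour granularities are invented.

```python
from collections import defaultdict
from datetime import datetime, timezone

def group_by_time(events):
    """events: list of (unix_timestamp, node_id) pairs.
    Returns a two-level hierarchy: {day: {hour: [node ids]}}."""
    tree = defaultdict(lambda: defaultdict(list))
    for ts, node in events:
        t = datetime.fromtimestamp(ts, tz=timezone.utc)
        tree[t.strftime("%Y-%m-%d")][t.strftime("%H:00")].append(node)
    return tree

# Two events an hour apart land under the same day but different hour buckets.
tree = group_by_time([(0, "backup.log"), (3600, "report.tex")])
```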

Michelle A. Borkin, Chelsea S. Yeh, Madelaine Boyd, Peter Macko, Krzysztof Z. Gajos, Margo Seltzer, and Hanspeter Pfister. Evaluation of filesystem provenance visualization tools. IEEE Transactions on Visualization and Computer Graphics, 19(12):2476-2485, 2013.
[Abstract, BibTeX, etc.]


Predicting Users' First Impressions of Website Aesthetics

Users make lasting judgments about a website's appeal within a split second of seeing it for the first time. This first impression is influential enough to later affect their opinion of a site's usability and trustworthiness. In this project, we aim to automatically adapt website aesthetics to users' various preferences in order to improve this first impression. As a first step, we are working on predicting what people find appealing, and how this is influenced by their demographic backgrounds.
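As an illustration of the kind of model involved in this first step, the sketch below fits a plain least-squares model predicting an appeal rating from two perceived-design features. The feature choice echoes the paper title (visual complexity and colorfulness), but the ratings and coefficients are invented, and the fitting code is generic OLS, not the project's actual model.

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    n = 3
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_appeal_model(complexity, colorfulness, appeal):
    """Ordinary least squares for: appeal ~ b0 + b1*complexity + b2*colorfulness."""
    X = [[1.0, cx, cf] for cx, cf in zip(complexity, colorfulness)]
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(3)] for i in range(3)]
    Xty = [sum(row[i] * y for row, y in zip(X, appeal)) for i in range(3)]
    return solve3(XtX, Xty)

# Invented ratings that follow appeal = 5 - 0.5*complexity + 0.2*colorfulness
b0, b1, b2 = fit_appeal_model([1, 2, 3, 4], [2, 1, 4, 3], [4.9, 4.2, 4.3, 3.6])
```

A demographic-aware version would add background variables (e.g., age, country) as further predictors or fit per-group models.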

Katharina Reinecke and Krzysztof Z. Gajos. Quantifying Visual Preferences Around the World. In Proceedings of CHI 2014, 2014. To appear.
[Abstract, BibTeX, etc.]

Katharina Reinecke, Tom Yeh, Luke Miratrix, Rahmatri Mardiko, Yuechen Zhao, Jenny Liu, and Krzysztof Z. Gajos. Predicting users' first impressions of website aesthetics with a quantification of perceived visual complexity and colorfulness. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '13, pages 2049-2058, New York, NY, USA, 2013. ACM.   Honorable Mention  
[Abstract, BibTeX, Data, etc.]


Lab in the Wild

Most of what we know about human-computer interaction today is based on studies conducted with Western participants, usually American undergrads. This is despite many findings that our cultural background affects our perception and preferences. Neuroscience research has even shown that cultural exposure leads to differences in neural activity -- a finding that may also have implications for how we interact with computers. If people around the world perceive, process, and interact with information differently, then what should their user interfaces look like in order to be most intuitive for them to use?

With Lab in the Wild we are trying to shed light on this question. Our goal is to improve the user experience and performance of computer users around the world. But Lab in the Wild doesn't just help us answer our questions: it also provides participants with personalized feedback, letting them compare their performance with that of people from other countries. Try it out :)

Here are some of the most recent papers that relied on the data collected on Lab in the Wild:

Katharina Reinecke and Krzysztof Z. Gajos. Quantifying Visual Preferences Around the World. In Proceedings of CHI 2014, 2014. To appear.
[Abstract, BibTeX, etc.]

Katharina Reinecke, Tom Yeh, Luke Miratrix, Rahmatri Mardiko, Yuechen Zhao, Jenny Liu, and Krzysztof Z. Gajos. Predicting users' first impressions of website aesthetics with a quantification of perceived visual complexity and colorfulness. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '13, pages 2049-2058, New York, NY, USA, 2013. ACM.   Honorable Mention  
[Abstract, BibTeX, Data, etc.]


Accurate Measurements of Pointing Performance from In Situ Observations

We present a method for obtaining lab-quality measurements of pointing performance from unobtrusive observations of natural, in situ interactions. Specifically, we developed a set of user-independent classifiers that discriminate between deliberate, targeted mouse pointer movements and movements affected by extraneous factors. Our results show that, on four distinct metrics, the data collected in situ and filtered with our classifiers closely match the results obtained from a formal lab experiment.
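The sketch below illustrates the general idea of filtering in situ pointer data with simple kinematic features. The features and thresholds here are invented stand-ins; the actual work uses trained, user-independent classifiers rather than fixed rules.

```python
import math

def path_features(points):
    """points: list of (x, y, t) samples along a pointer trajectory.
    Returns straightness (straight-line distance over path length) and the
    dwell time between the last two samples."""
    path_len = sum(
        math.dist(points[i][:2], points[i + 1][:2]) for i in range(len(points) - 1)
    )
    direct = math.dist(points[0][:2], points[-1][:2])
    straightness = direct / path_len if path_len else 1.0
    dwell = points[-1][2] - points[-2][2]  # hesitation right before the click
    return straightness, dwell

def is_deliberate(points, min_straightness=0.7, max_dwell=0.5):
    """Heuristic stand-in for a trained classifier: targeted movements tend
    to be fairly straight and to end without a long hesitation."""
    straightness, dwell = path_features(points)
    return straightness >= min_straightness and dwell <= max_dwell

# A nearly straight, quick movement vs. a wandering one with a long pause.
deliberate = is_deliberate([(0, 0, 0.0), (60, 2, 0.1), (120, 0, 0.2)])
wandering = is_deliberate([(0, 0, 0.0), (50, 50, 0.2), (0, 100, 0.4), (100, 100, 2.0)])
```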

Krzysztof Gajos, Katharina Reinecke, and Charles Herrmann. Accurate measurements of pointing performance from in situ observations. In Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems, CHI '12, pages 3157-3166, New York, NY, USA, 2012. ACM.
[Abstract, BibTeX, Authorizer, Data and Source Code, etc.]


PlateMate: Crowdsourcing Nutrition Analysis from Food Photographs

PlateMate allows users to take photos of their meals and receive estimates of food intake and composition. Accurate awareness of this information is considered a prerequisite to successful change of eating habits, but current methods for food logging via self-reporting, expert observation, or algorithmic analysis are time-consuming, expensive, or inaccurate. PlateMate crowdsources nutritional analysis from photographs using Amazon Mechanical Turk, automatically coordinating untrained workers to estimate a meal's calories, fat, carbohydrates, and protein. To make PlateMate possible, we developed the Management framework for crowdsourcing complex tasks, which supports PlateMate's decomposition of the nutrition analysis workflow. Two evaluations show that the PlateMate system is nearly as accurate as a trained dietitian and easier to use for most users than traditional self-reporting, while remaining robust for general use across a wide variety of meal types.
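A toy sketch of the decompose-and-aggregate pattern behind such a workflow is below. The stage names, quorum rule, and data are invented for illustration; this is not PlateMate's actual Management framework.

```python
from statistics import median

def aggregate_foods(tag_answers):
    """Stage 1 (identify foods): keep foods reported by a majority of workers."""
    counts = {}
    for answer in tag_answers:
        for food in set(answer):  # each worker counts once per food
            counts[food] = counts.get(food, 0) + 1
    quorum = len(tag_answers) / 2
    return sorted(f for f, c in counts.items() if c > quorum)

def aggregate_calories(estimates):
    """Stage 2 (measure): the median is robust to a careless worker's outlier."""
    return median(estimates)

foods = aggregate_foods([
    ["burger", "fries"],
    ["burger", "fries", "pickle"],
    ["burger", "fries"],
])
calories = aggregate_calories([540, 580, 1200])  # one wildly high estimate
```

The same redundancy-plus-aggregation idea extends to the other nutrients (fat, carbohydrates, protein) by running the measurement stage once per field.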

Jon Noronha, Eric Hysen, Haoqi Zhang, and Krzysztof Z. Gajos. PlateMate: Crowdsourcing Nutrition Analysis from Food Photographs. In Proceedings of the 24th annual ACM symposium on User interface software and technology, UIST '11, pages 1-12, New York, NY, USA, 2011. ACM.
[Abstract, BibTeX, Authorizer, Data, etc.]


PETALS Project -- A Visual Decision Support Tool For Landmine Detection

Landmines remain in conflict areas for decades after the end of hostilities. Their suspected presence renders vast tracts of land unusable for development and agriculture, causing significant psychological and economic damage. Landmine removal is a slow and dangerous process. Compounding the difficulty, modern landmines use minimal amounts of metallic content, making them very hard to detect and to distinguish from other metallic debris (such as bullet shells, wires, etc.) frequently present in post-combat areas. Recent research has demonstrated that the accuracy of landmine detection can be improved if deminers mentally represent the shape of the area where the metal detector's response is triggered: despite similar amounts of metallic content, mines and clutter result in areas of different shapes. Building on these findings, we have created a visual decision support tool that presents the deminer with an explicit visualization of the shapes of these response areas. The results of our study demonstrate that this tool significantly improves novice deminers' detection rates and localization accuracy.

Lahiru G. Jayatilaka, Luca F. Bertuccelli, James Staszewski, and Krzysztof Z. Gajos. Evaluating a Pattern-Based Visual Support Approach for Humanitarian Landmine Clearance. In CHI '11: Proceeding of the annual SIGCHI conference on Human factors in computing systems, New York, NY, USA, 2011. ACM.
[Abstract, BibTeX, Authorizer, etc.]

Lahiru G. Jayatilaka, Luca F. Bertuccelli, James Staszewski, and Krzysztof Z. Gajos. PETALS: a visual interface for landmine detection. In Adjunct proceedings of the 23rd annual ACM symposium on User interface software and technology, UIST '10, pages 427-428, New York, NY, USA, 2010. ACM.
[Abstract, BibTeX, Authorizer, etc.]


Automatic Task Design on Amazon Mechanical Turk

A central challenge in human computation is understanding how to design task environments that effectively attract participants and coordinate the problem-solving process. We consider a common problem that requesters face on Amazon Mechanical Turk: how should a task be designed so as to induce good output from workers? In posting a task, a requester decides how to break the task down into unit tasks, how much to pay for each unit task, and how many workers to assign to each unit task. These design decisions affect the rate at which workers complete unit tasks, as well as the quality of the resulting work.

Using image labeling as an example task, we consider the problem of designing the task to maximize the number of quality tags received within given time and budget constraints. We consider two different measures of work quality, and we construct models for predicting the rate and quality of work from observations of the output produced under various designs. Preliminary results show that simple models can accurately predict the quality of output per unit task, but are less accurate in predicting the rate at which unit tasks complete. At a fixed rate of pay, our models generate different designs depending on the quality metric, and the optimized designs obtain significantly more quality tags than baseline comparisons.
[Related paper]
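The design-optimization framing above can be sketched as a small search over candidate designs. Everything in this snippet -- the stand-in quality model, the candidate pay levels, and the budget -- is invented for illustration; the paper fits its prediction models from observed worker output rather than assuming one.

```python
def predicted_quality_tags(pay_cents, workers):
    """Invented stand-in model with diminishing returns in pay and redundancy."""
    return min(pay_cents, 10) * 0.3 + min(workers, 5) * 0.8

def best_design(budget_cents, pays=(2, 5, 10), worker_counts=(1, 3, 5)):
    """Enumerate (pay, workers-per-task) designs and pick the one that
    maximizes the predicted total number of quality tags within the budget."""
    best = None
    for pay in pays:
        for workers in worker_counts:
            cost_per_task = pay * workers
            n_tasks = budget_cents // cost_per_task  # unit tasks affordable
            total = n_tasks * predicted_quality_tags(pay, workers)
            if best is None or total > best[0]:
                best = (total, pay, workers, n_tasks)
    return best

total, pay, workers, n_tasks = best_design(1000)  # a $10 budget, in cents
```

Under this toy model, many cheap single-worker tasks beat fewer expensive redundant ones; a different quality metric (or fitted model) can flip that conclusion, which is exactly the trade-off the project studies.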