If you are a Harvard undergraduate student interested in doing a research project (or a senior thesis) in Human-Computer Interaction (including topics like intelligent interactive systems, accessibility, crowdsourcing, creativity), please talk to me. I have a number of project ideas that you can contribute to, or you can propose your own.
In general, the best time to join a research group is at the end of your sophomore year or at the beginning of your junior year. By that time, you should have enough technical background to contribute to a project, and still enough time left at Harvard to see the fruits of your labor. Ideal candidates would have taken at least one of CS 179, CS 171, CS 181, or CS 182. If you are a junior or senior and you are serious about pursuing research in HCI, I encourage you to take CS 279, a graduate class that will introduce you to the current research topics and the main research methods in HCI. The final project in CS 279 is often a great first step toward your own independent research project.
Below you can see examples of projects led by undergraduates (some were done as senior theses, others just for fun) and projects where undergrads made significant contributions:
Lab in the Wild
Most of what we know about human-computer interaction today is based on studies conducted with Western participants, usually American undergraduates. This is despite ample evidence that our cultural background affects our perception and preferences. Neuroscience research has even shown that cultural exposure leads to differences in neural activity -- a finding that may well affect how we interact with computers. If people around the world perceive, process, and interact with information differently, what should their user interfaces look like in order to be most intuitive for them to use?
With Lab in the Wild, we are trying to shed light on this question. Our goal is to improve the user experience and performance of computer users around the world. But Lab in the Wild doesn't just help us answer our questions: it also provides participants with personalized feedback, letting them compare their performance to that of people from other countries. Try it out :)
Accurate Measurements of Pointing Performance from In Situ Observations
We present a method for obtaining lab-quality measurements of pointing performance from unobtrusive observations of natural in situ interactions. Specifically, we have developed a set of user-independent classifiers for discriminating between deliberate, targeted mouse pointer movements and movements affected by extraneous factors. Our results show that, on four distinct metrics, the data collected in situ and filtered with our classifiers closely match the results obtained from the formal experiment.
[Related paper] [Source Code and Data]
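The core idea of filtering targeted movements can be illustrated with a minimal sketch. The features and thresholds below are hypothetical placeholders, not the trained, user-independent classifiers from the paper:

```python
import math

def movement_features(points):
    """Compute simple kinematic features from one pointer movement,
    given as a list of (x, y, t) samples ending in a click."""
    (x0, y0, t0), (x1, y1, t1) = points[0], points[-1]
    duration = t1 - t0
    direct = math.hypot(x1 - x0, y1 - y0)
    path = sum(math.hypot(b[0] - a[0], b[1] - a[1])
               for a, b in zip(points, points[1:]))
    # Straightness is close to 1 for deliberate, targeted movements.
    straightness = direct / path if path > 0 else 0.0
    return duration, straightness

def is_deliberate(points, max_duration=2.0, min_straightness=0.7):
    """Heuristic filter: keep movements that are quick and fairly
    straight. Both thresholds are illustrative, not learned."""
    duration, straightness = movement_features(points)
    return duration <= max_duration and straightness >= min_straightness

# A short, straight motion toward a target vs. a slow, meandering one.
direct_move = [(0, 0, 0.0), (50, 5, 0.2), (100, 10, 0.4)]
wandering   = [(0, 0, 0.0), (80, 90, 1.0), (10, 20, 2.5), (100, 10, 4.0)]
print(is_deliberate(direct_move))  # True
print(is_deliberate(wandering))    # False
```

In the actual project the discrimination is learned from labeled data rather than hand-tuned, but the input (pointer trajectories observed in the wild) and the output (a keep/discard decision per movement) have the same shape as in this sketch.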
PlateMate: Crowdsourcing Nutrition Analysis from Food Photographs
PlateMate allows users to take photos of their meals and receive estimates of food intake and composition. Accurate awareness of this information is considered a prerequisite to successful change of eating habits, but current methods for food logging via self-reporting, expert observation, or algorithmic analysis are time-consuming, expensive, or inaccurate. PlateMate crowdsources nutritional analysis from photographs using Amazon Mechanical Turk, automatically coordinating untrained workers to estimate a meal's calories, fat, carbohydrates, and protein. To make PlateMate possible, we developed the Management framework for crowdsourcing complex tasks, which supports PlateMate's decomposition of the nutrition analysis workflow. Two evaluations show that the PlateMate system is nearly as accurate as a trained dietitian and easier to use for most users than traditional self-reporting, while remaining robust for general use across a wide variety of meal types.
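The coordination pattern -- decomposing a complex estimate into simple crowd steps and aggregating redundant answers -- can be sketched as follows. The stage names and the median-aggregation rule are illustrative assumptions, not PlateMate's actual workflow:

```python
from statistics import median

def aggregate_estimates(worker_estimates):
    """Combine redundant worker answers per nutrient by taking the
    median, which is robust to a single careless worker."""
    nutrients = worker_estimates[0].keys()
    return {n: median(w[n] for w in worker_estimates) for n in nutrients}

def analyze_photo(photo, identify, measure, n_workers=3):
    """Toy pipeline: one crowd stage identifies the foods in a photo,
    a second stage estimates each food's nutrition, and redundant
    estimates are aggregated into a meal total."""
    foods = identify(photo)  # e.g. ["pasta", "salad"]
    meal_total = {"calories": 0, "fat": 0, "carbs": 0, "protein": 0}
    for food in foods:
        estimates = [measure(food) for _ in range(n_workers)]
        portion = aggregate_estimates(estimates)
        for n in meal_total:
            meal_total[n] += portion[n]
    return meal_total
```

In the real system, `identify` and `measure` would each be implemented by posting tasks to Mechanical Turk workers; here they are just callables, so the coordination logic can be seen on its own.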
PETALS Project -- A Visual Decision Support Tool For Landmine Detection
Landmines remain in conflict areas for decades after the end of hostilities. Their suspected presence renders vast tracts of land unusable for development and agriculture, causing significant psychological and economic damage. Landmine removal is a slow and dangerous process. Compounding the difficulty, modern landmines use minimal amounts of metallic content, making them very hard to detect and to distinguish from other metallic debris (such as bullet shells, wires, etc.) frequently present in post-combat areas. Recent research has demonstrated that the accuracy of landmine detection can be improved if deminers try to mentally represent the shape of the area where the metal detector's response gets triggered: despite similar amounts of metallic content, mines and clutter result in response areas of different shapes. Building on these findings, we have created a visual decision support tool that presents the deminer with an explicit visualization of the shapes of these response areas. The results of our study demonstrate that this tool significantly improves novice deminers' detection rates and localization accuracy.
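The visualization idea can be sketched in a few lines: sample the detector's response over a grid of sweep positions and render the cells where it triggers, so the shape of the response area becomes explicit. The readings and threshold below are made-up toy values, not data from the project:

```python
def response_shape(readings, threshold):
    """Render a grid of metal-detector readings as an explicit shape:
    cells whose response meets the threshold are marked '#'. A mine
    tends to produce a compact, roughly symmetric blob, while clutter
    such as a wire produces an elongated or irregular one."""
    return "\n".join(
        "".join("#" if r >= threshold else "." for r in row)
        for row in readings
    )

# Illustrative readings only: a compact blob vs. a thin streak.
mine_like = [
    [0, 1, 1, 0],
    [1, 3, 3, 1],
    [1, 3, 3, 1],
    [0, 1, 1, 0],
]
wire_like = [
    [0, 0, 0, 0],
    [3, 3, 3, 3],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
]
print(response_shape(mine_like, threshold=2))
print(response_shape(wire_like, threshold=2))
```

The two rendered shapes make the mine/clutter distinction visible at a glance, which is the judgment the tool supports deminers in making.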
Automatic Task Design on Amazon Mechanical Turk
A central challenge in human computation is understanding how to design task environments that effectively attract participants and coordinate the problem-solving process. We consider a common problem that requesters face on Amazon Mechanical Turk: how should a task be designed so as to induce good output from workers? In posting a task, a requester decides how to break the task down into unit tasks, how much to pay for each unit task, and how many workers to assign to each unit task. These design decisions affect the rate at which workers complete unit tasks, as well as the quality of the resulting work. Using image labeling as an example task, we consider the problem of designing the task to maximize the number of quality tags received within given time and budget constraints. We consider two different measures of work quality and construct models for predicting the rate and quality of work from observations of output under various designs. Preliminary results show that simple models can accurately predict the quality of output per unit task, but are less accurate in predicting the rate at which unit tasks complete. At a fixed rate of pay, our models generate different designs depending on the quality metric, and optimized designs obtain significantly more quality tags than baseline comparisons.
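The optimization at the heart of this project can be sketched as a search over candidate designs (pay per unit task, workers per unit task, images per unit task). The rate and quality curves below are made-up placeholders standing in for the fitted models, and the numbers carry no empirical meaning:

```python
def expected_quality_tags(pay, workers, images_per_hit, budget, hours):
    """Toy model scoring one design. Assumptions: higher pay attracts
    workers faster (with diminishing returns), and redundancy improves
    per-image quality up to a ceiling. Both curves are illustrative."""
    cost_per_hit = pay * workers
    affordable_hits = int(budget / cost_per_hit)
    hits_per_hour = 20 * (pay / 0.05) ** 0.5      # placeholder rate model
    completed = min(affordable_hits, int(hits_per_hour * hours))
    quality_per_image = min(1.0, 0.4 + 0.2 * workers)  # placeholder quality model
    return completed * images_per_hit * quality_per_image

def best_design(budget=20.0, hours=8):
    """Grid-search over pay, redundancy, and unit-task size, returning
    the design that maximizes expected quality tags under the models."""
    candidates = [(p, w, n)
                  for p in (0.01, 0.02, 0.05, 0.10)
                  for w in (1, 2, 3)
                  for n in (1, 5, 10)]
    return max(candidates,
               key=lambda d: expected_quality_tags(*d, budget, hours))
```

With the real fitted models in place of the placeholders, the same search structure captures the trade-off the abstract describes: paying more speeds completion but exhausts the budget sooner, while extra redundancy buys quality at the cost of fewer distinct unit tasks.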