If you are a Harvard undergraduate student interested in doing a research project (or a senior thesis) in Human-Computer Interaction (including topics like intelligent interactive systems, accessibility, crowdsourcing, creativity), please talk to me. I have a number of project ideas that you can contribute to, or you can propose your own.
In general, the best time to join a research group is at the end of your sophomore year or at the beginning of your junior year. By that time, you should have enough technical background to contribute to a project, and still have enough time left at Harvard to see the fruits of your labor. Ideal candidates would have taken at least one of CS 179, CS 171, CS 181, or CS 182. If you are a junior or senior and you are serious about pursuing research in HCI, I encourage you to take CS 279, a graduate class that will introduce you to current research topics and the main research methods in HCI. The final project in CS 279 is often a great first step toward your own independent research project.
Below are examples of projects led by undergraduates (some were done as senior theses, others just for fun) and projects where undergrads made significant contributions:
Adaptive Click and Cross: Adapting to Both Abilities and Task to Improve Performance of Users With Impaired Dexterity
Adaptive Click-and-Cross is an interaction technique for computer users with impaired dexterity. The technique combines three "adaptive" approaches that have appeared separately in previous literature: adapting the user's abilities to the interface (i.e., by modifying the way the cursor works), adapting the user interface to the user's abilities (i.e., by enlarging items), and adapting the user interface to the user's task (i.e., by moving frequently or recently used items to a convenient location). Adaptive Click-and-Cross combines these three adaptations to minimize each approach's shortcomings, selectively enlarging items predicted to be useful to the user while employing a modified cursor to enable access to smaller items.
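As a sketch of the "items predicted to be useful" step, one could score on-screen items by how often and how recently they were used and enlarge only the top-scoring few. The scoring rule, weights, and function name below are illustrative assumptions, not the technique's actual predictor:

```python
def items_to_enlarge(usage_log, now, k=4, recency_weight=0.5):
    """Score each item by use frequency plus a recency bonus and return
    the k items predicted most useful. usage_log is a list of
    (item, timestamp) pairs; weights are illustrative, not the paper's."""
    scores = {}
    for item, t in usage_log:
        recency = 1.0 / (1.0 + (now - t))  # more recent -> closer to 1
        scores[item] = scores.get(item, 0.0) + 1.0 + recency_weight * recency
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

A real predictor would be tuned against observed usage; the point here is only that frequency and recency jointly decide which items get the enlarged targets.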
Curio: a platform for crowdsourcing research tasks in sciences and humanities
Curio is intended to be a platform for crowdsourcing research tasks in the sciences and humanities. The platform is designed to let researchers create and launch a new crowdsourcing project within minutes, and to monitor and control the crowdsourcing process with minimal effort. With Curio, we are exploring a new model of citizen science that significantly lowers the barrier to entry for scientists, developing new interfaces and algorithms for supporting mixed-expertise crowdsourcing, and investigating a variety of human computation questions related to task decomposition, incentive design, and quality control.
InProv: a Filesystem Provenance Visualization Tool
InProv is a filesystem provenance visualization tool that displays provenance data in an interactive radial tree layout. The tool also uses a new time-based hierarchical node grouping method that we developed for filesystem provenance data, which matches users' mental models and makes data exploration more intuitive. In an experiment comparing InProv to a visualization based on the traditional node-link representation, participants using InProv made more accurate assessments of provenance and found InProv to require less mental effort, less physical activity, and less work, and to be less stressful to use.
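To give a flavor of time-based grouping, here is a minimal sketch of one level of it: provenance events are clustered when their timestamps fall close together, so a burst of related filesystem activity collapses into a single node. The representation, window size, and function name are illustrative assumptions, not InProv's actual algorithm:

```python
def group_by_time(events, window=60.0):
    """One level of time-based grouping: cluster events whose timestamps
    fall within `window` seconds of the cluster's first event.
    Events are (name, timestamp) pairs; the window size is illustrative."""
    groups, current = [], []
    for ev in sorted(events, key=lambda e: e[1]):
        if current and ev[1] - current[0][1] > window:
            groups.append(current)  # burst ended; start a new cluster
            current = []
        current.append(ev)
    if current:
        groups.append(current)
    return groups
```

The hierarchical version would apply the same idea recursively with progressively larger windows, yielding the nested nodes of the radial layout.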
Predicting Users' First Impressions of Website Aesthetics
Users make lasting judgments about a website's appeal within a split second of seeing it for the first time. This first impression is influential enough to later affect their opinions of a site's usability and trustworthiness. In this project, we aim to automatically adapt website aesthetics to users' individual preferences in order to improve this first impression. As a first step, we are working on predicting what people find appealing, and how this is influenced by their demographic backgrounds. Although it is not yet known what exactly shapes this first impression of appeal, colorfulness and visual complexity have repeatedly been found to be the most noticeable design characteristics at first sight. We have therefore developed perceptual models of perceived visual complexity and colorfulness, which we then used to predict users' perception of appeal. Our approach is based on the assumption that this first impression can be adequately captured with a low-level image analysis of static website screenshots. In our upcoming CHI paper, we show that these models can account for approximately half of the variance in the observed ratings of aesthetic appeal, demonstrating that it is possible to quantify users' initial impressions of appeal using models of perceived visual complexity and colorfulness. Our results pave the way for larger endeavors to improve the user experience on the web, because the first impression counts.
[Related paper] [Data]
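To illustrate what a low-level image feature looks like, here is one widely used colorfulness measure, due to Hasler and Süsstrunk, computed from a screenshot's raw pixels. This is an example of the kind of feature such models build on, not necessarily the exact features used in our models:

```python
import math
from statistics import mean, pstdev

def colorfulness(pixels):
    """Hasler-Suesstrunk colorfulness for a list of (r, g, b) pixels in
    0-255. Combines the spread and magnitude of two opponent-color axes;
    higher values indicate a more colorful image."""
    rg = [r - g for r, g, b in pixels]             # red-green axis
    yb = [0.5 * (r + g) - b for r, g, b in pixels]  # yellow-blue axis
    std_root = math.sqrt(pstdev(rg) ** 2 + pstdev(yb) ** 2)
    mean_root = math.sqrt(mean(rg) ** 2 + mean(yb) ** 2)
    return std_root + 0.3 * mean_root
```

A pure gray screenshot scores exactly zero, while saturated, varied colors drive the score up, which matches the intuition the models rely on.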
Lab in the Wild
Most of what we know about human-computer interaction today is based on studies conducted with Western participants, usually with American undergrads. This is despite many findings that our cultural background affects our perception and preferences. Neuroscience research has even shown that cultural exposure leads to differences in neural activity -- a finding that might affect how we interact with computers. If people around the world perceive, process, and interact with information differently, then what should their user interfaces look like in order to be most intuitive for them to use?
With Lab in the Wild we are trying to shed light on this question. Our goal is to improve the user experience and performance for computer users around the world. But Lab in the Wild doesn't just help us answer our questions. It also provides participants with personalized feedback, which lets them compare themselves and their performance to people from other countries. Try it out :)
Accurate Measurements of Pointing Performance from In Situ Observations
We present a method for obtaining lab-quality measurements of pointing performance from unobtrusive observations of natural in situ interactions. Specifically, we developed a set of user-independent classifiers that discriminate between deliberate, targeted mouse pointer movements and movements affected by extraneous factors. Our results show that, on four distinct metrics, the data collected in situ and filtered with our classifiers closely match the results obtained from a formal lab experiment.
[Related paper] [Source Code and Data]
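The kind of kinematic features such classifiers consume can be sketched from a raw cursor trajectory. The features, thresholds, and function names below are illustrative stand-ins for the trained classifiers, chosen only to show the shape of the filtering step:

```python
import math

def trajectory_features(points):
    """Simple kinematic features from a cursor trajectory given as
    (x, y, t) tuples; illustrative, not the paper's actual feature set."""
    path = sum(math.dist(points[i][:2], points[i + 1][:2])
               for i in range(len(points) - 1))
    direct = math.dist(points[0][:2], points[-1][:2])
    duration = points[-1][2] - points[0][2]
    # Path efficiency: 1.0 for a perfectly straight movement.
    efficiency = direct / path if path > 0 else 0.0
    return {"duration": duration, "efficiency": efficiency}

def looks_deliberate(points, min_efficiency=0.7, max_duration=2.0):
    """Threshold rule standing in for the trained, user-independent
    classifiers: keep short, efficient movements; drop the rest."""
    f = trajectory_features(points)
    return f["efficiency"] >= min_efficiency and f["duration"] <= max_duration
```

In the actual system the decision boundary is learned rather than hand-set, but the pipeline is the same: extract movement features, then keep only movements classified as deliberate before computing pointing metrics.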
PlateMate: Crowdsourcing Nutrition Analysis from Food Photographs
PlateMate allows users to take photos of their meals and receive estimates of food intake and composition. Accurate awareness of this information is considered a prerequisite to successful change of eating habits, but current methods for food logging via self-reporting, expert observation, or algorithmic analysis are time-consuming, expensive, or inaccurate. PlateMate crowdsources nutritional analysis from photographs using Amazon Mechanical Turk, automatically coordinating untrained workers to estimate a meal's calories, fat, carbohydrates, and protein. To make PlateMate possible, we developed the Management framework for crowdsourcing complex tasks, which supports PlateMate's decomposition of the nutrition analysis workflow. Two evaluations show that the PlateMate system is nearly as accurate as a trained dietitian and easier to use for most users than traditional self-reporting, while remaining robust for general use across a wide variety of meal types.
[Related paper] [Data set]
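One ingredient of decomposing a complex task for untrained workers is aggregating redundant answers so that no single wild guess dominates. The sketch below shows that aggregation step for per-food calorie estimates; the data shapes, names, and the choice of median are assumptions for illustration, not PlateMate's actual pipeline:

```python
from statistics import median

def aggregate_measurements(worker_estimates):
    """Combine redundant worker estimates for one food item by taking
    the median, which is robust to a single outlier guess."""
    return median(worker_estimates)

def analyze_meal(per_food_estimates):
    """per_food_estimates: {food_name: [per-worker calorie estimates]}.
    Returns one aggregated calorie estimate per identified food."""
    return {food: aggregate_measurements(ests)
            for food, ests in per_food_estimates.items()}
```

The full workflow additionally coordinates the upstream stages (finding foods in the photo and identifying them) before any measurement is aggregated.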
PETALS Project -- A Visual Decision Support Tool For Landmine Detection
Landmines remain in conflict areas for decades after the end of hostilities. Their suspected presence renders vast tracts of land unusable for development and agriculture, causing significant psychological and economic damage. Landmine removal is a slow and dangerous process. Compounding the difficulty, modern landmines use minimal amounts of metallic content, making them very hard to detect and to distinguish from other metallic debris (such as bullet shells, wires, etc.) frequently present in post-combat areas. Recent research has demonstrated that the accuracy of landmine detection can be improved if deminers try to mentally represent the shape of the area where the metal detector's response gets triggered: despite similar amounts of metallic content, mines and clutter result in response areas of different shapes. Building on these findings, we have created a visual decision support tool that presents the deminer with an explicit visualization of the shapes of these response areas. The results of our study demonstrate that this tool significantly improves novice deminers' detection rates and localization accuracy.
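The core idea of making the response area's shape explicit can be sketched very simply: threshold a grid of detector readings and render the above-threshold region so its outline is visible. The grid representation, threshold, and function name are illustrative assumptions, not the tool's actual rendering:

```python
def response_shape(readings, threshold=0.5):
    """Render the region where the metal detector's response exceeds a
    threshold, one text row per grid row, so the shape of the response
    area can be inspected. The threshold value is illustrative."""
    return ["".join("#" if v >= threshold else "." for v in row)
            for row in readings]
```

A compact, roughly symmetric blob versus an irregular scatter is exactly the shape difference that distinguishes mines from clutter in the findings above.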
Automatic Task Design on Amazon Mechanical Turk
A central challenge in human computation is understanding how to design task environments that effectively attract participants and coordinate the problem-solving process. We consider a common problem that requesters face on Amazon Mechanical Turk: how should a task be designed so as to induce good output from workers? In posting a task, a requester decides how to break down the task into unit tasks, how much to pay for each unit task, and how many workers to assign to a unit task. These design decisions affect the rate at which workers complete unit tasks, as well as the quality of the work that results. Using image labeling as an example task, we consider the problem of designing the task to maximize the number of quality tags received within given time and budget constraints. We consider two different measures of work quality, and construct models for predicting the rate and quality of work based on observations of the output produced under various designs. Preliminary results show that simple models can accurately predict the quality of output per unit task, but are less accurate in predicting the rate at which unit tasks complete. At a fixed rate of pay, our models generate different designs depending on the quality metric, and optimized designs obtain significantly more quality tags than baseline comparisons.
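The optimization loop described above can be sketched as a search over a small design space, using predictive models of quality and rate as inputs. The toy models, design variables, and function names below are assumptions for illustration; the project's fitted models and constraints are richer than this:

```python
from itertools import product

def expected_quality_tags(pay_cents, workers, predict_quality, predict_rate,
                          budget_cents, time_limit):
    """Expected number of quality tags for one design. predict_quality
    and predict_rate are placeholders for the fitted models: quality of
    a tag and unit tasks completed per hour, both as functions of pay."""
    cost_per_unit = pay_cents * workers
    units_affordable = budget_cents // cost_per_unit
    units_done = min(units_affordable, int(predict_rate(pay_cents) * time_limit))
    return units_done * workers * predict_quality(pay_cents)

def best_design(pays, worker_counts, predict_quality, predict_rate,
                budget_cents, time_limit):
    """Exhaustively search the (pay, workers) design space for the
    highest expected yield under the budget and time constraints."""
    return max(product(pays, worker_counts),
               key=lambda d: expected_quality_tags(d[0], d[1],
                                                   predict_quality,
                                                   predict_rate,
                                                   budget_cents, time_limit))
```

With better rate and quality predictions, the same search trades off higher pay (faster, better work) against redundancy (more workers per unit task) under the fixed budget.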