Lydia Chilton is an Assistant Professor in the Computer Science Department at Columbia University. She is an early pioneer in crowdsourcing complex tasks on Mechanical Turk. Currently she leads the Computational Design Lab, whose goal is to build AI tools that enhance people's productivity. The three main approaches are to:

  • discover principles of successful solutions
  • design better solutions through brainstorming, synthesis, and iteration
  • communicate complex ideas more easily with visual symbols and well-grounded prose

In Computational Design, we first seek to understand the mechanisms of design, then build tools that combine the abilities of people and computers to solve complex and creative tasks that neither can do alone.

Computer Science Department
chilton@cs.columbia.edu
Google Scholar Page
Office: CEPSR 612
CV

Courses

I teach user interface and design classes at Columbia University.
  • COMS 4170: User Interface Design (2025, 2024, 2022, 2020, 2019, 2018)
  • COMS 6998: Designing with Generative AI (2024, 2023)
  • COMS 6998: Advanced Web Design Studio (2020, 2019, 2018)
  • IEOR 4574: US Census Design Challenge: A Human Centered Design Approach (co-taught with Prof Harry West)

Publications

See publications on the Computational Design Lab homepage.

Past Projects

VisiBlends: A Workflow for Creating Visual Blends
Visual blends are an advanced graphic design technique to draw attention to a message. They combine two objects in a way that is novel and useful in conveying a message symbolically. This paper presents VisiBlends, a flexible workflow for creating visual blends that follows the iterative design process. We introduce a design pattern for blending symbols based on principles of human visual object recognition. Our workflow decomposes the process into both computational techniques and human microtasks. It allows users to collaboratively generate visual blends with steps involving brainstorming, synthesis, and iteration. An evaluation of the workflow shows that decentralized groups can generate blends in independent microtasks, co-located groups can collaboratively make visual blends for their own messages, and VisiBlends improves novices' ability to make visual blends.
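
To give a flavor of how computational steps and human microtasks fit together, here is a minimal Python sketch, not code from VisiBlends: the concepts, image names, and shape labels are made-up illustrations, but the sketch shows the idea of a computational pass that pairs crowd-annotated objects whose basic shapes are compatible.

    from itertools import product

    # Output of human microtasks: each concept gets candidate images, and
    # each image is annotated with the basic shape of the object in it.
    annotations = {
        "coffee":  [{"image": "mug.png",   "shape": "cylinder"},
                    {"image": "bean.png",  "shape": "sphere"}],
        "morning": [{"image": "sun.png",   "shape": "sphere"},
                    {"image": "alarm.png", "shape": "cylinder"}],
    }

    def candidate_blends(concept_a, concept_b):
        """Pair images from the two concepts whose annotated shapes match."""
        return [(a["image"], b["image"])
                for a, b in product(annotations[concept_a], annotations[concept_b])
                if a["shape"] == b["shape"]]

    print(candidate_blends("coffee", "morning"))
    # [('mug.png', 'alarm.png'), ('bean.png', 'sun.png')]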

VisiFit: Iteratively Improving Visual Blends
Visual blends are an advanced graphic design technique to seamlessly integrate two objects into one. Existing tools help novices create prototypes of blends, but it is unclear how they would improve them to be higher fidelity. To help novices, we aim to add structure to the iterative improvement process. We introduce a method for improving prototypes that uses secondary design dimensions to explore a structured design space. This method is grounded in the cognitive principles of human visual object recognition. We present VisiFit – a computational design system that uses this method to enable novice graphic designers to improve blends with computationally generated options they can select, adjust, and chain together. Our evaluation shows novices can substantially improve 76% of blends in under 4 minutes. We discuss how the method can be generalized to other blending problems, and how computational tools can support novices by enabling them to explore a structured design space quickly and efficiently.
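
As a rough illustration of chaining computational options, here is a toy Python sketch; it is not VisiFit's implementation, and the specific image operations and the file name are assumptions. The point is that each step is a small, selectable adjustment along one design dimension, and steps can be chained.

    from PIL import Image, ImageEnhance, ImageOps

    def simplify_silhouette(img):
        # Reduce detail so the outer shape reads more clearly.
        return ImageOps.posterize(img.convert("RGB"), 3)

    def adjust_color(img, factor=1.3):
        # Push the color balance toward one of the blended objects.
        return ImageEnhance.Color(img).enhance(factor)

    def adjust_contrast(img, factor=1.1):
        # Sharpen internal details without redrawing them.
        return ImageEnhance.Contrast(img).enhance(factor)

    # A novice selects options along each dimension and chains them together.
    pipeline = [simplify_silhouette, adjust_color, adjust_contrast]

    def refine(prototype_path, steps):
        img = Image.open(prototype_path)
        for step in steps:
            img = step(img)
        return img

    # refined = refine("blend_prototype.png", pipeline)  # hypothetical file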
Cascade: Crowdsourcing Taxonomy Creation
Taxonomies are essential for getting a big picture view on large datasets. Often human insight is needed to find the connections in data, but people find large organization tasks overwhelming. Cascade is an algorithm that crowdsources taxonomy creation by distributing the task into hundreds of easy subtasks. Each worker makes local judgements about data items without needing a global view of the data.
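
To give a flavor of the aggregation idea, here is a minimal Python sketch (not Cascade's actual implementation): after many small "does this item belong to this category?" judgments, one category is nested under another when most of its items also belong to the larger category. The example data and the 0.75 threshold are illustrative assumptions.

    votes = {  # category -> set of items workers judged to belong to it
        "animals": {"cat", "dog", "sparrow", "eagle"},
        "birds":   {"sparrow", "eagle"},
        "pets":    {"cat", "dog"},
    }

    def nests_under(child, parent, threshold=0.75):
        """Child nests under parent if most of its items also belong to parent."""
        overlap = len(votes[child] & votes[parent]) / len(votes[child])
        return child != parent and overlap >= threshold

    taxonomy = {c: [p for p in votes if nests_under(c, p)] for c in votes}
    print(taxonomy)  # {'animals': [], 'birds': ['animals'], 'pets': ['animals']}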

Frenzy: Collaborative Data Organization
Frenzy is a communitysourcing tool that builds on the ideas of Cascade, but affords the greater transparency and communication that community members require. We deployed a production version of Frenzy to a group of 60 domain experts to categorize conference papers and then group them into sessions. This organizational task involves both crowdsourcing a cohesive picture of a large dataset and collectively meeting a global constraint.

Frenzy was deployed at the CSCW 2013 and CHI 2014 program committee meetings to organize the accepted papers into conference sessions.
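
The kind of global constraint involved can be sketched in a few lines of Python. This is a hedged illustration, not Frenzy's code: the session names, paper IDs, and the session size of 4 are assumptions, but the check captures the idea that every paper must land in exactly one session and every session must reach a fixed size.

    from collections import Counter

    SESSION_SIZE = 4

    sessions = {
        "Crowdsourcing": ["p12", "p31", "p44", "p57"],
        "Social Media":  ["p03", "p18", "p44"],   # p44 is accidentally in two sessions
    }

    def constraint_violations(sessions, all_papers):
        counts = Counter(p for papers in sessions.values() for p in papers)
        problems = [f"{p} is unassigned" for p in all_papers if counts[p] == 0]
        problems += [f"{p} is in {n} sessions" for p, n in counts.items() if n > 1]
        problems += [f"'{name}' has {len(ps)} papers (needs {SESSION_SIZE})"
                     for name, ps in sessions.items() if len(ps) != SESSION_SIZE]
        return problems

    print(constraint_violations(sessions, ["p03", "p12", "p18", "p31", "p44", "p57"]))
    # ['p44 is in 2 sessions', "'Social Media' has 3 papers (needs 4)"]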


TurKit: Human Computation Algorithms on Mechanical Turk
TurKit was the first demonstration of crowd algorithms on MTurk. It could solve hard problems like handwriting recognition by allowing workers to build on the insights of others. TurKit inspired, and was used to implement, Michael Bernstein's Soylent and Jeff Bigham's VizWiz.
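
The iterative pattern TurKit demonstrated can be sketched schematically. This is a Python sketch of the pattern, not TurKit's actual JavaScript API; post_improve_hit and post_vote_hit are hypothetical stand-ins for real crowdsourcing calls. Each round, one worker improves the previous answer, and a vote decides whether to keep the new version.

    def post_improve_hit(image, current_guess):
        """Ask one worker to improve the current transcription (stub)."""
        raise NotImplementedError("replace with a real crowdsourcing call")

    def post_vote_hit(image, old_guess, new_guess):
        """Ask several workers to vote for the better transcription (stub)."""
        raise NotImplementedError("replace with a real crowdsourcing call")

    def transcribe(image, rounds=6):
        guess = ""
        for _ in range(rounds):
            improved = post_improve_hit(image, guess)           # one worker edits the text
            if post_vote_hit(image, guess, improved) == "new":  # keep it only if voters prefer it
                guess = improved
        return guess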

Job Materials 2016

If you are on the job market and looking for examples of application materials, you are welcome to use mine.

Job Talk Slides (download)
CV
Research Statement
Teaching Statement
List of References