Thursday, July 14th, 2011

moodMixer, for mp3 player and library

Music is an intensely social and emotional medium, yet the typical interface for a digital music library (such as iTunes) is text-heavy, list-based, and closed to social interaction. moodMixer is a prototype visual/emotional/social interface for an mp3 player and personal music library that dynamically leverages crowd-sourced social data. The software acts on a user’s existing music library, compiling a list of songs and accessing a set of user-generated tags for each song through a social music site’s API. For each song in the user’s library, the tags are evaluated for keywords relating to emotion, and the song is sorted into one of eight emotional categories: aggressive, chill, gloomy, melancholy, hyper, happy, romantic, and sexy. Each emotion is assigned a color code.

In the interface, by clicking on color-coded blocks, the user simultaneously defines his or her mood (current or desired) and creates a novel playlist of randomly selected songs reflecting that mood. The feedback is immediate and highly visual, and includes a clear temporal element: the color arrangement within the playlist indicates how the mood of the music will change over the length of the playlist. Within the player, individual tags from the data set scroll across the screen, adding a social, almost conversational feel to the listening experience. moodMixer combines the popular random-shuffle feature with mood categorization and social data, enabling users to make a more satisfying mix without the effort of handcrafting a playlist.
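The tag-to-mood sorting step can be sketched roughly as follows. This is a minimal Python sketch, not the original Java/Processing code; the keyword lists for each mood are hypothetical, since the original keyword sets aren’t published.

```python
# Hypothetical sketch of moodMixer's tag-to-mood sorting step.
# The keyword sets below are illustrative stand-ins.
MOODS = {
    "aggressive": {"aggressive", "angry", "brutal", "heavy"},
    "chill":      {"chill", "chillout", "mellow", "relaxing"},
    "gloomy":     {"gloomy", "dark", "bleak"},
    "melancholy": {"melancholy", "sad", "wistful"},
    "hyper":      {"hyper", "energetic", "upbeat", "dance"},
    "happy":      {"happy", "cheerful", "fun"},
    "romantic":   {"romantic", "love", "tender"},
    "sexy":       {"sexy", "sultry", "seductive"},
}

def categorize(tags):
    """Return the mood whose keyword set best matches the song's tags,
    or None if no emotion keyword appears in the tags."""
    tag_set = {t.lower() for t in tags}
    scores = {mood: len(keywords & tag_set) for mood, keywords in MOODS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None
```

A song tagged “mellow” and “chill” would land in the chill bin, while a song with no emotion keywords would be left uncategorized.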

This project was independently conceptualized, designed, and coded in Java/Processing during an internship with the Creative Systems Group at Microsoft Research, Summer 2009. Thanks to Shane Williams and Tom Bartindale for their support and assistance with this project.

Saturday, June 18th, 2011

textApart
This project strips away the dominant, semantic aspects of a text to explore and express its purely formal qualities, the underlying skeleton of the text. textApart is a Processing application that visualizes the structure, rhythm, and phonetic patterns of a collection of words. textApart displays the selected text in a series of views in which the text is abstracted into shapes and patterns: stacks of word-shapes identify repetition, uncommon word-shapes swell, and common word-shapes shrink. By comparing texts, we can discover the differences and similarities hidden under the message, in the mechanics of the text. In these examples, texts by Henry James and Britney Spears are visualized and compared.
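One plausible version of the word-shape abstraction is sketched below in Python (the original is a Processing application). The specific reduction, collapsing each letter to ascender, descender, or x-height, is an assumption; counting shape frequencies is what lets repeated shapes stack and rare ones stand out.

```python
from collections import Counter

def word_shape(word):
    """Reduce a word to a crude visual skeleton, ignoring what it means:
    '^' for ascenders, 'v' for descenders, '-' for x-height letters."""
    shape = []
    for ch in word.lower():
        if ch in "bdfhklt":
            shape.append("^")   # ascender
        elif ch in "gjpqy":
            shape.append("v")   # descender
        elif ch.isalpha():
            shape.append("-")   # x-height letter
    return "".join(shape)

def shape_counts(text):
    """Count how often each word-shape occurs, so repeated shapes 'stack'."""
    return Counter(word_shape(w) for w in text.split())
```

Under this scheme “the” becomes `^^-` and “cat” becomes `--^`, so two texts can be compared purely by their shape distributions.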

Tuesday, June 14th, 2011

The Semantic Thermometer

The Semantic Thermometer is a playful, visual way to display current local temperature data. The program takes a location and accesses up-to-the-minute weather information for that location, using Yahoo’s Weather API and a Where On Earth ID (WOEID). The current temperature reading for that location is translated from a number to a corresponding descriptive word (for example, 77 degrees Fahrenheit would be translated as “warm” and 24 degrees Fahrenheit would be “frigid”) from a set of words I defined. The program sends the word to the Flickr API, where it searches for and retrieves the latest user-generated photos tagged with that word. These photos are displayed in a grid on the screen, with a new photo added every few seconds, to show the current local temperature in images. When the user rolls over an image, the descriptive word appears. The Semantic Thermometer invents surprising and baffling new metaphors, as it works in the gap between a specific physical experience and the imprecise, complex language we use to describe it.
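The number-to-word translation step might look like the following Python sketch. The thresholds and word list here are hypothetical, chosen only to be consistent with the two examples above; the original program used its own author-defined word set.

```python
def temp_to_word(temp_f):
    """Translate a Fahrenheit reading into a descriptive word.
    Thresholds and vocabulary are illustrative, not the original set."""
    if temp_f < 32:
        return "frigid"
    if temp_f < 50:
        return "cold"
    if temp_f < 65:
        return "cool"
    if temp_f < 72:
        return "mild"
    if temp_f < 85:
        return "warm"
    if temp_f < 95:
        return "hot"
    return "sweltering"
```

The resulting word, rather than the number, is what gets sent to the Flickr tag search.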

Thursday, April 21st, 2011

All the Dairy Queens, Melting

A visualization of all the geographical locations of Dairy Queen Shops in the USA, displayed as drips of melted ice cream on a hot sidewalk. This program was written in Processing.

All the Dairy Queens, Melting from zannahlou on Vimeo.

Sunday, March 6th, 2011

Feedback Playback

Feedback Playback is a dynamic biofeedback action movie viewing and re-editing system. In the system, the users’ physical state determines the visceral quality of movie scenes displayed; immediate reactions to the scenes feed back to generate a cinematic crescendo or a lull. We used material that is rigorously narrative, formulaic, and plentiful: the action movie series Die Hard, starring Bruce Willis.

In Feedback Playback, the cinematic converges with the physical present, exploiting the power of fiction to manipulate and alter our state of being at the most basic, primal level. We attempt to synchronize media and viewer, whether towards a static loop or an explosive climax.

The system consists of: 1) a panel for user input. We use Galvanic Skin Response (GSR), which measures arousal via skin conductivity; this is the same type of data collected in lie-detector tests. In this case, we measure GSR across the fingertips. The panel contains a microcontroller connected to a laptop inside the panel enclosure. 2) A library of short clips from the Die Hard movies, each about 5 to 10 seconds long, sorted into high, medium, and low (or hard, medium, and soft) action/arousal categories. 3) An openFrameworks application that manages the library and displays clips according to GSR input. 4) A monitor that displays the clips, along with a real-time graph and score of the user’s response. This project was developed in collaboration with Che-Wei Wang.
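The feedback loop that maps a GSR reading to a clip category can be sketched as follows. This is a Python sketch of the logic (the actual application is in openFrameworks/C++), and the normalized thresholds are invented for illustration, not the values used in the installation.

```python
import random

def pick_category(gsr, low=0.33, high=0.66):
    """Map a normalized GSR reading (0..1) to a clip-intensity bin.
    Thresholds are illustrative placeholders."""
    if gsr < low:
        return "soft"
    if gsr < high:
        return "medium"
    return "hard"

def next_clip(gsr, library):
    """Pick a random clip from the bin matching the viewer's current
    arousal, so reactions feed back into what plays next."""
    return random.choice(library[pick_category(gsr)])
```

A calm viewer keeps getting soft clips (a static loop), while rising arousal pulls harder clips into rotation (the cinematic crescendo).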

Friday, June 13th, 2008

Web Design: Katie Peterson

A custom WordPress site designed and built for the poet and writer Katie Peterson, whose poems explore interior and exterior landscapes, exposure and shelter. Below are design treatments and mock-ups for the site.

View the site at .

All content © Copyright 2024 by Zannah Marsh.