Design lead, prototyping, fabrication, software


Benjamin Lichtner (sound design), Michael Wagman (physics advisor), Adam Coogan (physics advisor)


Processing, SuperCollider (sound design)


Named after abstract painter Wassily Kandinsky, the Wassiliscope analyzes the frequency of incoming light waves and converts them to their corresponding audible frequencies. The audio is then sent through a triangle-wave oscillator and out to the headphones in real time, allowing you to listen to color.

A Key Realization

Conversations with Benny, Adam, and Mike led us to realize that light and sound are both waves; the brain just happens to interpret these frequency ranges in different ways. Without our collaboration, I never would have made the connection or realized that this mapping from one frequency to another was not just theoretical. Without me, they probably wouldn't have considered that crossing light and sound could produce such visceral interactions.
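To make the correspondence concrete: shifting a frequency down by one octave just halves it, and about 40 such halvings carry visible light into the audible range. A minimal sketch (illustrative only, not the Wassiliscope's actual mapping):

```python
def octaves_down(freq_hz, octaves):
    """Shift a frequency down by a whole number of octaves (halvings)."""
    return freq_hz / (2 ** octaves)

# Violet light at 789 THz (7.89e14 Hz), shifted down 40 octaves,
# lands near 718 Hz -- comfortably inside human hearing.
audible = octaves_down(7.89e14, 40)
```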

Function -> Interface

This translation also helped inform the physical interface. Pointing the telescope lens and hearing the change in real time makes the relationship between light and sound tangible. Eventually, you can begin to see the color in your mind's eye without needing to look.

Engineering Challenges

The biggest engineering challenge was, of course, the main mapping function that took pixel input from a camera, translated pixel color to light frequency, and then mapped a light frequency to its corresponding audible frequency. There are a few common ways computers can define color spaces:

  • RGB: Stands for red, green, and blue; the most common color space
  • CMYK: Stands for cyan, magenta, yellow, and key (black); commonly used for four-color printing
  • HSV: Stands for hue, saturation, value (also known as HSB). The hue value also happens to correspond to the visible light spectrum! (shown below, minus the value/brightness dimension)

After a few false starts, I learned more about color spaces and decided to read the camera’s pixel data in HSB format, which has a direct relationship to the color range visible to humans.

In HSB, a hue of 300 corresponds to violet (789 THz, or 380 nm), the highest frequency of light we can see (and the shortest wavelength), while a hue of 0 corresponds to red, the lowest visible frequency of light (and the longest wavelength).
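That hue-to-light-frequency relationship can be sketched as a simple linear interpolation between the two endpoints. The violet endpoint (789 THz) comes from the text above; the red endpoint of roughly 430 THz (about 700 nm) is my assumption:

```python
def hue_to_light_thz(hue):
    """Map an HSB hue (0 = red, 300 = violet) to a visible-light
    frequency in THz via linear interpolation.

    Violet endpoint (789 THz / 380 nm) is from the text;
    the red endpoint (~430 THz / ~700 nm) is an assumption.
    """
    RED_THZ, VIOLET_THZ = 430.0, 789.0
    hue = max(0.0, min(300.0, hue))  # clamp to the visible hue range
    return RED_THZ + (hue / 300.0) * (VIOLET_THZ - RED_THZ)
```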

Once the input was in a format that made sense, I had to write a function to map it first to the visible-light range and then to an audio range. The math isn’t complicated, but it was very messy in code. It turned out there was a much simpler solution than I initially thought: I could map the hue range from HSB (the other two values correspond to saturation and brightness, not frequency) directly to an audio frequency range rather than mapping twice.
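A minimal sketch of that one-step mapping. The audio endpoints here (220–880 Hz, two octaves around A) are my own illustrative choice, not the values from the actual sketch:

```python
def hue_to_audio_hz(hue, low_hz=220.0, high_hz=880.0):
    """Map an HSB hue (0-300) directly to an audible frequency in Hz.

    low_hz/high_hz are illustrative endpoints, not the Wassiliscope's
    actual audio range.
    """
    hue = max(0.0, min(300.0, hue))  # clamp to the visible hue range
    return low_hz + (hue / 300.0) * (high_hz - low_hz)
```

In Processing this is a single call to `map(hue, 0, 300, lowHz, highHz)`, with the result fed to the triangle-wave oscillator.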

Practical Applications

The frequency mapping technology behind the Wassiliscope also has potential utilitarian and commercial applications. My favorite so far is using it to build a sonar for the blind. Some blind people use an echolocation technique called FlashSonar, where they make a series of clicks with their mouth and listen for the reverberations, which change depending on the proximity and material of the objects the sound reflects off of. Surprisingly, with practice, FlashSonar can activate the visual parts of the brain. Every click gives a better picture and can even allow people to detect low-resolution 3D geometry.

One of the problems with FlashSonar is that some blind people feel self-conscious about walking around clicking in public, so much so that some choose not to use it.

My idea is to emit an ultrasonic ping, listen for its reverberation, and use the frequency mapping from the Wassiliscope to shift it back down into the audible range. Blind users could scan their environment with the device without feeling self-conscious. The pings could also be more frequent than a person would care to click, giving a more accurate picture of the environment. And because the device would still rely on audio echolocation, it would hopefully activate the visual centers of the brain the way FlashSonar does.
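As a toy illustration of the downshift, the same octave-halving idea applies: a 40 kHz ping (a typical ultrasonic transducer frequency, and my assumption, not a figure from the text) shifted down five octaves lands at 1,250 Hz, well inside the audible range:

```python
def downshift_echo(freq_hz, octaves=5):
    """Shift an ultrasonic echo down into the audible range by
    repeated halving (one halving per octave).

    The 5-octave default and the 40 kHz example are illustrative
    assumptions, not design parameters from the project.
    """
    return freq_hz / (2 ** octaves)

audible_echo = downshift_echo(40_000)  # 40 kHz ping -> 1250 Hz
```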