Optik is a camera-headphone prototype designed for Samsung in 2014 by the UX Mobile Lab team in San Francisco, CA. The design concept explores ideas posed by sousveillance theorists(1) (sousveillance being the inverse of surveillance) in the form of a mass consumer product. The project combines technologies commonly associated with surveillance apparatuses, such as computer vision and sound detection, to enable new use cases like human augmentation, live streaming (civic tech), and the quantified self. The design discovery process was done in collaboration with activists from the protests in Brazil and the Occupy movement in NYC around 2014. In both events, protesters were threatened by abuses of state power. Easily accessible live-stream cameras were crucial in protecting civilians from police brutality, and they embodied a very different camera-subject relationship.
Optik doesn’t propose anything radically new; instead, it acknowledges how users were already strapping cameras onto things: helmets, bikes, and chests, for documentation or live-streaming purposes. As a result, the final design simply iterates on those ideas to create a more robust version of something “already invented”, confirming the idea that “the street finds its own uses for things”(2).
How it works
Machine learning and voice-assistant chips weren’t nearly as developed as they are now. As a result, the early prototype designs relied on phone data (sensors, GPS, and accelerometer) and sound detection from the camera’s microphone (loudness, pitch detection, FFT analysis) to classify events, rather than data from the image sensor. Once classified, events triggered different actions: initiate a live stream, translate image to text, capture a photo, etc. Aside from “pre-baked” actions, the system featured a very simple training algorithm that let users create customizable event triggers by interpreting new data patterns, such as “take a video whenever cycling speed is higher than 20 mph, around this geographic location”.
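The user-defined trigger described above can be sketched as a simple predicate over sensor readings. This is a hypothetical illustration, not the prototype’s actual code: the `SensorEvent` type, the bounding-box geofence check, and the `"start_video"` action name are all assumptions made for the sake of the example.

```python
from dataclasses import dataclass

@dataclass
class SensorEvent:
    """A hypothetical snapshot of phone sensor data."""
    speed_mph: float
    lat: float
    lon: float
    loudness_db: float

def within_geofence(lat, lon, center, radius_deg=0.01):
    # Crude bounding-box check around a center point (illustrative only;
    # a real system would use proper geodesic distance).
    return abs(lat - center[0]) <= radius_deg and abs(lon - center[1]) <= radius_deg

def make_cycling_trigger(center, speed_threshold=20.0):
    """Build a trigger mirroring the rule: 'take a video whenever cycling
    speed is higher than 20 mph, around this geographic location'."""
    def trigger(evt):
        if evt.speed_mph > speed_threshold and within_geofence(evt.lat, evt.lon, center):
            return "start_video"  # action fired by the classified event
        return None  # no action for this reading
    return trigger

trigger = make_cycling_trigger(center=(37.77, -122.42))
print(trigger(SensorEvent(22.0, 37.771, -122.421, 60.0)))  # start_video
print(trigger(SensorEvent(15.0, 37.771, -122.421, 60.0)))  # None
```

In this sketch each trigger is just a closure over its parameters, so new user-trained rules can be added by composing more predicates over the same event stream.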
Interaction model and affordances
Despite major advances in processing and automating actions through algorithms and sensors, most cameras today remain tightly coupled with a model invented decades ago, from the interaction model itself to the skeuomorphic icon that illustrates the app on most home screens. The thesis our research sought to explore relied on the assumption that computer vision’s potential to enable new use cases could only be harnessed by rethinking the camera’s form factor and how one might interact with it. In other words, it’s not enough to have powerful algorithms to process image data from the camera if the interaction model remains the same. Departing from the well-known model (a user unlocks her phone, activates the camera, chooses an action such as photo or video, and captures), Optik’s main design question asked what new form factors might look like, and how they might be augmented by state-of-the-art software.
Optik was one of many concepts explored by the design team at Samsung’s UX Mobile Lab in San Francisco, CA. As lead designer, my role in the project consisted of proposing the original concept and overseeing collaboration with hardware and software engineers to develop working prototypes. Earlier prototypes were designed to validate the proof of concept, followed by interface design and companion software. The industrial design concept was developed in partnership with Astro Studios.
1. Steve Mann, Jason Nolan, and Barry Wellman, “Sousveillance: Inventing and Using Wearable Computing Devices for Data Collection in Surveillance Environments,” Surveillance & Society (2002)
2. William Gibson, “Burning Chrome” (1982)