Most applications of AI today use data to train a machine learning model to predict an outcome. While this interaction model works well where a task can be clearly defined (and therefore automated), it breaks down when the task is not clearly defined. Music creation is one of many examples where current black-box models fall short, and where even state-of-the-art algorithms are unable to perform well. For example, a musician looking to compose a new piece of music with existing machine learning tools has little means of steering the output towards their intention.
I started the project by mapping out the space of AI research, evaluating the available tools (apps, plugins, frameworks) and their usability problems, and gathering general insights about their designs. In addition, I worked with six composers throughout the design and ideation process to validate ideas and understand their needs when interacting with generative systems.
While machine learning works well in applications where a task can be easily generalized from training data, it fails at augmenting users in tasks where the goal is diverse, personal, and not clearly defined. Current tools fail, for example, at delivering novelty (new combinations not seen in the data) and at offering granular control to users. This, in turn, makes it difficult for musicians to find the musical outcomes they want, overwhelming them with an endless stream of ideas, most of them unrelated to the artist’s initial intention. In summary, the design paradigm that current machine learning applications are based on needs to be challenged if AI is to become a tool that augments human capabilities beyond what either a human or a machine can achieve in isolation.
Gradient introduces an alternative approach to designing with AI by making its algorithms transparent and easy for users to understand. As a result, the interface allows composers to express their ideas by interacting with the algorithm and steering it towards novel outcomes. Instead of a system that composes for musicians, Gradient is a tool they can compose with. By mixing simple components in a node-graph interface, patterns can be reconfigured to create endless new musical permutations. While Gradient focuses on music generation, the lessons learned here can be applied to other domains such as text generation or scientific discovery.
Product Design
User Research
Data Visualization Design
Conception and Design
User-centered Design
Iterative Prototyping & Wireframing
User Research & Insights
Berlin
4 months
I began my research by evaluating existing tools, looking at their interaction models (how they work and which metaphors they are based on) and their information architecture. I used the tools myself to compose and explore musical ideas. I followed a process similar to heuristic evaluation and task analysis, with the exception of not engaging a group of specialists. This step helped me understand how these applications differ, the spectrum of possibilities I could focus on, and which problems I was interested in solving moving forward.
My domain-specific knowledge of music composition and production allowed me to simulate the path a user might take, from exploring websites and comparing value propositions to installing and using the tools.
I was interested in mapping out:
The evaluation enabled me to understand the whole gamut of users the different tools were targeting, and how well certain user groups are currently served by the available tools. By defining what each group needs and the tasks they are trying to accomplish, I could see where the available solutions could be improved. Specifically, I found that composers were the least well served by existing tools. Their needs are very specific and the most challenging to design for: composers need tools that offer fine-grained control and deep customization while remaining easy to use. This points to what I expect to be a central question for machine learning design over the next years: how do we make machine learning accessible not only to data scientists and programmers, but to everyone who wants to be augmented by what the technology can offer?
Another major insight was how the concept of augmentation is used as a buzzword without considering what it entails from an interaction design perspective. Many applications that claimed to augment composers in music creation tasks were in fact automating those tasks for them.
My interview process focused on a qualitative understanding of the composers’ workflows, tasks, and emotions around the creative process.
Once I had defined my user group, I set out to understand how different composers approach the task of music composition. I visited them at their studios instead of meeting them remotely to get a better sense of their tools and environment. I was also interested in how they think about music conceptually, and how those concepts translate into the way they approach their tools (software, hardware, instruments).
I centered the interviews on a few open-ended questions to guide the discussion:
Entry points and axioms
Conceptual models: how they think about musical events
In addition to the user interviews, I led a panel discussion with five musicians about their experience working with machine learning in their creative process. The full recording of the panel can be seen here:
User problems with pre-trained models in artistic contexts
Shallow learning: a new approach to machine learning design
Design principles define the qualities that the design solutions should capture. They work as safeguards for quickly validating the different ideas that come up along the way. Good design principles describe the most important elements of the solution without being prescriptive, and keep future solutions aligned with the themes found earlier in the ideation phase.
MML is a visual language that enables the model to express the concepts it learned from training data visually. By exposing the model through a visual interface, users can further manipulate the output by changing weights, operators, connections, and parameters. MML works in two directions: it shows users what the model sees, and it offers musicians a way to steer the algorithm towards novel outcomes the model has never seen. Instead of exposing the neural network as-is, Gradient abstracts its parameters into simpler components to strike the best balance between granularity, usability, and control.
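To make the component idea concrete, here is a minimal sketch of how such a node graph could be composed and evaluated. All names (`Pattern`, `Transpose`, `Mix`) and the note encoding are hypothetical illustrations for this example, not Gradient’s actual components:

```python
# A minimal sketch of the node-graph idea behind MML. All names and
# the note encoding are hypothetical, not Gradient's implementation.
from dataclasses import dataclass


@dataclass
class Pattern:
    """Source node: a note pattern (MIDI pitches) extracted by the model or drawn by the user."""
    notes: list

    def render(self) -> list:
        return list(self.notes)


@dataclass
class Transpose:
    """Operator node: shifts every note of its input by a fixed interval."""
    source: object
    semitones: int

    def render(self) -> list:
        return [n + self.semitones for n in self.source.render()]


@dataclass
class Mix:
    """Weighted merge of two branches; the weight decides how often each branch is heard."""
    a: object
    b: object
    weight: float = 0.5  # 1.0 = only branch a, 0.0 = only branch b

    def render(self) -> list:
        return [x if (i % 4) / 4 < self.weight else y
                for i, (x, y) in enumerate(zip(self.a.render(), self.b.render()))]


# Reconfiguring the same components yields new permutations:
theme = Pattern([60, 62, 64, 67])       # a motif the model extracted
fifth_up = Transpose(theme, 7)          # steer it towards new material
print(Mix(theme, fifth_up, weight=0.5).render())  # [60, 62, 71, 74]
```

In a sketch like this, changing a weight, swapping an operator, or rewiring a connection produces a new permutation without retraining the model, which is the kind of steering the interface is meant to afford.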
MML draws heavily on the abstract-score experiments of the 1960s and 70s by electronic musicians and avant-garde composers. In these scores, artists explored ways to encode musical ideas through visual abstractions rather than notes on a page (music notation). I was interested in exploring ideas that lived in between music notation and abstract visual representation, but that could also be used to generate ideas instead of only representing them (a two-way connection). I spent a month searching for a solution that achieves this balance and is robust enough to encode a variety of ideas. Once I found a promising direction, I tested it by implementing a proof of concept in Max/MSP. The proof of concept featured very few operators, but worked seamlessly as a no-code programming language to assist the composition process.
Once a MIDI file is imported into Gradient, the software builds a map of the relationships between note events and between instruments. The system looks at all the instruments at once and tries to find groups and relationships among them. For instance, two instruments could be linked together or set in opposition to each other, either by events (one only plays when the other is silent) or by interval and harmonic relationships. Once analyzed, these relationships are plotted onto the visual language.
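To illustrate the kind of analysis this implies, here is a rough sketch of a pairwise comparison between instrument parts. The event format, the consonance set, and the threshold are assumptions made for the example; Gradient’s actual analysis is more involved:

```python
# A rough sketch of the pairwise instrument analysis described above.
# The event format, interval set, and threshold are illustrative
# assumptions, not Gradient's actual algorithm.
from itertools import combinations

# Each instrument maps to note events: (start_beat, MIDI pitch).
instruments = {
    "bass": [(0, 36), (4, 38), (8, 36)],
    "lead": [(0, 60), (4, 62), (8, 64)],
    "pad":  [(2, 55), (6, 57)],
}

CONSONANT = {0, 3, 4, 5, 7, 8, 9}  # intervals (mod 12) counted as harmonically linked


def relationship(events_a, events_b):
    """Classify a pair of instruments by event overlap and interval content."""
    beats_a = {beat for beat, _ in events_a}
    beats_b = {beat for beat, _ in events_b}
    shared = beats_a & beats_b
    if not shared:
        # The two parts never sound together: an opposition by events
        # (one only plays when the other is silent).
        return "opposed (by events)"
    # On shared beats, check whether the interval between the parts is consonant.
    pitch_a, pitch_b = dict(events_a), dict(events_b)
    consonant = sum((abs(pitch_a[t] - pitch_b[t]) % 12) in CONSONANT for t in shared)
    if consonant / len(shared) > 0.6:
        return "linked (harmonically)"
    return "independent"


for (name_a, ev_a), (name_b, ev_b) in combinations(instruments.items(), 2):
    print(f"{name_a} <-> {name_b}: {relationship(ev_a, ev_b)}")
```

Classifications like these are what would then be plotted onto the visual language as links and oppositions between instruments.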
I was able to validate the main concepts behind the interface design and user flows, and I built a working prototype in Max/MSP to validate the technical viability of the system. I’m currently looking for research partners, funding, and ML engineers to join the research exploration and continue developing the project into a working prototype, so I can iterate on its most challenging aspects. Please get in touch if you’d like to hear more about the project, or if your interests align with the research outlined here.
XaiPient
Building a machine learning platform aimed at increasing transparency in AI systems through human-friendly explanations
I worked with the founding team to help them turn years of technical research into a tangible working product. I conducted UX research, produced prototypes, gathered insights from real users, and designed and tested the beta release.