Product Design / Native Instruments
The Preset Explorer
Navigating Sound in 3D Space
A 3D spatial navigation system for audio. Using AI dimensionality reduction to turn a flat list of filenames into a playable constellation of sound.
Beyond the List
Synthesizers traditionally force users to browse sound using language: "Bass," "Pad," "Dark," "Bright." But words are poor descriptors for timbre.
The Preset Explorer was an experiment to break this paradigm. Instead of reading a list, what if you could navigate the sound itself?
The Data (AI Collaboration)
This project was a close collaboration with Dr. Eng. Ninon Devis, an AI Researcher at Native Instruments. She trained a machine learning model on thousands of audio snippets, analyzing them across hundreds of timbral dimensions (brightness, harmonicity, attack, noise, etc.).
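One of those timbral dimensions can be illustrated with a toy feature extractor. The sketch below computes the spectral centroid, a standard brightness proxy (the magnitude-weighted mean frequency of a snippet). It is a generic illustration, not Ninon's actual feature set, and the naive DFT stands in for the FFT a real pipeline would use:

```python
import math

def spectral_centroid(samples, sample_rate):
    """Brightness proxy: magnitude-weighted mean frequency of a snippet.
    Naive DFT for clarity; a real analysis pipeline would use an FFT."""
    n = len(samples)
    weighted_sum = total_magnitude = 0.0
    for k in range(1, n // 2):                     # skip DC and Nyquist bins
        re = sum(samples[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(samples[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        magnitude = math.hypot(re, im)
        frequency = k * sample_rate / n            # bin index -> Hz
        weighted_sum += frequency * magnitude
        total_magnitude += magnitude
    return weighted_sum / total_magnitude if total_magnitude else 0.0
```

A pure 800 Hz sine analyzed at a 6,400 Hz sample rate yields a centroid of roughly 800 Hz; noisier, brighter material pushes the centroid upward.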
Using dimensionality reduction algorithms (t-SNE/UMAP), she flattened this hyper-dimensional data into a 2D/3D coordinate system.
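The shape of that operation can be sketched in pure stdlib Python: many timbral features in, one 2D coordinate per sound out. The stand-in below uses linear PCA via power iteration purely for illustration; the real pipeline used the nonlinear t-SNE/UMAP reductions mentioned above:

```python
import math
import random

def project_2d(points):
    """Reduce d-dimensional feature vectors to 2D coordinates.
    Linear PCA via power iteration -- a stdlib stand-in for the
    nonlinear t-SNE/UMAP reductions used in the real pipeline."""
    n, d = len(points), len(points[0])
    means = [sum(p[i] for p in points) / n for i in range(d)]
    X = [[p[i] - means[i] for i in range(d)] for p in points]   # center data
    cov = [[sum(row[i] * row[j] for row in X) / n for j in range(d)]
           for i in range(d)]                                   # covariance

    def normalize(v):
        s = math.sqrt(sum(x * x for x in v)) or 1.0
        return [x / s for x in v]

    def top_axis(exclude=None):
        rng = random.Random(0 if exclude is None else 1)
        v = normalize([rng.random() for _ in range(d)])
        for _ in range(100):
            if exclude:  # keep the second axis orthogonal to the first
                dot = sum(a * b for a, b in zip(v, exclude))
                v = [a - dot * b for a, b in zip(v, exclude)]
            v = normalize([sum(cov[i][j] * v[j] for j in range(d))
                           for i in range(d)])
        return v

    a1 = top_axis()
    a2 = top_axis(exclude=a1)
    return [(sum(x * w for x, w in zip(row, a1)),
             sum(x * w for x, w in zip(row, a2))) for row in X]
```

The first output axis captures the direction of greatest timbral variation, the second the next greatest, which is why similar-sounding presets land near each other on the resulting map.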
The Clustering Dilemma
Ninon experimented with training the models to group sounds into distinct clusters (e.g., forcing all "Brass" sounds together). I conducted early user tests where the clustered models actually caused confusion: some users couldn't understand why a metallic drum wasn't allowed to sit next to the bell it resembled in a system that was forcing rigid categories.
We decided to abandon strict clustering. By allowing the raw timbral data to dictate the map, we unlocked Serendipitous Adjacency. A snare drum could live next to a plucked synth if they shared the same sonic character. This shifted the UX from "Organizing" to "Exploring."
Prototyping the Galaxy
Fascinated by this data, I asked Ninon to supply the raw coordinates. Rather than wait for the C++ team to build an engine, I started building a prototype in the browser using React and Three.js.
The Physics Problem
Ninon introduced me to D3 and the concept of using magnetic pull to organize nodes. I created a prototype to simulate this attraction, making the map visually pleasing rather than a chaotic scatter plot.
However, replicating this physics simulation in JUCE (C++) was a performance killer. The synth engine already consumed most of the CPU; we couldn't afford to run a live force simulation for 2,000+ nodes in real time.
The Solution
I used my web prototype to bake the physics. We ran the simulation in the browser, found the optimal shape, and exported the static coordinates. The final JUCE implementation simply rendered these pre-calculated points. This hybrid workflow made the feature possible on consumer hardware.
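The "bake" step can be sketched as an offline relaxation: run the simulation once, then ship only the final positions. The force model and constants below are illustrative stand-ins for the D3 force simulation the browser prototype actually ran:

```python
import math

def bake_layout(coords, iters=300, repulsion=0.05, gravity=0.01, max_step=0.05):
    """Offline 'bake': relax a 2D embedding with simple forces, then
    return static coordinates a renderer can draw with no runtime physics.
    Constants are illustrative, not the prototype's actual tuning."""
    pos = [list(p) for p in coords]
    n = len(pos)
    for _ in range(iters):
        for i in range(n):
            fx = fy = 0.0
            for j in range(n):
                if i == j:
                    continue
                dx = pos[i][0] - pos[j][0]
                dy = pos[i][1] - pos[j][1]
                d2 = dx * dx + dy * dy + 1e-6
                f = repulsion / d2              # nodes push each other apart
                fx += f * dx
                fy += f * dy
            fx -= gravity * pos[i][0]           # weak pull keeps the cloud compact
            fy -= gravity * pos[i][1]
            mag = math.hypot(fx, fy)
            if mag > max_step:                  # clamp step size for stability
                fx, fy = fx / mag * max_step, fy / mag * max_step
            pos[i][0] += fx
            pos[i][1] += fy
    return [(x, y) for x, y in pos]
```

Because the loop is O(n²) per iteration, running it once offline and exporting the result is exactly what made 2,000+ nodes viable next to a CPU-hungry synth engine.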
Filtering by Character
We didn't just stop at the map. Ninon identified specific data dimensions—like "Noisiness" or "Attack Time"—that we could map to UI controls.
The team tested these abstract parameters, and we refined them into 6 dual-handle sliders. This allowed users to filter the map not by name, but by character: "Show me sounds that are Evolving and Bright." This made the map feel alive, shrinking and expanding based on sonic qualities.
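In data terms, each dual-handle slider contributes a [low, high] window over one feature, and a preset stays on the map only if every feature falls inside its window. A minimal sketch, with hypothetical preset records and feature names:

```python
def filter_by_character(presets, ranges):
    """Keep presets whose every named feature falls inside its slider's
    [low, high] window. Preset fields and feature names are hypothetical."""
    return [p for p in presets
            if all(lo <= p["features"][name] <= hi
                   for name, (lo, hi) in ranges.items())]

presets = [
    {"id": "Glass Pad",  "features": {"noisiness": 0.2, "attack": 0.9}},
    {"id": "Tight Kick", "features": {"noisiness": 0.7, "attack": 0.05}},
]
# "Evolving and clean": slow attack, low noisiness
hits = filter_by_character(presets, {"attack": (0.5, 1.0),
                                     "noisiness": (0.0, 0.4)})
```

Tightening a slider shrinks the result set, which is what makes the map visibly contract and expand as the handles move.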
From Web to Product
The biggest hurdle was getting this into the actual plugin. The Absynth codebase was 20 years old, and we lacked a UI engineer with OpenGL experience.
So, I did it.
I spent my spare time translating my JavaScript logic into JUCE (C++), using AI coding assistants to bridge the gaps in my low-level knowledge.
By building the first prototypes and a standalone Electron version, I was able to write a detailed spec for how browsing, filtering, and similarity search should work based solely on coordinate data. This work deepened my systems thinking and demystified the "black box" of modern browsing features and database efficiency.
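Similarity search over coordinate data alone reduces to nearest-neighbor lookup in the baked map space. A minimal sketch, with hypothetical preset records:

```python
import math

def most_similar(presets, target_id, k=3):
    """Return the ids of the k presets whose baked map positions lie
    closest to the target's. Preset fields here are hypothetical."""
    target = next(p["pos"] for p in presets if p["id"] == target_id)
    ranked = sorted((p for p in presets if p["id"] != target_id),
                    key=lambda p: math.dist(target, p["pos"]))
    return [p["id"] for p in ranked[:k]]
```

Because proximity in the embedding already encodes timbral similarity, "sounds like this one" needs no audio analysis at query time, only a distance sort.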
The Reception
The feature launched in Absynth 6 to a polarized response.
- The Critics: "Why do I need a map? Just give me a list and search." (Efficiency users).
- The Fans: "I found sounds I've owned for 10 years but never heard." (Exploration users).
It wasn't a universal solution, but it was a successful proof of concept: Discovery requires a different interface than Search.