Not the first Google project on the Raspberry Pi, but this one sounds better to me than most of Google AI's hyped-up "proof of concept" work.
It does include its own DAC.
As the Synthtopia write-up had it, it could be a pretty involved build. But maybe it could inspire some Pisound-based projects?
Yeah, it's a pretty tough build: you need to get the PCB made, and then there's SMT soldering to be done… so it's out of my comfort zone.
However… for me, this is actually more a demonstration that the NSynth playback engine is light enough to run on a Raspberry Pi 3 (and still have room for fancy graphics), which means it would be possible to integrate it with your own controller…
Think about it: you don't need the DAC, and you don't even really need the tactile grid. It's really just a (very pretty) x/y map, so you could do it with 2 pots. Sure, you lose the aesthetics, but it's the same functionality… so really this interface can be functionally done with about 12 pots.
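To illustrate the "two pots instead of a grid" idea: conceptually the grid position blends between four corner sounds, so two normalized pot readings are enough to set the blend. This is only a sketch of the concept (the real NSynth Super engine works with precomputed interpolations internally, and `bilinear_mix` is a made-up name), but it shows why the tactile surface isn't strictly required:

```python
# Hypothetical sketch: treat the x/y grid as a bilinear blend between four
# corner sample frames. Two pot values (normalized to 0..1) replace the grid.
# Function and argument names are illustrative, not from the real codebase.

def bilinear_mix(x, y, nw, ne, sw, se):
    """Mix four equal-length sample frames by grid position (x, y in 0..1)."""
    w_nw = (1 - x) * (1 - y)   # weight of the north-west corner sound
    w_ne = x * (1 - y)         # north-east
    w_sw = (1 - x) * y         # south-west
    w_se = x * y               # south-east
    return [w_nw * a + w_ne * b + w_sw * c + w_se * d
            for a, b, c, d in zip(nw, ne, sw, se)]

# Both pots at mid-travel give an equal blend of all four corners:
frame = bilinear_mix(0.5, 0.5, [1.0], [2.0], [3.0], [4.0])
# frame[0] == 2.5
```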
Anyway, I've a different idea for a more tactile interface, which would be super fun…
It's great that it's open source; it means I can fish out the bits of code I need to drive the NSynth engine and bin the interface code.
Anyway, I have another project to finish this week, but I might have a look at this afterwards.
This does look like it should be possible to use with Pisound and a MIDI controller. Hopefully it shouldn't take much effort to configure and remap it appropriately.
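As a rough sketch of the remapping: any MIDI controller with two spare CCs could stand in for the x/y position. The CC numbers (16, 17) and the helper below are arbitrary assumptions for illustration; on a Pisound setup this could be fed from whatever MIDI input library you prefer:

```python
# Hypothetical sketch: map two MIDI control-change messages onto an x/y
# morph position in 0..1. CC numbers 16/17 are an arbitrary choice.

def cc_to_position(control, value, position, x_cc=16, y_cc=17):
    """Update an (x, y) morph position from one CC message (value 0..127)."""
    x, y = position
    if control == x_cc:
        x = value / 127.0
    elif control == y_cc:
        y = value / 127.0
    return (x, y)

pos = (0.0, 0.0)
pos = cc_to_position(16, 64, pos)   # x knob moved to mid-travel
pos = cc_to_position(17, 127, pos)  # y knob at max
# pos == (64 / 127.0, 1.0)
```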
To me, this is the mark of a successful creative project: it gives you ideas for doing things in other ways. Maybe the plain Magenta NSynth project would have done the same in your case, but this hardware project opened my ears to something beyond the Machine Learning hype.
The X/Y grid is interesting to me. Sure, pots can be fun. But moving in two dimensions (or more) at once is viscerally fun, which is why the Pimoroni Skywriter and Leap Motion attracted me.
Speaking of which, I should probably go back to this kind of exploration, as it's been a while.
3 dimensions is even more fun
My plan is simple: I would replace the grid with my Madrona Labs Soundplane, an expressive controller.
But I'd make an NSynth variant that supports MPE and OSC input, so it could work with any expressive controller.
The only thing that wasn't entirely clear when I looked at the code is whether it works well in a polyphonic context, as I'd want to be able to morph individual notes/touches.
(The demos I've seen all appear to be monophonic, and I didn't see any evidence that the controller shown is multitouch.)
I think it's probably OK to be polyphonic, as essentially the 'player' side is really just morphing samples, so hopefully it's not too CPU-intensive.
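The per-note morphing idea boils down to giving each active touch its own morph position, rather than one global x/y for the whole instrument. A minimal sketch of that state management, assuming touches arrive as down/move/up events from an MPE or OSC source (all class and method names here are my own, not from the NSynth Super code):

```python
# Hypothetical sketch: polyphonic morphing via per-voice morph positions.
# Each touch id owns its own (x, y), so individual notes morph independently.

class Voice:
    def __init__(self, note, x, y):
        self.note = note   # MIDI note number (or touch pitch)
        self.x = x         # per-voice morph position, 0..1
        self.y = y

class PolyMorph:
    def __init__(self):
        self.voices = {}   # touch id -> Voice

    def touch_down(self, tid, note, x, y):
        self.voices[tid] = Voice(note, x, y)

    def touch_move(self, tid, x, y):
        # Only the moved touch changes position; other voices are untouched.
        if tid in self.voices:
            self.voices[tid].x, self.voices[tid].y = x, y

    def touch_up(self, tid):
        self.voices.pop(tid, None)

engine = PolyMorph()
engine.touch_down(1, 60, 0.2, 0.8)
engine.touch_down(2, 64, 0.7, 0.3)
engine.touch_move(1, 0.25, 0.75)   # morphs voice 1 only
```

Whether the real playback engine can render several morph positions at once on a Pi 3 is the open CPU question; the bookkeeping above is the cheap part.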