I brought my Pisound as my main effects and control system for my group's improv session this week. It worked like a champ, doing all of this:
USB MIDI routing so the faders on the UC44 can control the master volumes on the Digitakt
Ableton Link sync, so I’m in sync with my bandmates (and we can all control the tempo)
Beat sync’d gate & delay effects
Reverb & HP/LP sweeps
Mixing all of the above
(I’d upload a picture, but I keep getting an error when I do.)
My only nit:
The knobs stuck out too much when I packed it in my gig bag, which made me nervous that I was going to damage the Pisound. Do you think it would be possible to trim the shafts down and use lower-profile knobs?
Oh, btw, since you mention that: using MIDI through ALSA feels a bit weird in Pd, I guess mostly due to the UI. When I use OSS MIDI I get to choose the actual interface(s) that I want to use; with ALSA MIDI I only get to choose the number of interfaces… It's fine as long as the Pisound is your only MIDI interface, but as soon as you plug in a controller it gets weird. But maybe I'm missing something?
Well… remember that I want to run this headless, and that I'm bringing all the gear to a gig and plugging it in there. So I'd like it to be as foolproof as I can make it; in particular, it should work even if I plug the USB cables into the wrong ports, or power up in a different order.
With OSS, it seemed that I could only specify the MIDI ports via a hw: number. And it seemed to me that this numbering follows the USB port, not the device. So, I'd have to be sure to plug the devices into the same USB ports each time.
With ALSA, pd just had two ports of its own. Then I use aconnect to link the USB MIDI devices to those ports. The advantage here is that those devices can be specified by name, like so:
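The original listing isn't shown here, but a sketch of the by-name connections with aconnect might look like this (the client names below are guesses based on the gear mentioned; `aconnect -l` shows what your system actually reports, and "Pure Data" is Pd's usual ALSA sequencer client name):

```shell
# List all ALSA sequencer clients and ports by name
aconnect -l

# Route the controller into Pd's first ALSA MIDI input
# ("FaderFox UC44" is an assumed name - use whatever aconnect -l reports)
aconnect 'FaderFox UC44':0 'Pure Data':0

# Route Pd's ALSA MIDI output to the Pisound's DIN MIDI out
# (port numbering for Pd's in/out ports may differ on your setup)
aconnect 'Pure Data':1 'pisound':0
```

Because the connections are made by client name rather than hw: number, they survive replugging the devices into different USB ports.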
Orange box w/light is a PBMIX3, battery powered mixer. The fader/knob box is a FaderFox UC44. The red thing with the hexagons is a du-touch S, a French instrument I’m still learning to play. Black box is Elektron Digitakt.
Is there a MIDI connection from the du-touch to the Pisound?
With the set-up as above, I’m assuming that you’re routing channel 1 and 2 to aux, then to pisound, then back to channel 3 which goes straight to main. Correct?
@riodaditroppo - There is a Pd external for it: abl_link~. I packaged up the build for it for the RaspberryPi, and you can find it with Pd’s built in package finder (deken), in the Help menu, under “Find externals”. Be sure you load the armv7 version (as deken will show you all versions, even for processors that don’t match your system!)
Then, I built a Puredata patch that uses that external. It does three things:
Maps an encoder on the control surface (UC44) to tempo bump up & down. Actually, I have two encoders, one coarse and one fine. The patch also puts out the tempo to the UC44's display.
The tempo and “the one” are extracted and used in various parts of the patch - like the beat sync’d looper and stutter effects.
MIDI clock is generated, and sent out the MIDI ports so that the Digitakt is sync’d.
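As a rough sketch of the clock arithmetic involved (my illustration, not the actual Pd patch): MIDI clock runs at 24 pulses per quarter note, so the interval between successive 0xF8 clock bytes follows directly from the Link tempo:

```python
def clock_interval_s(bpm, ppqn=24):
    """Seconds between successive MIDI clock (0xF8) messages.

    MIDI beat clock is defined as 24 pulses per quarter note,
    so at a given BPM each tick fires every 60 / (bpm * ppqn) seconds.
    """
    return 60.0 / (bpm * ppqn)

print(clock_interval_s(120))  # 120 BPM -> ~0.0208 s (~20.8 ms) per tick
```

Whatever generates the clock (here, the Pd patch) just has to re-derive this interval whenever the Link tempo changes, and the Digitakt follows along.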
@RevOtus - You've got it spot on for the version of the setup in that picture. The "two trips through" the PBMIX means that I can use my headphones to check things either pre- or post-Pisound.
However, in newer versions of the rig I dispense with this, and go straight from the Pisound out to the multi-track recording mixer (which gets the whole band before we send it to the house). This way I don't spend a channel on the Pisound, and I can have four stereo audio sources mixed into it - and I can use the cue out channel to route three of them to my headphones before turning up the volume to the Pisound (and the audience).
du-touch doesn't have both audio and MIDI at the same time because… the design is flawed and the USB leaks way too much into the audio. Sadly, they don't think this is a serious problem - so I'm forced to use it either as a MIDI controller, or as a sound source, but not both. As I now have a PreenFM2, I'm using the du-touch solely as a MIDI controller.
All in all - I sure wish I could have a Pisound with four inputs… then I could process some inputs differently. Since I'm thinking of incorporating a Chapman Stick into this rig, I'd really want to process that differently than the Digitakt.
Do you ever experience dropouts/underruns? How much can the RPi/pisound handle in terms of processing? What are your buffer settings? And are you using a low-latency/rt kernel? If so, what version?
I was excited by the Pisound and purchased one, but am a bit disappointed by how easily it gets maxed out running Pyo scripts, etc. that present no problems on my 2013 Linux Mint MacBook Pro. I'm running a minimal non-GUI (headless) Raspbian on the RPi, too. I'm wondering if there is still system tuning I need to do…
Hey, Pisound is not doing any audio processing itself, so it’s up to the Raspberry Pi to keep up with your demands.
Are you using RPi 3?
As for the settings - you should find what works best for your situation; the optimal ones largely depend on the complexity of the audio processing going on. The two main things to balance are the sampling rate and the buffer size. The higher the sampling rate, the more processing power is necessary, but also the lower the latency. The smaller the buffer, the lower the latency, but the higher the chance of processing underruns.
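As a quick worked example of that tradeoff (a sketch for illustration, not Pisound-specific code): the latency one buffer contributes is just its length in frames divided by the sample rate, which is why both knobs matter:

```python
def buffer_latency_ms(frames, rate_hz):
    """Latency in milliseconds contributed by one audio buffer."""
    return 1000.0 * frames / rate_hz

print(buffer_latency_ms(128, 48000))   # ~2.67 ms per 128-frame buffer at 48 kHz
print(buffer_latency_ms(128, 96000))   # same buffer at 96 kHz -> half the latency
print(buffer_latency_ms(512, 48000))   # bigger buffer -> more latency, fewer underruns
```

Shrinking the buffer or raising the rate cuts latency, but each buffer then has to be computed in less wall-clock time, which is where underruns come from.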
Also make sure you're using one of the 48000, 96000, or 192000 sampling rates. If you're using wave samples, it would help to convert them to the sampling rate you're using, so no live resampling has to take place.
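As a sketch of that offline conversion (filenames here are placeholders), SoX can resample a 44.1 kHz sample to the session rate ahead of time:

```shell
# Resample a 44.1 kHz wav to 48 kHz so no live resampling is needed
# (filenames are placeholders for your own samples)
sox kick-44k1.wav -r 48000 kick-48k.wav
```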
Add -nogui if you don’t need the display, or when running from a button script.
The key is making sure you are processing at the same rate as the Pisound: 48kHz. If your app is connecting to the driver and sending audio at 44.1kHz, the driver eats a lot of processing power and latency converting it.
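Putting those flags together, a hypothetical headless launch line might look like this (the patch name is a placeholder; `-alsamidi` is optional and only relevant if you're routing MIDI through the ALSA sequencer as described above):

```shell
# Run Pd headless at the Pisound's native rate, with ALSA sequencer MIDI
# (main-rig.pd is a placeholder patch name)
pd -nogui -r 48000 -alsamidi main-rig.pd
```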
I'm running a stock Raspbian Stretch Lite - with minimal added packages to get an X environment when I connect in to do development, but not otherwise. Stock kernel.
With this set up - with all the effects listed in the first post, plus now an audio looper, and a MIDI note and control routing matrix - and with the GUI - I never exceed 12% CPU!
In short, you should have tons of headroom for audio.
Thanks! I am well aware of buffer factors and latency, etc., but now I wonder about upgrading to Stretch and going Lite as well. Right now, I'm on the previous version and manually stripping things down.
I'll let you know. Maybe it's just the computation-heaviness of some of my processes.