Multiple PC as DSP Offloading Idea

Hello,

I run out of DSP too quickly; perhaps many of you are in the same boat. Every time the imagination kicks in, the horsepower runs out.

I just want to toss out the idea of using multiple PCs connected via Ethernet as master and slaves for GP3 - a feature like Vienna Ensemble Pro, but integrated into GP3. This would be an amazing feature.

I know there are Waves and UAD accelerators like this, but imagine an integrated live host: GP3 able to offload any 3rd-party VST, with as many PCs as you like added for nearly infinite processing power, all in real time.

Please consider this idea, I think the world is ready for something like this!

Thanks


What system are you currently running?

I’m currently on Windows 10, an HP ProDesk with an i7-4770. Yesterday I realized how good 88.2/96 kHz sounds with Helix Native and the other plugins I had in the chain, but it sure does gobble CPU twice as fast.

44.1 kHz was chosen as the sample rate for CDs, for example, because it’s a little more than 2x the highest frequency a human can hear (approx. 20 kHz).

Higher sample rates like 96 kHz are great when capturing things into your DAW for later manipulation, because you have more data to work with, but if you’re playing live, 44.1 kHz or at most 48 kHz should be more than good.

Actually, when I build my sounds they are all in GP3, so I just take the output of GP3 into Pro Tools, which is either on a different computer or sometimes the same one. Only things that run smoothly, 100% like the original capture, get saved as one of my “sounds”. I’m still going back and forth on the higher sample rate, but I still run into CPU blocks at 44.1 kHz once I get my mojo going, lol. My current situation at 88.2/96 kHz is pretty much unusable right now, but for minimalistic patches the sound is definitely better. I do use saturation plugins, and Helix Native is an amp sim, so I would guess that’s why it sounds better up there. My main idea of tone is to have the exact same recording chain as the live chain, which is exactly why I got GP3.

Which buffer size do you use?

I’ve tested all of them. At 44.1 or 48 kHz I can do 256 samples, but I still get a little bit of latency using the Analog Heat via Overbridge, so I can’t go higher than that. I tested 88.2/96 kHz at 512 samples; any higher and I noticed the slap-delay latency thing too much. Hopefully I can go lower whenever I upgrade my machine, but even so I’m afraid there will still be limitations with the latest CPUs. This idea of using other computers as DSP accelerators was a way to reach infinity, or as someone might put it… go to 11. :smiley:
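For anyone wanting to compare these combinations, the buffer latency is just buffer size divided by sample rate. A quick sketch of the math (plain Python, nothing GP3-specific):

```python
def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int) -> float:
    """One audio buffer's worth of latency, in milliseconds."""
    return buffer_samples / sample_rate_hz * 1000.0

# The rate/buffer combinations mentioned above:
for buf, sr in [(256, 44100), (256, 48000), (512, 88200), (512, 96000)]:
    print(f"{buf} samples @ {sr / 1000:g} kHz -> {buffer_latency_ms(buf, sr):.2f} ms")
```

Note that 256 samples @ 48 kHz and 512 @ 96 kHz both come out to about 5.3 ms: doubling the sample rate lets you double the buffer without changing the feel.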

Absolutely - that’s why we have this: [screenshot of the level knob that goes to 11] - it goes +1 over anything else - it’s awesome! :slight_smile:


This makes anything louder than anything else…


Well, it’s 1 louder isn’t it…


I think what you’re describing is pretty doable with GP and existing technology. There are some caveats, though, and any time you’re doing anything on multiple machines, one of them is always “it depends.”

I haven’t done exactly what you’re describing, but I have used rtpMIDI to send MIDI data over Ethernet from PC to PC, and when I want to move audio machine to machine I usually prefer to do it over ADAT. That requires each PC to have an ADAT interface, of course. You can set ADAT up in a ring (since every device has an in and an out) and thus get as many computers as you want talking to each other (with some caveats), or get something like an RME Digiface to serve as a hub.

If you want to step up to much higher price points you have MADI and Dante solutions, plus a number of audio-over-Ethernet solutions (none that are both useful and free, as far as I’m aware).

You could certainly do a lot of this in the analog realm as well, like it was done old school.

A lot depends on how you want to set up your sounds and your ability to hear and tolerate latency.

Let’s suppose you’re just using two machines, call them M and S (master and slave). Say your MIDI (or analog instrument input) comes into M, runs into a dozen VSTis, then you take the output of that and send it over ADAT to machine S to add a bunch of effects, then take the output of that and send it back to M over ADAT for the final mix and output. This would work, but I don’t think it buys you anything in practical use. The two systems operate in series, so if you’re running 256 samples on each, the combination is going to be either 512 samples (plus a few for the ADATs) or 768 (plus a few), depending on whether that final mix is done purely in the interface (using TotalMix FX or whatever).
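To put numbers on the series case: each hop in the chain adds its own buffer, so the totals stack up like this (a sketch that ignores the handful of extra samples the ADAT converters themselves add):

```python
def series_latency_ms(buffer_sizes, sample_rate_hz):
    """Total buffering latency of machines chained in series."""
    return sum(buffer_sizes) / sample_rate_hz * 1000.0

# M -> S, with the final mix done in the interface: two 256-sample buffers
print(series_latency_ms([256, 256], 48000))       # ~10.7 ms
# M -> S -> back to M for the final mix: three buffers
print(series_latency_ms([256, 256, 256], 48000))  # 16.0 ms
```

Either way, the serial chain is two to three times the latency of a single machine at the same settings.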

To get the benefit without added latency, M and S would have to do their processing completely in parallel. So your MIDI in would get routed to your VSTs on M and immediately sent to S over rtpMIDI, or hardwired, or whatever. Then S runs all of its own VSTs, and that happens in parallel with M. You could run analog outs from M and S to a mixer, or over ADAT to whichever machine you want to be the master.

If your use case is guitar (I’m inferring from Helix) then you’d probably want an interface that can send the raw guitar signal over ADAT to the second system, or have an analog split (via mixer or DI box) sending the dry guitar to both systems at the same time.

You could rig up as many systems as you want this way, but the caveat will always remain: any part of the chain you run in series adds another buffer’s worth of latency.

Also, bear in mind that adding 256 samples @ 48 kHz is the same audible latency as moving roughly six feet further from your speakers. You can structure any multi-machine setup so that every note event hits your speakers at the same time (thus no unwelcome slapback-echo kind of effect), but you might feel the added lag as a performer (e.g., press a key and not hear the sound for 15 ms).
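That speaker-distance comparison is easy to sanity-check: sound travels roughly 343 m/s in room-temperature air, so one buffer’s worth of time maps directly to a distance (a sketch, assuming that speed of sound):

```python
SPEED_OF_SOUND_M_S = 343.0  # approx., dry air at 20 degrees C
METERS_PER_FOOT = 0.3048

def latency_as_distance_ft(buffer_samples: int, sample_rate_hz: int) -> float:
    """Distance sound travels in one buffer's worth of time, in feet."""
    seconds = buffer_samples / sample_rate_hz
    return seconds * SPEED_OF_SOUND_M_S / METERS_PER_FOOT

print(latency_as_distance_ft(256, 48000))  # ~6 ft
```

So one extra 256-sample hop is about the same as standing one step further back from your cab.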

I agree it would be cool if GP or some other software could figure out how to split VSTs across multiple machines in an optimized way, but that’s a very complex undertaking because it’s highly dependent on the signal path you set up, and real-time audio over a vanilla local area network has a lot of limitations.

I have built huge rack systems with tons of hardware doing parallel processing, in pretty much the manner you describe, but with rack units and all. Without a centralized host like GP3 it was incredibly difficult to control all of the units the way I wanted. For instance, few of those hardware pieces had dedicated VST controllers, so controlling them would have required too much extra hardware. I needed to streamline everything into an HQ center like GP3 and have a single button change everything.

My rack was controlling over 20 hardware units, and when you run a system that complex, things go wrong at many points: MIDI messages not getting through (sending 20 program changes to 20 different units can be glitchy, especially over MIDI Thru, even with a short 3-unit daisy chain), a faulty cable in the rack taking hours to locate, no EQ on every single component because there weren’t enough EQ channels to go around, non-recallable units like tube preamps, and no way to get infinite MIDI modulation on every single parameter. That last reason was why I needed GP3: anything can control anything.

The benefit of my rack system was that there was no DSP limit to stay under - what you had worked within its set boundaries - but I was limited by two things: complex creativity and speed. It is so fast to work in GP3. With the rack system I constantly had to flip pages on tiny LCD screens between units, and there was sound-quality degradation from the long serial chains I was using, not to mention the multiple AD/DA cycles when you go in and out of multiple FX processors. Rack systems are also hardwired in a specific path; there is no free rearranging, unless of course you own a Switchblade switcher, and even that is a slow process that needs a PC screen tied to a pile of rack equipment anyway.

The speed of GP3 allows for creative focus and lets me stop worrying about tone in the mix path. What I mean is that the tracking signal chain and the mixing signal chain can be united and recalled easily with GP3. All the plugin settings in a DAW session can be copied over to GP3, giving you the same tone you crafted during mixing. I’ve always wanted that finished, polished tone to play through - preamp emulations, saturation and tape plugins, complex sends and returns, compression, etc.

Going back to this master/slave idea: everything would still be visible in one single window, all control on the same screen. We wouldn’t have to stare at a different LCD, and a single preset change in GP3 would alter the whole ecosystem. Yes, latency in a serial path would be higher, but with more DSP we could raise the sample rate and lower the buffer size, resulting in higher sound quality (especially for distortion algorithms without oversampling) at practically the same latency.

We could have dedicated slave machines as DSP accelerators - some assigned to loading huge Kontakt libraries, some to loading only synths, and one for your darn bass player to play through on the same rig, maybe the drummer too if he has an electronic kit. :smiley:

I layer a lot of my guitars with synths and sample libraries. That takes tremendous processing power, and so far no machine has been able to withstand my passionate, complex wirings.

And one more thing: if the entire master/slave ecosystem were controlled from the same GP3 window, you would get something a completely parallel setup would otherwise lack - delay compensation. All signals end up at the single stereo output at the same time.

This is a dream workflow, with boundless creative potential.