Understanding Multithreading

Ah ok. Well, maybe that’s the technical reason that I don’t understand?

I’m a bit puzzled though, Paul. Are you saying that ideas aren’t welcome unless they come with a full and working programming solution? If that were the case I’d just write my own software… I’m an end user, making suggestions as to how the program could develop and how that might be useful to end users. I’m not attacking the software or the devs, I’m just throwing some ideas out there in case they are useful and trying to understand the software better in the process so that I can make better use of it for gigs. I’m really happy to be told by a dev why the idea I’m proposing isn’t possible/useful, but I haven’t seen anything on this thread that suggests that yet. Maybe I’ve missed something?

2 Likes

While multicore support is certainly on our radar, it is not the panacea that people think.

As Yogi Berra once observed, “In theory, there’s no difference between theory and practice. In practice, there is!”

There is a presumption that using twice as many cores will double processing power — but that’s generally not the case due to dependencies. I’ve explained this one a few times in the past but (I suppose) here goes again :slight_smile:

Consider a graphical editor like Photoshop, where you want to increase the brightness of every pixel by 5%.
It’s clear that this operation can be carried out on multiple pixels at the same time, and so a CPU (or more likely a GPU) can use vector operations to apply this transformation to multiple pixels at once.

However, now consider the operation where you want the brightness of each pixel to be the average of the brightness of the previous two pixels. Now, multiple cores won’t help you at all, because you have no way to know what value to apply to a specific pixel until you’ve computed the values for the previous two pixels.

In other words, in practice, almost everything you want to do will have dependencies, so you have to wait, and multiple cores often don’t help as much as people think. What’s probably more beneficial is that quite a few plugins have internal multicore support.
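The two cases above can be sketched in a few lines of plain Python (illustrative only; the pixel values are made up):

```python
# Case 1: raise the brightness of every pixel by 5%. Each output depends
# only on its own input, so the pixels can all be processed in parallel
# (vector operations, GPU, or multiple cores).
pixels = [100.0, 120.0, 80.0, 200.0]
brightened = [p * 1.05 for p in pixels]

# Case 2: each pixel becomes the average of the *previous two results*.
# Each output depends on earlier outputs, so evaluation is inherently
# sequential -- extra cores cannot speed up this recurrence.
averaged = list(pixels)
for i in range(2, len(averaged)):
    averaged[i] = (averaged[i - 1] + averaged[i - 2]) / 2.0
```

The loop in case 2 is the whole point: no iteration can start until the one before it has finished, which is exactly the kind of dependency chain that defeats naive multicore parallelism.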

This is one of those things that, like latency, is perceived to be a much bigger issue than it actually is in practice.

3 Likes

No - @pianopaul is not saying that at all. We are always interested in ideas and suggestions for new features, but I think the issue here is we see the multicore “suggestion” often (as you observed with your comment “I have seen others on the forum expressing this concern”).

It’s a seriously non-trivial issue, particularly for a graph-based system such as GP, and, more importantly, as mentioned earlier, the benefits will generally not be nearly as great as people expect.

That said, multicore support is on our list and it’s not like we don’t know how to do it – but other stuff is simply much higher priority.

1 Like

Thanks dhj. Yeah, I read your Photoshop analogy above and I get it. However, you also said that many DAWs make use of multiple cores because they have discrete channel-strip-type routings. All I was suggesting is that GP could make use of both topologies, to cater to different usage needs. I get that it’s never going to be as simple or efficient as ‘two cores, twice the power’, any more than you get to use 100% of the power on any core, but utilising more cores does mean more available power and thus more processing headroom? Just a thought, and, as always, if it’s not useful, all good.

I also totally hear you saying that better multicore support within VSTs may be a better solution, and one that’s out of your hands.

While I have your attention, could I ask another technical question? If I set up a send routing in GP (i.e. I split the signal coming from my guitar, take one line directly to the output and the other via a VST), and the VST on the parallel routing is one that adds latency, does GP latency-compensate for that (i.e. will the entire audio chain be delayed by the built-in latency value of the VST), or do the two routes manage their own latency (so my clean guitar sound goes through at system base latency, but my VST-processed sound is delayed by the amount the VST demands)?

Oh, and thanks for your second response. Yeah, I totally get that it’s not a simple thing, and I’m not suggesting you don’t know how to do what you do (the software itself speaks for your competence). I’m probably a bit behind on the general forum conversation as I’m relatively new here. Sorry if I’m repeating stuff that’s already out there.

Don’t be sorry – just enjoy GP –

1 Like

Cool thanks.

Should I ask this somewhere else? I appreciate I may be guilty of hijacking a thread.

Latency compensation makes zero sense in a real-time environment.

1 Like

So GP doesn’t do it?

I.e. in the example above, my direct routing will be unaffected by the latency introduced by the plugin on the other routing?

Again, to be clear, that’s the behaviour I want, I’m just trying to clarify that that is how GP works as I haven’t managed to find that info anywhere else.

~~Gig Performer does not delay audio along one signal path to compensate for reported VST latency along a different signal path.~~

[Edit: that appears to be incorrect.]

1 Like

I am not sure.
I used Adaptiverb, which introduced about 40 ms of latency, and my piano in a parallel path was delayed as well.

I stand corrected. I just checked this and you are correct.

It appears to calculate this separately for the Global Rackspace and the local rackspaces.

An interesting read, this thread!
I understand the multi-CPU situation now, and the logic behind it. Thanks.

I find it a very interesting idea for GP to load the “next activated” rackspace onto its own core.
Maybe that would be an intelligent step for future GP versions towards using multi-CPU possibilities?

I, for example, have many patches/rackspaces above 50% CPU load (on an M1 Mac, which is no slouch).

Have you had any problems with them?

No!
I can go up to 90% or above at a 44.1 kHz sampling rate.
I think 93% is the limit where crackles begin to come in.

At 192 kHz it is different! Some FX sound entirely different at 192 kHz, so setting things to 192 kHz (and a 4x bigger buffer) is now a new thing for me. And it brings down the latency that these FFT-based FX would introduce with high filter-band counts (even with the bigger buffer).
INA-GRM, you surely know them. Much used in my patches.

But I can’t switch rackspaces while a sound is playing, or ringing out, with these CPU-heavy patches.
So, for example, I could not record a jam and switch rackspaces as part of the jam.
Not a problem for me, but if there’s some further growth “in that direction”, that would be a very welcome addition :wink:

Hi Paul and Vindes,

OK. I’m still just as unclear on this question then. Maybe one of you can clarify for me.

I totally get, as dhj says, that latency compensation wouldn’t be useful in a live patching environment, only in a DAW, where you want to ensure that channels all align perfectly for rendering a mix.

From what you are saying, though, it sounds like any latency-inducing plugin will introduce a lag into the whole audio stream? Is that correct?

Why do I care? I use the latency tracker in Ableton to pick plugins for live processing that have zero latency, but I do have a few nice delays and reverbs that do induce latency and that would be nice to use live. If I could put them on a ‘send’ routing and know that my direct routing was unaffected, that would be great, as the additional latency really wouldn’t matter on the delayed sound or the verb; but if they are going to delay the entire audio chain, I will stick to avoiding them.

Further to that, does anyone know if the latency introduced by such plugins is cumulative? I.e., if I insert 4 plugins that each have an inbuilt 32 ms latency, do I end up with 128 ms of additional latency, or do they all just need that same 32 ms of headroom, so I only add 32 ms? I appreciate that this may somewhat depend on the plugins, and may be a question for the manufacturer, so I suppose I’m really asking how GP handles requests for additional latency from plugins.

Yes, and if you think about it, this is the only option that makes sense for live use. EXAMPLE: if you deliberately delay the sound to certain outputs with a delay plugin, you want it to be absolutely exact. This is used frequently in larger halls, where you have to delay speakers in the middle of the room so that the front and mid speakers produce a consistent sound at the back of the room.
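As a rough illustration of the delay-speaker maths (assuming a speed of sound of about 343 m/s in air; the 20 m distance is just an example figure):

```python
SPEED_OF_SOUND_M_S = 343.0  # speed of sound in air at roughly 20 degrees C

def speaker_delay_ms(distance_m: float) -> float:
    """Delay needed on a mid-hall speaker so its sound arrives in step
    with the sound travelling acoustically from the front-of-house
    speakers, for a speaker placed distance_m behind the mains."""
    return distance_m / SPEED_OF_SOUND_M_S * 1000.0

# A delay speaker 20 m behind the mains needs roughly 58 ms of delay.
print(round(speaker_delay_ms(20.0), 1))
```

If the host silently added its own compensation delay on top, this carefully calculated figure would no longer be exact, which is why a live host leaves such deliberate delays alone.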

How would a delay that uses, say, 1/8th of your beat as the delay value be properly aligned with your performance in this case if its audio could be delayed again by some arbitrary value?

Yes. Consider a delay that plays just the delayed signal (wet set to 100%). If you place 2 such delays in series, each delaying the sound by 100 ms, you will hear the first sound after 2 × 100 ms = 200 ms.
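The general rule can be sketched like this (a hedged sketch of how hosts typically total up reported plugin latency, not a statement about GP’s internals):

```python
def serial_latency(latencies_ms):
    """Plugins in series: each one waits for the previous plugin's
    (already delayed) output, so reported latencies simply add up."""
    return sum(latencies_ms)

def parallel_latency(latencies_ms):
    """Parallel branches mixed back together: with delay compensation,
    the branch reporting the largest latency sets the overall delay."""
    return max(latencies_ms)

# Four plugins reporting 32 ms each, in series: 128 ms total.
print(serial_latency([32, 32, 32, 32]))  # 128

# A dry path (0 ms) in parallel with a 32 ms plugin, compensated: 32 ms.
print(parallel_latency([0, 32]))  # 32
```

So for the earlier question: four serial plugins at 32 ms each really do add up to 128 ms; only parallel branches would let the largest single value dominate.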

This thread was created in 2019 and its title is now very misleading, so I will close it to future discussion. Please feel free to open a new topic if you have specific questions, so we don’t end up with such long threads containing material unrelated to the original topic.

Thanks.