More Multithreading Discussion

I think it depends on how many parallel instances you want to run, how much you’re going to load into them, and what the realistic stable clockspeed on the processor is. And whether money is an issue, of course.

I’m on an i7 with six cores (twelve virtual cores with hyperthreading on) that I run overclocked at 4.8GHz. I chose that processor because it’s known to overclock well to 5GHz for most people without adding any special cooling. I have no idea how well the Ryzens overclock, or how to think about something like a 16-core @ 3.8GHz vs. an 8-core at 4.8GHz.

Which i7?

It’s true that I may never use the 10 or 12 cores that even a more modest threadripper would include… although it would be comforting to think that there would always be an idle core just waiting to take up any additional load I throw at it.

Then again, I may…

Here’s my current line of thought:

Synth Bank Instance: Very high processing requirements for lush sounds, sometimes with three simultaneous players (up to 4 hands at a time playing unique patches, sometimes with an additional 4 patches ready-to-go using keyboard splits or variations). Will sometimes remain unused during a song with no synths. 4 x Stereo Outputs to mixer (one stereo channel per band member plus synth bass)

eDrum Instance: Will run either Superior Drummer or MODO drums. Triggered by Roland V-Drums. May occasionally get triggered from another member’s foot pedal or keyboard, but seldom. 8 channels to mixer

Guitars Instance(s): Will handle plugins and amp emulations for six guitar-ish inputs, but these are handled by two players… no more than three inputs active during a given song. Could split this by player, and again by instrument if necessary, into as many as 5 separate instances. Number of outputs determined by how much separation I want to allow for simplification of FOH mixing. Likely 3 x mono + 3 x stereo for 9 outputs.

Vocal Instance - Three mono voices into basic transducer and modulation effects here and there. Would separate into three instances only if necessary. Three stereo outs.

Looping Instance - Fed from four stereo busses, with one stereo output

Show Control Instance - Handles OSC communication to lighting, control of digital board via MIDI, and remapping of MIDI footpedals to functions on a song-by-song basis. No audio, unless perhaps clicks and tracks

Master Instance - Doesn’t do much… just takes my complex networked MIDI rig inputs and translates them into usable loopMIDI input groups for the use of other instances.

So 10-12 instances of GP are possible, although I may never get around to implementing all this. I wonder whether a 16-core processor might better handle OS system processes whilst still allowing 12 cores for assignment to instances.
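As a rough sanity check on the plan above, the per-instance output counts can be tallied against the mixer's channel budget. This is only a sketch: the 32-channel ceiling assumes the X32's stock USB expansion (32 in / 32 out, worth verifying for your card), and it ignores the Master instance (no audio) and any click/track outputs from Show Control.

```python
# Rough channel-budget check for the multi-instance plan above.
# Counts are taken from the post; the 32-channel limit is an
# assumption about the X32's USB interface (32 in / 32 out).

STEREO, MONO = 2, 1

instances = {
    "Synth Bank":   4 * STEREO,             # 4 stereo outs
    "eDrums":       8,                      # 8 channels
    "Guitars":      3 * MONO + 3 * STEREO,  # 3 mono + 3 stereo = 9
    "Vocals":       3 * STEREO,             # 3 stereo outs
    "Looper":       1 * STEREO,             # 1 stereo out
    "Show Control": 0,                      # no audio (ignoring clicks/tracks)
}

total = sum(instances.values())
print(f"Total mixer channels needed: {total}")      # 33
print(f"Fits a 32-channel USB pipe: {total <= 32}")  # False
```

If the tally lands just over 32, trimming one stereo pair somewhere (or submixing inside an instance) brings it back under budget.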

That’s a pretty ambitious setup.

I’m running a core i7 8700K. I’m not really pushing it to the max, so it doesn’t run hot at all. With the kind of stuff you’re talking about, I’m sure I’d have to step up the cooling to some kind of liquid cooler or something.

Another consideration is how much latency you’re willing to put up with. I use Roland V-Drums (TD-25) but I don’t use any of the onboard sounds. I just use it to trigger Superior 3. When playing like that, I find that latency matters a lot, and it becomes unpleasant once total latency (stick hit to sound) approaches 10ms. So I generally run my interface at 48kHz, 64 samples. I don’t generally run into problems with that, even when running several keyboard VSTs and guitar effects at the same time. But every now and then I will get some mystery glitching that I can’t seem to track down.
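For anyone following along, the buffer-to-latency arithmetic is simple. This is only a back-of-envelope sketch: real stick-to-sound latency also includes A/D and D/A conversion and the drum module's trigger-to-MIDI time, which it ignores.

```python
# One buffer's worth of latency in milliseconds: samples / rate.
# Total stick-to-sound latency adds A/D + D/A conversion and the
# module's trigger-to-MIDI time, not modeled here.

def buffer_ms(samples: int, rate_hz: int) -> float:
    return 1000.0 * samples / rate_hz

print(f"64 @ 48kHz:  {buffer_ms(64, 48000):.2f} ms per buffer")   # 1.33
print(f"128 @ 48kHz: {buffer_ms(128, 48000):.2f} ms per buffer")  # 2.67
```

Even doubling the 64-sample figure for an input plus an output buffer leaves plenty of headroom under the ~10ms threshold, which matches the experience described above.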

One thing that does it for sure is page faults. I have to be pretty careful with the VSTs I use. I used to run stuff like Keyscape with both a Rhodes and a Yamaha grand loaded, a 6GB kit in Superior, and who knows what in Kontakt. At 32GB that’s a no-go, even though I theoretically should have 10GB or so of free RAM. I don’t understand how Windows handles all that. All I know is that at some point I tracked my glitching to a rise in page faults, and switching to VSTs based on modeling (Pianoteq, MODO Bass, Blue3) fixed a lot of those problems (and made my gigs load 20x faster).

With the kind of ambitions you have, I wonder if you’d actually get better performance and lower cost doing it on multiple machines. The incremental cost of a 16 core processor over, say, an 8 core processor might actually be more than building an entire second 8 core machine.


Thanks for the excellent rundown. It’s useful to see how a modern processor reacts to a heavy load. Also good advice re: modeling plugins. (Sounds like I should lean toward MODO Drums instead of Superior 3.)

The cost (in dollars, weight, and/or setup time) for my rig would spike dramatically with two machines. A single machine connects via a single USB cable to my X32 mixer. A second machine would need its own D/A converter, which would be connected to the mixer inputs by a snake. That makes for a very heavy rack (or a lot of time disassembling and reassembling) and costs a lot of extra cash.

Hey Celoranta,

A piece of advice: I recommend using TightVNC:
https://www.tightvnc.com/

for your remote connection. I tried RDP and it was slick (it costs $100 to upgrade from Windows Home to Pro), but it disabled all my MIDI connections; this is a known issue with it. TightVNC is not quite as slick but more functional overall. It has multi-viewer support, and the Android app works really well. Also, let me know how your build goes. I was debating AMD vs. Intel and went Intel because I’ve read it has higher compatibility with more software, though I wanted to go AMD.

Chris.


On Mac, Mainstage has been upgraded to support multithreading.
We can easily build the same setup (buffer size, sample rate, etc.), with many plugins, in both Mainstage and GigPerformer, and we can see a big difference:
Mainstage can handle many more plugins without audio artifacts.
I use Mainstage live with MIDI Guitar 2, from JamOrigin, which controls different synthesizers.
Another strong point of Mainstage is that it correctly manages ghost MIDI notes, whereas with GP I have to trigger MIDI panic during live performance. I believe there are tricks in GP to avoid this problem, but I have not really succeeded in implementing them.
These 2 major points force me to use Mainstage, but I would love to take advantage of GP’s strong points, which Mainstage cannot match:
patch management
the user-friendliness of its programming.


Interesting: “many more plugins.”
How does your patch look in Mainstage and how does it look in GP?

Ok, let’s build a demo on a MacBook Pro 13" 2020 (Intel i7 with 32GB RAM).
The audio buffer is set to 128 samples, the sample rate to 48kHz:

I use a guitar plugged directly into a Helix from Line6, which sends the guitar signal to Mainstage or GigPerformer.

On Mainstage:

Same setup on GigPerformer:
Oops… sorry, I’m a newcomer to this forum, so only one picture is allowed… :roll_eyes: I will send it in a new message.

The Neural DSP plugin “Plini” requires a lot of CPU processing, as do the Reason Rack plugin and the MIDI Guitar 2 plugin.
On Mainstage you have a really clean sound without any artifacts.
On GigPerformer you have some…

On GigPerformer:

Looking at this topic just out of curiosity, I came across this issue with Chrome.

When opening Chrome, I can see at least 8 processes or more, but after closing Chrome, no processes remain. And no Chrome processes on startup.

Did you uncheck “Continue running background apps when Google Chrome is closed”
in Chrome under Settings → Advanced → System?
Did you also remove unnecessary programs from the Windows startup list?

Or did I miss something?

What about these other values?

screenshot_1030


This is exactly what I have for Mainstage:

Yeah, that’s what I suspected — you have an extra output buffer, “Buffer de sécurité” (safety buffer), so you’re not “really” at a 128-sample buffer size. Note also that your resultant latency is 16.9 ms.

On the other hand, if you set Gig Performer to a sample rate of 48000 (by the way, why — 44100 is typically fine) and a buffer size of 128 samples, you get a latency value of 2.7ms (double that if you are using an audio input, so 5.4 ms). So your comparisons are not really equivalent, as far as I can tell.

screenshot_1031
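The arithmetic behind those numbers is simple: latency_ms = 1000 × samples / rate, so a reported latency can also be converted back into an equivalent buffer size at a given sample rate. A rough sketch, ignoring the interface's own conversion overhead:

```python
# Convert a reported latency back to an equivalent buffer size in samples.
def samples_for_latency(ms: float, rate_hz: int) -> int:
    return round(ms * rate_hz / 1000.0)

print(samples_for_latency(16.9, 48000))  # 811 -> Mainstage's effective total
print(samples_for_latency(2.7, 48000))   # 130 -> close to a 128-sample buffer
```

In other words, the 16.9 ms Mainstage reports corresponds to roughly six times as many samples of buffering as a bare 128-sample setting, which is why the two setups were not comparable.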

To get the same latency in Gig Performer this would be the sample buffer:

Or when calculating Audio Input maybe this:

You’re right! I need to compare at equivalent latency; I have to continue my evaluation…

The 48kHz sample rate is due to the fact that the Line6 Helix sends data through the USB port at this sample rate.
I’ll come back to this discussion when I have some results on latency.

A quick trial with the same latency on both gives me the same result: no more artifacts in GigPerformer.
Nevertheless, I’ll continue to investigate this subject to be sure that I can safely use GP during a live performance. I’ll post the results in a few days, once I’ve rebuilt my full set in GigPerformer.
Thank you, Paul, for your sharp eye!


Hi @xtian82, glad to hear it is working now!
Thanks to @dhj for pointing in the right direction.

What issue did you face with ghost notes?

What is a ghost note?

If you need to use Panic often, there’s something else wrong.

It happens when you quickly switch from one variation to another with a lot of MIDI notes in progress.
For example, for “Comfortably Numb”, I have a patch with a guitar sound + an organ whose Leslie speed I have to change while I play. In this case, GP does not receive the MIDI note-off messages and the sound continues forever until you trigger the MIDI panic function. I never have this behaviour with Mainstage…
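A generic workaround for stuck notes (a hypothetical sketch, not a built-in GP feature; GP's scripting could implement something similar) is to track which notes are currently held and send matching note-offs to the old patch whenever a variation switch happens, instead of relying on a global MIDI panic:

```python
# Hypothetical sketch: track held notes so a patch/variation switch can
# release them cleanly instead of relying on a global MIDI panic.

NOTE_ON, NOTE_OFF = 0x90, 0x80

class HeldNoteTracker:
    def __init__(self):
        self.held = set()  # (channel, note) pairs currently sounding

    def feed(self, status, data1, data2):
        """Run every incoming MIDI message through the tracker."""
        kind, channel = status & 0xF0, status & 0x0F
        if kind == NOTE_ON and data2 > 0:
            self.held.add((channel, data1))
        elif kind == NOTE_OFF or (kind == NOTE_ON and data2 == 0):
            # Note-off, or the running-status "note-on, velocity 0" form.
            self.held.discard((channel, data1))

    def release_all(self):
        """Note-off messages to send to the *old* patch when switching."""
        msgs = [(NOTE_OFF | ch, note, 0) for ch, note in sorted(self.held)]
        self.held.clear()
        return msgs

tracker = HeldNoteTracker()
tracker.feed(0x90, 60, 100)   # C4 down on channel 1
tracker.feed(0x90, 64, 100)   # E4 down
tracker.feed(0x80, 64, 0)     # E4 released normally
print(tracker.release_all())  # [(128, 60, 0)] -> note-off for the stuck C4
```

Alternatively, sending CC 123 (All Notes Off) to the outgoing patch on every switch achieves a blunter version of the same thing, if the synth honours it.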