I’ve set up a “home made” multitrack solution for playing backing tracks using multiple instances of the Streaming Audio File Player, and tied them all to the global playhead.
The setup itself (mixing and muting tracks on the fly, keeping the tracks in sync, etc.) works nicely, but I’m experiencing some audio glitches while the tracks are playing. So I’m wondering what other people’s experiences with this setup are: is this an inherent issue when GP tries to sync up and play a bunch of tracks at once, or might there be something about my configuration that’s not quite right?
I’m using a total of 15 audio players, running on a Mac Mini M2 Pro with 16GB RAM.
What version of GP are you using?
What buffer size and sample rate?
What Audio Interface?
What type of Audio Files?
Can you upload a small gig file, so I could check with my Audio Files?
That can make quite a difference: compressed formats take more CPU, whereas .wav, for example, demands more I/O.
That eases buffering and caching (although it probably shifts the load to the processing side). Maybe a test with 16 different MP3 files is also needed, and/or 16 .wav files? (Of course I can’t decide what you must do.)
I have it that low to keep latency to a minimum when playing guitar and bass through GP. No glitches when playing those through my plugins.
I’ll play around with the buffer size to see if that fixes the glitching, but if I need to go above 32 samples I’ll probably look for another solution for playing tracks. At 64 samples, the latency starts getting annoying for guitars.
I don’t think a buffer size of 16 or even 32 samples is necessary.
A simple calculation:
When your amp is 1 m away, you already have a natural latency of about 3 ms (sound travels at roughly 343 m/s).
On my system, a 16-sample buffer gives a latency of 0.4 ms.
A buffer size of 128 gives a latency of 2.9 ms.
Do you really hear the difference between 0.4 ms and 2.9 ms?
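The math behind those numbers is just buffer size divided by sample rate. A quick sketch (assuming a 44.1 kHz sample rate, which is what the 0.4 ms and 2.9 ms figures above work out to):

```python
# Per-buffer latency in ms = buffer size / sample rate * 1000.
# 44.1 kHz is assumed here; at 48 kHz the numbers come out slightly lower.
SAMPLE_RATE = 44_100

def buffer_latency_ms(buffer_size: int, sample_rate: int = SAMPLE_RATE) -> float:
    return buffer_size / sample_rate * 1000

for size in (16, 32, 64, 128, 256):
    print(f"{size:>4} samples -> {buffer_latency_ms(size):.1f} ms")
```

So even jumping from 16 to 128 samples only adds about 2.5 ms of buffer latency.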
If you really need latency that low for playing guitar, you can use a second instance of GP and give it a higher buffer size.
On your main instance you can play the guitar and the 2nd instance is playing the backing tracks.
Both instances can easily be synced via enabling OSC.
No need for another solution for playing backing tracks.
Probably not, but those aren’t real world numbers.
Each plugin adds its own latency too. From experience I know that at 64 samples I generally start noticing it with my main plugins (Neural DSP and/or Helix Native), and at 128 samples it starts getting really annoying.
I’m aware of the example of sound travelling vs. latency (I’ve used it myself many times), but I think something happens when playing with headphones or in-ears: your brain seems to expect immediate sound, and any latency is extremely noticeable. When playing through monitors or a PA, it’s definitely not as much of an issue.
But the tip about a second instance is a good point, and definitely a backup strategy I can play around with. I’m still relatively new to Macs (been using them for less than a year), so I keep forgetting that different applications can run at different buffer sizes (unlike on Windows).
Unless a plugin has a dry-through function and only adds a wet signal in parallel (like a delay or reverb plugin), I don’t see how that’s possible. The latency might be low, but any plugin that processes and alters the incoming dry signal needs at least some amount of time to do its processing.
But regardless, even if that’s the case for some plugins, it’s definitely not the case with mine. The Neural DSP plugins generally add about 5 ms of latency, and I assume Helix Native is about the same. My bass rackspaces usually run through only one plugin, but my guitar setups often chain three of them, so it does add up.
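To put rough numbers on that stacking (the ~5 ms per-plugin figure is my own estimate above, not a measured value, and the sample rate is assumed):

```python
# Hypothetical illustration: per-plugin latency adds on top of buffer latency
# when plugins are chained in series. All figures here are rough estimates.
SAMPLE_RATE = 48_000  # assumed

def total_latency_ms(buffer_size: int, plugin_latencies_ms: list[float]) -> float:
    buffer_ms = buffer_size / SAMPLE_RATE * 1000
    return buffer_ms + sum(plugin_latencies_ms)

# three guitar plugins at ~5 ms each on a 64-sample buffer
print(f"{total_latency_ms(64, [5, 5, 5]):.1f} ms")
```

With three plugins the buffer size itself becomes a small part of the total, which is why the chain is what I notice first.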
Of course this is personal taste/feel/experience and I’m not telling you what to do, but I’m a guitar/bass player as well and I’m fine with everything between 128 and 256 samples (at 48 kHz). For me lower is nice, but not necessary. Using earphones, the latency is lower than with an analog setup where I’m standing 4 meters away from the speakers.
(16 samples is really giving your system hell, so kudos anyway that you can use that on a regular basis.)
(So if I’m not telling you what to do, then what am I telling you anyway? I’m not sure.)
As long as a plugin gets the job of processing each buffer done well within the time length of the buffer, it normally doesn’t need to add latency. A chain of plugins processes an incoming buffer in sequence, but still within the amount of time that the buffer size grants. If you use a buffer size of 16 samples, that time is < 1 ms, which means a chain of 10 plugins needs to process each buffer in well under 1 ms.
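That real-time constraint can be sketched like this (the per-plugin processing costs here are made up purely for illustration; only the deadline formula comes from the discussion above):

```python
# The audio callback must finish processing each buffer before the next one
# arrives, i.e. within buffer_size / sample_rate seconds.
SAMPLE_RATE = 44_100  # assumed

def buffer_deadline_ms(buffer_size: int, sample_rate: int = SAMPLE_RATE) -> float:
    # time available to process one buffer
    return buffer_size / sample_rate * 1000

def chain_meets_deadline(per_plugin_cost_ms: float, n_plugins: int,
                         buffer_size: int) -> bool:
    # assumes plugins run sequentially with a constant cost per buffer
    return per_plugin_cost_ms * n_plugins < buffer_deadline_ms(buffer_size)

# e.g. 10 plugins at a hypothetical 0.03 ms each vs. a 16-sample buffer (~0.36 ms)
print(chain_meets_deadline(0.03, 10, 16))
```

If the chain overruns that deadline, you don’t get extra latency, you get dropouts; that’s the glitching you hear at very small buffer sizes.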