I’ve been using the Babyface Pro set to a 256-sample buffer at 44.1 as recommended on this forum. However, I’ve noted response lag from the UJAM virtual bass to the point where I have to advance the chord input by nearly 1/2 second to get it to follow the chords reliably. This morning, I cut the buffer back to 96 samples and the response seems better and quicker. I haven’t had any glitching or stutters in the audio so far, and was just wondering how reliable my system may be with this new buffer size. Any thoughts?
Have you checked with the developers of that plugin to make sure that it’s designed to work in real time?
Reliability is not a function of buffer size: if your computer can’t keep up with all the plugins you need to use, you’ll get glitching.
In live performance, 256 or 128 samples should be fine for most things, but it sounds like the plugin may not be able to process input fast enough.
So does a lower sample rate improve the response time, i.e. is the response quicker? I am talking to the developer at present.
No. To improve response time you need either a faster sample rate or a smaller buffer size, but you probably shouldn’t change the former; 44.1k is generally good enough for live work.
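The arithmetic behind that advice is simple: one buffer’s worth of delay is the buffer size divided by the sample rate. A rough sketch (this ignores driver, converter, and plugin overhead, so real round-trip latency will be higher):

```python
# One-way latency of a single audio buffer, in milliseconds.
# Real-world latency adds driver, converter, and plugin overhead
# on top of this figure (assumption: we ignore those here).

def buffer_latency_ms(buffer_size: int, sample_rate: int) -> float:
    """Time in milliseconds for one audio buffer to fill."""
    return 1000.0 * buffer_size / sample_rate

for buf in (256, 128, 96):
    print(f"{buf} samples @ 44.1 kHz ~ {buffer_latency_ms(buf, 44100):.1f} ms")
# 256 samples @ 44.1 kHz ~ 5.8 ms
# 128 samples @ 44.1 kHz ~ 2.9 ms
# 96 samples @ 44.1 kHz ~ 2.2 ms
```

As the numbers show, even the difference between 256 and 96 samples is only a few milliseconds, which is why a half-second lag points at the plugin rather than the buffer.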
Personally, I run everything at 44.1k and with a buffer size of 256 — occasionally I have used 128 but the difference has never been that significant for what I do. But others have different experiences.
I just had a quick look at the website for that UJAM virtual bass. Unless I’m missing something, it seems that the plugin generates bass lines based on the chords being played. If that’s the case, there will almost certainly be some delay, because the chords have to be recognized first. Lowering the latency probably won’t help with that; the recognition system still has to “see” a sufficient amount of the chord to know what it is.
Understood. I’m still awaiting a response from the Ujam people. Thanks very much for taking the time to investigate for me.
I’ve used tools like this before, so I have some knowledge of how they work. The other thing is that if they’re using MIDI to detect chords, then they have to define a threshold, i.e. a minimum amount of time they wait to determine which notes are actually on, and that will cause a delay completely independent of your sample rate and buffer size. In a DAW environment, those tools can look ahead to see what notes are down, but that won’t work instantly in a real-time environment.
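To illustrate the threshold idea, here’s a hypothetical sketch of how a MIDI chord detector might group notes (the window length and the grouping logic are assumptions for illustration, not how UJAM actually implements it):

```python
# Hypothetical threshold-based MIDI chord grouping (illustration only).
# Notes arriving within WINDOW_MS of the first note are treated as one
# chord. The detector cannot report the chord until the window closes,
# so this delay exists no matter how small the audio buffer is.

WINDOW_MS = 30.0  # assumed collection window; real plugins will differ

def detect_chords(events):
    """events: list of (timestamp_ms, note_number), sorted by time.
    Returns a list of (report_time_ms, [notes]) tuples."""
    chords = []
    current, start = [], None
    for t, note in events:
        if start is None or t - start > WINDOW_MS:
            if current:
                chords.append((start + WINDOW_MS, sorted(current)))
            current, start = [], t
        current.append(note)
    if current:
        chords.append((start + WINDOW_MS, sorted(current)))
    return chords

# A C major chord played slightly staggered, as a human would:
print(detect_chords([(0, 60), (8, 64), (15, 67)]))
# -> [(30.0, [60, 64, 67])]  -- reported 30 ms after the first note
```

The point is that the chord played at t=0 can’t be reported until t=30 ms at the earliest; playing ahead of the beat, as the original poster describes, compensates for exactly this kind of built-in wait.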