Before switching my new keyboard rig to GP, I gathered some information beforehand. One of the questions was: which sample rate should I use? I came across a report, which I unfortunately can’t find anymore, that said 48 kHz would be better. Not just the sound quality, which nobody notices anyway, but also that the computer runs more efficiently. I don’t understand this, since more data is being transferred.
Can anyone provide a well-founded opinion on what’s actually better?
I switched everything to 48 kHz back then. I even seem to remember selecting 48 kHz when installing the plugins.
I don’t think the computer cares either way, but 48K will take more CPU.
The only use I see for it is while setting up your gig file. Then, if everything works fine (CPU- and dropout-wise), revert to 44.1K and enjoy the extra headroom. Not that I think that’s very useful…
I would NOT recommend that at all - it’s guaranteed you’ll get bitten when some sample that was recorded at 48k drops by about a semitone and a half when you switch to 44.1k
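For what it’s worth, the size of that pitch drop is easy to check (a throwaway Python sketch, nothing GP-specific - playing a sample at the wrong rate is a pure speed change):

```python
import math

def pitch_shift_semitones(recorded_rate, playback_rate):
    """Semitone shift heard when audio recorded at one sample rate is
    played back at another without resampling (pure speed change)."""
    return 12 * math.log2(playback_rate / recorded_rate)

# A sample recorded at 48k, played in a session running at 44.1k:
print(round(pitch_shift_semitones(48000, 44100), 2))  # -> -1.47 (flat by ~1.5 semitones)
```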
I didn’t think about that, but I agree. (As a guitarist, samples are not the first thing that springs to mind.) But shouldn’t sample players compensate for that (at the cost of some CPU)?
I initially switched to 48kHz because I’d heard of a report that found this sampling rate superior to 44.1kHz. Only now did I wonder whether the playback speed differs. I did a test and played back samples I had recorded at 44.1kHz. They sound the same at 44.1kHz, 48kHz, and 88.2kHz. So I’ll stick with 48kHz for now and hope I come across that report again. If so, I’ll post it here.
So that was my observation, which I didn’t quite understand: the higher the sample rate, the lower the latency (at the same buffer size). I can probably just set it back to 44.1kHz, because I don’t notice the difference between 2.7ms and 2.9ms of latency while playing.
So if you have a buffer size of 128 samples and a sample rate of 44.1k, then your latency is 128 / 44100 ≈ 2.9ms.
That means the audio driver callback runs once every 2.9ms, which is about 345 times per second.
So if you increase your sample rate or decrease your buffer size, the latency goes down and the callback rate goes up - hence your computer works harder.
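That relationship is just buffer size over sample rate; a quick sketch (plain Python, nothing GP-specific):

```python
def io_latency_ms(buffer_size, sample_rate):
    """Time to fill one audio buffer, in milliseconds."""
    return 1000 * buffer_size / sample_rate

def callbacks_per_second(buffer_size, sample_rate):
    """How often the audio driver has to wake up and process a buffer."""
    return sample_rate / buffer_size

for rate in (44100, 48000):
    print(rate, round(io_latency_ms(128, rate), 2), "ms,",
          round(callbacks_per_second(128, rate), 1), "callbacks/s")
# 44100 2.9 ms, 344.5 callbacks/s
# 48000 2.67 ms, 375.0 callbacks/s
```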
I think I’m misunderstanding something. I mean the playing feel. I use in-ear monitors, and when the latency increases, I hear the note in my ear later than when I play it on the keyboard. Acoustically it’s not a problem, because the sound is generated right in the ear. And in terms of playing feel, I don’t notice a difference of 0.2ms.
My sweet spot is at 128 samples—that’s 2.7ms. At 256 samples, I do notice a delay (because I know that 256 samples are set). That is, if I didn’t know, I probably wouldn’t notice it. Therefore, I’ve set my system to 128 samples.
Yes, we know what you meant - I use 128 sample buffer as well though I’m fine with 44.1k
If you go to 48k you will have slightly less latency 128/48000 = 2.66ms though I’m not convinced that one can detect a significant difference between 2.66ms and 2.90ms
44.1kHz = 44,100 samples/sec (441 × 100, less “clean”)
While modern CPUs handle both fine, 48kHz’s cleaner math theoretically aligns better with binary computer systems
4. Video/Broadcast Standard
48kHz is the standard for video, broadcast, and most modern multimedia
Better compatibility with video sync and streaming applications
Practical Recommendation:
Check your audio interface’s native sample rate in its control panel or specifications. Use that rate in your DAW/Gig Performer to avoid conversion overhead.
For Gig Performer specifically, matching your hardware’s native rate will give you the lowest latency and best CPU efficiency. Most modern interfaces for live use default to 48kHz.
Bottom line: 48kHz is generally the better choice for modern systems unless you specifically need 44.1kHz for CD mastering workflows.
Sigh - the answer you get from AI depends totally on how you ask the question.
For example, ask ChatGPT why should you use 44.1k instead of 48k for live performance
and you’ll get a different answer!
For live performance, 44.1 kHz often makes more practical sense than 48 kHz—not because it sounds “better,” but because it optimizes latency, CPU headroom, and ecosystem compatibility under real-time constraints.
Yeah, I was just curious, as the OP mentioned he had read an article indicating as much. I was interested in what universe that would be true! I especially like the theoretical efficiency of “cleaner bit alignment.”