No problem! I’m a software developer myself, so I’m aware of this.
Here’s how I’ve configured my setup, and everything has been solid for a good while now. I have 40–50 songs in my .gig file. If any GP experts out there have advice about making this more efficient, I’m all eyes.
I’m using a 2017 MacBook Pro with 16GB RAM. All my .gig files and sample libraries are on a Samsung portable SSD (2TB, Thunderbolt 3). I use just about every plugin available; the list is long (EastWest, Kontakt, Arturia, Pianoteq, Waves, etc.).
I have 3 instances in my setup, and I use Loopback to route audio between them:
Main gig - Keyboard rig. Separate rackspace for each song, 5-10 plugins in each. I also have the Audio File Player with the backing tracks for each song, as well as Toontrack EZPlayer for sending MIDI both to my harmonizer plugin and to run lights/video. For medleys, I have rackspace variations for each song, so I can mute/bypass plugins and Audio File Player lanes not in use for the current song in the medley. Input/output is a Loopback virtual device, no audio interface. I also use IAC for sending MIDI to the other instances: one IAC port sends program changes to my instrument instance for changing rackspaces as needed, and another sends MIDI out to the harmonizer/vocoder in the instrument instance.
Instance 1 - my Instrument rig. I use an aggregate device containing two Loopback devices: the loopback from my keyboard rig, and a second one (the audio loopback) for communicating between the instrument rig and the actual audio interface. This instance takes input both from the audio interface (mic and guitar) and from my keyboard rig. This is where everything is mixed and processed. It’s got a mixer for all inputs: keys, guitar, mic, and backing tracks (mostly bass, drums, and click). I have one Antares harmonizer plugin and a vocoder plugin, a stompbox for the guitar(s), and then EQ, reverb, and delay for the vocals. Mixer outputs go to the audio loopback, on to the audio-interface instance.
Instance 2 - Audio interface. This has no plugins. I use it because depending on where I am - stage, studio, work - I use a different audio interface (RME, 6i6, 2i2). So I actually have three versions of this instance, but I only load the one I need for where I am, each with its own aggregate device: audio loopback + that audio interface. Inputs from the interface (mic/guitar) are routed out to the instrument instance via the audio loopback, and inputs from the audio loopback (sent back by the instrument instance) are routed directly out to the given audio interface.
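By the way, the program changes I send over IAC are just plain two-byte MIDI Program Change messages, nothing exotic. Here’s a minimal Python sketch (no MIDI library, just the raw bytes) of what goes down the wire; the song-to-program mapping is made up for illustration.

```python
# A MIDI Program Change is two bytes: status 0xC0 | channel, then the
# program number (0-127). Gig Performer can map incoming program changes
# to rackspace switches, which is all the instrument instance needs.

def program_change(program: int, channel: int = 0) -> bytes:
    """Return the raw two-byte MIDI Program Change message."""
    if not 0 <= program <= 127:
        raise ValueError("program must be 0-127")
    if not 0 <= channel <= 15:
        raise ValueError("channel must be 0-15")
    return bytes([0xC0 | channel, program])

# Hypothetical song-to-program mapping (illustration only).
SONG_PROGRAMS = {"Song A": 0, "Song B": 1, "Song C": 2}

if __name__ == "__main__":
    for song, pc in SONG_PROGRAMS.items():
        print(song, program_change(pc).hex())
```

In practice a MIDI library (or GP itself) builds these bytes for you; the point is just that "one IAC port for program changes" is a very thin pipe.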
Sounds complicated, but it’s actually pretty straightforward.
I use predictive loading, and set the number of rackspaces to 12, which is about the size of a set. I have my songs organized by set, so when I load a set, all those songs are loaded.
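For anyone unfamiliar with predictive loading: only a window of rackspaces around the current one is kept in memory at any time. GP’s actual heuristic is its own; this toy Python sketch just assumes the window is the current rackspace plus the next ones in setlist order, clamped at the end of the list.

```python
# Toy illustration of predictive loading with a window of N rackspaces.
# Assumption for the sketch: keep the current rackspace plus the next
# N-1 in setlist order, clamping so the window never runs off the end.

def loaded_window(current: int, total: int, window: int = 12) -> list[int]:
    """Indices of the rackspaces kept loaded for the current position."""
    start = min(current, max(0, total - window))
    return list(range(start, min(total, start + window)))

if __name__ == "__main__":
    # 40 songs, window of 12: at song 0 the first set's worth is loaded;
    # near the end the window clamps to the last 12 rackspaces.
    print(loaded_window(0, 40))
    print(loaded_window(35, 40))
```

With the window set to 12 (about one set) and songs organized by set, loading a set effectively loads all its songs.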
The only time I’m experiencing crashes these days is when I’m programming a bunch, and it’s infrequent. I was having crash problems on load for a while, but it seems that was due to Kontakt. I went into the settings and turned off multiprocessor support in Kontakt everywhere I use that plugin, and the problem seems to have disappeared.