Apple Silicon Chip Comparison

I was doing a bit of research trying to decide which new Mac to buy and ran across this video. Thought some of you might be interested.

Very illuminating to me. I wonder what the CPU usage split between efficiency and performance cores would be for GP? It was very different across the DAWs he tested in the video.


This video has a lot of great info, and I strongly recommend watching it if you’re in the market for a Mac.

I recently bought a maxed-out MacBook Pro. I wanted to make sure I had enough power for GP3 and 4. It’s always been muscle memory for me to buy the biggest machine possible (without question) to support all the plugins I want to run. I assumed that I would crush any CPU with plugins. Its predecessor was a 2014 Intel i7 (2.5 GHz quad-core, 16 GB RAM), and it was my GP4 machine with a lot of gigs on it. I pushed it to its limits; it ran hot, and I had to be conservative with my plugin settings to save resources. It was a solid machine, and GP never crashed during a gig.

Well, I can honestly say that I “over bought” this time. Its primary use will also be GP4, and it barely makes a dent on system resources. I got an M2 Max with 96 GB of memory and a 4 TB SSD. 32 GB would have been enough memory, and an M1 or M2 Pro is enough CPU. I spent $2500 for 3x the power I actually needed. I guess I could start making and editing videos to take advantage of the power, but the last thing the world needs is another YouTube channel. That said, there is a feeling of power when you open up the system specs and see “96 Gigabytes” of RAM…LOL.

I bought an M1 with 32 GB and a 2 TB SSD when the M1 MacBook Pro laptops were first released. I’m very happy with the purchase. The one time I had some crackles was with the Audio Modeling horns, and I’ve since read that others have had that problem, so it was probably the library rather than a lack of CPU.


@bartley99 that is great to hear. I generally do the same as you. I’m looking at a Mac Studio with 32 GB RAM and a 1 TB SSD.

@JonFair and @bartley99, have either of you looked at GP CPU core usage like in the video? I’m curious to know if GP is taking advantage of the efficiency cores.

I don’t believe that GP has any specific multi-core solution. It’s not like a DAW, which can process tracks independently and intentionally add latency where required to ensure the end result is synchronised.
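That “intentionally add latency” trick is usually called plugin delay compensation. A minimal sketch of the arithmetic, with invented track names and latency figures (not from any real product):

```python
# Sketch of plugin delay compensation (PDC): each track reports the latency
# its plugins introduce, and the host delays every other track so that all
# outputs line up. Track names and figures below are purely illustrative.

def delay_compensation(track_latencies):
    """Given per-track plugin latency in samples, return the extra delay
    each track needs so all tracks arrive at the mix bus simultaneously."""
    longest = max(track_latencies.values())
    return {name: longest - lat for name, lat in track_latencies.items()}

tracks = {"drums": 0, "bass": 64, "vocals": 512}  # hypothetical figures
print(delay_compensation(tracks))
# drums must wait 512 samples, bass 448, vocals 0
```

The point is that this only works because each track is an independent chain with one entry and one exit; a free routing graph has no such fixed track boundaries to compensate across.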

I believe that we can launch multiple instances of GP, each of which could use a different core. I haven’t done it myself, so I don’t know if it works, if it’s easy or difficult, or if there are unexpected complications.

Yes, this does work. It is easy to set up. It can be challenging to manage things like songs/setlists across multiple instances, but overall it depends on how you want to use them.

Are we saying that GP only uses one core per instance? If so, I would guess there’s a reason, but wow, that leaves a lot of horsepower unused. I hope I’m misunderstanding.

GP uses one core.
The reason for that is the free audio routing.
It is nearly impossible to use multiple cores with free routing without making compromises.
A DAW, which uses a strict channel-strip model, can make use of multiple cores.
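To illustrate that compromise, here is a rough sketch (invented node names, not GP’s actual engine) of how a routing graph’s dependencies determine what can run in parallel: blocks in the same “level” could go to different cores, but anything downstream has to wait for its inputs.

```python
# Group processing blocks into levels where everything in one level could
# run on a different core. Independent channel strips give wide levels;
# blocks that feed each other collapse into a serial chain.

from collections import defaultdict

def schedule_levels(edges, nodes):
    """Return lists of nodes that could be processed in parallel, in order."""
    indegree = {n: 0 for n in nodes}
    children = defaultdict(list)
    for src, dst in edges:
        children[src].append(dst)
        indegree[dst] += 1
    levels, ready = [], [n for n in nodes if indegree[n] == 0]
    while ready:
        levels.append(sorted(ready))
        nxt = []
        for n in ready:
            for c in children[n]:
                indegree[c] -= 1
                if indegree[c] == 0:
                    nxt.append(c)
        ready = nxt
    return levels

# DAW-style: three independent strips -> wide levels, easy to parallelise
strips = [("in1", "eq1"), ("in2", "eq2"), ("in3", "eq3")]
print(schedule_levels(strips, ["in1", "in2", "in3", "eq1", "eq2", "eq3"]))

# Free routing: sources feeding a shared FX into a mix -> mostly serial
chain = [("synth", "fx"), ("organ", "fx"), ("fx", "mix")]
print(schedule_levels(chain, ["synth", "organ", "fx", "mix"]))
```

In the second graph only the first level has any parallelism; everything after it runs one block at a time, which is essentially the compromise described above.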


Single-core usage was true for Intel Macs, but is it still true for the ARM-based Apple Silicon? It is a totally different architecture, after all.

If so, I think there’s not as much gain as you might expect in upgrading an Apple Silicon Mac. Because…

One of the main differences between the M2, M2 Pro, M2 Max, Ultra, etc. is just the number of cores (there are of course other differences, but for this discussion they’re largely irrelevant). I believe the speed of the cores is even the same across all the processors. They just keep adding more cores for more performance. If GP is in fact using just one core and the speed of the cores is the same, then the performance between a base M2 chip and an M2 Ultra (which is basically two Max chips crammed together) should be roughly the same. That’s insane considering the price difference and the performance gains on other tasks.

To take it further, the M2 cores are 18% faster than the M1 cores. That would mean that there wouldn’t be a huge difference between an M1 and an M2 Ultra.
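A quick back-of-envelope check of that reasoning, using the ~18% figure quoted above (real benchmarks vary by workload):

```python
# If GP is bound to one core, only single-core speed matters, not core
# count. The 18% per-generation figure is the one quoted in this thread,
# not a benchmark of mine.

def single_core_gain(base=1.0, per_gen_speedup=0.18):
    m1 = base
    m2 = m1 * (1 + per_gen_speedup)
    return m2 / m1

print(f"M1 -> M2 single-core: {single_core_gain():.2f}x")
# An M2 Ultra's extra cores add nothing to a single-core workload, so for
# GP the M1 vs M2 Ultra gap would be roughly that same ~1.18x.
```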

I’m sure any Apple Silicon machine would outperform my current Intel Mac Mini, but it’s a bummer that so many cores are going unused.

I will caveat all of this by saying I am far from an expert on this and could be completely wrong. I actually hope I am.

For live setups, I’ve found one instance to be enough. I’m only doing so much at a time, compared to a large studio mix.

I’m wondering if it might be possible for GP to use a separate core for the global vs. local rackspaces. That’s the one division in GP that might be similar to DAW channels.

I understand that different GP instances use different cores, so in a large setup, one could have an instance with the setlist, audio player, and MIDI player. You would then want a system with good virtual MIDI and audio routing. A large RME interface with TotalMix software could work well. Macs have built-in virtual MIDI.

Next, you could have an instance for MIDI drums and another for MIDI bass, if you do backing sessions that way. Add dedicated instances for MIDI keyboard libraries, guitar sims, etc.

Finally, you could have a mixing instance, including front-of-house and monitors.

Frankly, this could be very cool for one-man-bands where you want your own signature sound, given that you might have only a few drum and bass sounds, etc. The Main instance would just call up whatever you need for each song.

Currently, I make a new rackspace for each song, but this can be a lot of work if I get a new drum library and want to update everything. On the other hand, with a “drum instance”, I could just update a small number of rackspaces for drums, and I’d be good to go.

Maybe someday, I’ll upgrade to a bigger interface and try it out. For now, I have everything working from one instance on an M1 MBP, so I’ll avoid complicating my life.

I could envision a future “Gig Performer Ultimate” product (at a higher price) that would add another level, including routing audio and MIDI between multiple internal instances. You could then connect your controller to many separate instruments, and then to the FOH and MON mixers. In other words, you would have many local “Wiring/Panel” boxes that interconnect and select their rackspaces and variations separately - each on a different core. You could run a whole band on one laptop. I’d pay step-up pricing for that. :slight_smile:

My old Intel Mac (4-core CPU / 16 GB RAM) is more or less unusable if I try to use an organ (B-3X) with a synth (Moog) and a piano (Grand or Rhodes) together with a few effects for each. The Mac gets hot as the sun and the fan sounds like a hairdryer!!

Too much load for one core - this will be better with a new M2 or M3. But the limitation to one core is a bottleneck, especially as all the new CPUs are focused on multi-core workloads.

For testing purposes, I have split my setup into 4 instances: Organ / Synth / StagePiano, plus one Control instance. The Control instance uses Mac IAC / direct OSC to send MIDI to the Organ / Synth / Piano instances, and Blue Cat’s Connector to send audio from those instances back to the Control instance. The Control instance handles the MIDI assignments / splits / layers and sends the prepared MIDI via IAC to the sub-instances. It then receives the audio back from the sub-instances and mixes it together, including additional effects - a bit like a super Global Rackspace.

With this setup, all 4 cores are used, and the Mac is stable and not overheating.

Another advantage is that you can select/change rackspaces within each instance without impacting other instances. It’s like using 3 separate instruments. Each instance has its own set of sounds/rackspaces, and you can combine them like modules.

The downside is that it is not a very user-friendly or foolproof setup. What is needed is better MIDI integration / a bi-directional OSC implementation to control the sub-instances from the main instance and get MIDI/OSC information back from the sub-instances to the main instance, plus a “container” approach to combine/manage the 4 instances as one object.

This type of setup, combined with a (Mac) CPU with 10 or more cores, would allow building a very flexible (jam-oriented) rig with your favoured instruments/sounds.


@Charly great ideas! However, after a little more reading on the subject, it would seem it’s not possible. Unlike Windows, macOS handles all the CPU management and doesn’t give a developer much control. In other words, there appears to be very little software developers can do to balance the CPU load or force certain tasks to use a certain amount or type of CPU. macOS simply doesn’t allow that level of control. The material I read talks about a system called “Grand Central Dispatch” that manages all of this for the OS.

@rank13 I hear what you’re saying, and I remember this being a discussion point a while ago (pre-Apple Silicon), and what you said was correct at one point. I’m just curious whether it still holds true now.

This might all be academic anyway because the new machines will still destroy my crappy Mac Mini.

I don’t think it is necessary to control the core assignment… in my case, macOS automatically balances the usage of the cores as long as you have independent GP instances.


I have not, but will take a look. I’ve made some test racks with 20+ Keyscape instances and 30+ convolution reverbs, and it didn’t even flinch. I’m now running at 64 samples. I tried 32, but the latency was almost the same (1.2 ms). I did a similar test in Ableton with the same results. I also did a stress test with both Ableton and GP simultaneously active, with no issues. The only issue has been some crackle when using Loopback to route between Ableton and GP.
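As a sanity check on those buffer sizes: one buffer of N samples at sample rate SR takes N/SR seconds to fill. Assuming 48 kHz (the post doesn’t say which rate was used):

```python
# Buffer latency in milliseconds for a given buffer size and sample rate.
# The 48 kHz rate is an assumption, not stated in the post above.

def buffer_latency_ms(samples: int, sample_rate: int = 48_000) -> float:
    return samples / sample_rate * 1000

print(f"64 samples -> {buffer_latency_ms(64):.2f} ms")   # 1.33 ms
print(f"32 samples -> {buffer_latency_ms(32):.2f} ms")   # 0.67 ms
```

Reported latency figures also include driver and converter overhead and depend on what the host counts, which may be why the 32- and 64-sample settings looked almost identical in practice.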


Attached are 2 screenshots showing the CPU usage and load distribution across the 4 CPU cores while running 4 Gig Performer instances.

The screenshots show the Main instance (which also covers the bass) and the 3 sub-instances for StagePiano / Synth / Organ.

The top panel covers the audio mix of the 4 layers within the Main instance.

The bottom panel shows the available sources from StagePiano, Synth, Organ, and Bass and their assignments to the 4 layers, as well as the keyboard mapping and splits. The StagePiano and Synth instances each provide two independently usable instrument paths; the Organ provides only one. All instruments are on, independent of whether they are assigned to a layer. The B3 is IK Multimedia, and the Doors organ is the Arturia Vox Continental.

With this load distribution, the system is usable at 384 samples, compared to a single-instance setup that was more or less unusable at 512 samples. The Mac CPU load is between 25 and 40% per core. The fan runs at a low level.

I’m thinking of going to a similar architecture with multiple instances. I have an M1 MacBook Pro and an RME BabyFace, so it’s not for performance reasons; it’s more to make my one-man-MIDI-band more modular.

Currently, I run one Local Rackspace per song and use variations and song parts for the changes. I’ve got vocal processing and mixing in the Global Rackspace.

The challenge I’ve found is in mixing everything constantly. FWIW, I’m using various studio techniques in my mixes, including dynamic EQ, side-chain compression, etc. The idea is to make it work in smaller venues with modest sound systems.

Anyway, if I decide to upgrade an instrument with my current setup, I might then need to update many songs. Or if I get a mix I really like, I might then need to apply that concept across many songs.

But with multiple instances, I might have a preferred drum and bass rackspace across 70% of my songs, including EQ and compressing the bass with the kick drum. If I upgrade the samples or the submix for that one rackspace, I avoid a lot of cloning across song after song. It makes the whole band-in-a-box more modular.

Meanwhile, I would get even more headroom for CPU usage, so I wouldn’t fear designing more complex setups.

I’m thinking that my main instance would include the MIDI and Audio players, plus the final mixer. Other instances would handle vocal processing, guitar amps/effects, keyboard samples/synthesis, and drums/bass.

But… is this a false simplification? Does the complexity of multiple instances outweigh the ability to have reusable instruments and submixes that I can use as components? Yeah, startup would be more tedious, but as long as it’s reliable, it should be invisible when performing.

A few remarks:

The setup with the 4 instances works as expected with regard to CPU load and the ability to change the rackspace selection within one instance without impacting/changing the other instances. And it is nice that you can make changes to one rackspace setup without touching all the other rackspaces.

But this is a proof of concept, and the handling/workflow is NOT FOOLPROOF!!

I play and use Gig Performer at home for my personal fun and joy. I would not use this setup in public, because it is very easy to make a mistake within the setup, and we all make mistakes under pressure. It is not foolproof.

As mentioned in another post: this type of setup will be great when a few additional features are available:

  • an easy-to-use “container” holding all the related instances
  • bi-directional OSC… e.g. to map a main widget to a SubGig widget
  • automatic mapping of the MainGig songlist and song structure to the SubGigs