Separating Variation from Song parts?

How did this become your problem? :sweat_smile:


Huh. Lots of guitarists out there are using GP, and we have not had any complaints from any of them; some are pretty high-profile performers as well.

Consider whether that "was" the standard because of the limitations of the hardware at the time.

You basically only had 4 setups you could use.

With GP you can individualize each song part as much as you like.

That’s because they’re like me: early adopters, willing to stretch pretty far to take advantage of new technology. Your average TMP/QC/Helix/AxeFX user is not like that. They come from a pretty static setup where they know, even in the dark, which pedal does what, and where placement doesn’t usually change from song to song.

Last time I tried to challenge a standard it was a pretty brutal uphill battle, and that was with people who were at least willing to make music on computers. In this case you are talking about people so steeped in tradition that they hold onto ancient, unsupported beliefs about wood having magic electrical properties; it’s not even a challenge worth taking on. They work a certain way, and good luck changing that.

We’re building a product right now that will likely bring a lot of guitar players into the world of computer-based guitar, and, as usual with that community, I will be the one who has to answer for everything, so I’m trying to get all my ducks in a row for how they would be using these things.

I hope everyone here understands, I really appreciate all the help you guys have given me. I’m really trying to pave the way into this side of the aisle for a somewhat specific community that has traditionally been pretty reluctant to deal with computers, for a wide variety of reasons. I will epic fail if I don’t have an easy way to present them with a familiar environment and workflow. Had this been any earlier year I wouldn’t even bother, except just to be able to use this stuff for myself, but because of the promise of the Helix Stadium putting your playback engineer onto your pedalboard, it really seems like this is the opportunity to bring this computer-averse community into the fold.

I have a little bit of help in that the live mixing community is beginning to accept native processing in the inserts of their consoles, in the form of Waves SuperRack, LiveProfessor, or even their DAWs, e.g. the ReaInsert feature in REAPER. There is a lot of crossover between those two communities, and the same YouTube influencers, "gurus" and such are pushing the guitar gear videos, live mixing videos and recording studio how-tos.

I’ve dealt with these people for a long time, built products for them, done many different types of setups or consulting for them, and generally coexisted with them in a mutually beneficial way. But now I’m kind of playing in their wheelhouse of live guitar, where I’m not the expert, so I am studying how they work and how they expect things to go, and trying very hard to make sure I have setups, templates and roadmaps for them to be able to confidently take a computer to a gig and play their guitar thru it.

I can sell it on the positives that their existing systems CAN’T do, but there are certainly parts of their existing workflow that are must-haves. Their pedals can’t be moving around from place to place any more than they’d put up with their wah pedal jumping from the left to the right side of their pedalboards.

I’m sure I’ll eventually get this all figured out. Sometimes it feels like I’m 99% there, and other times a bit further off, but nothing insurmountable.

And then I gotta figure out all the showcase/playback engineer features Gig Performer can do! I did try putting events in the streaming audio file player, and it actually worked more easily than I thought it would, so I’m not all that scared of that part of the journey.

Anyhow, thanks for all the help, I’m not trying to be obstinate, just realistic about what the community I am bringing expects. I’m sure I will have a billion more questions.


I’m sure something can be done with scripting.
Is your current setup to use Setlist mode, and would it make more sense for the panel label widgets to display the song part names?


Actually, quite a few high-profile FoH teams are using GP now.
E.g., https://www.youtube.com/shorts/Jv8zeVMJZqg

Gig Performer has some things going for it over SuperRack and LiveProfessor. I’m actually mixing shows tonight using REAPER for the inserts, since I’m also always doing a live recording, but maybe next week I’ll give Gig Performer a try!

How are people controlling it remotely? That’s one area where SuperRack really shines (assuming the copy protection doesn’t pop up during the show). I’m using VNC on an iPad to control REAPER; I think I could do the same thing with Gig Performer.

Setting up at Slack Key Lounge right across from Ala Moana right now, running an SQ6 and our plugins for testing.

For FoH purposes, GP has a feature to record all incoming audio channels and all outgoing audio channels.
Each time you press record (or schedule it to record), the individual wave files are stored in a timestamped folder.

This feature was added in a very early version of Gig Performer. The idea was to make it trivial for a touring band (or their engineer) to just push a button every night, and at the end of their tour they’d have a collection of wave audio files from each performance, ready to drop into a DAW for post production.


Oh damn! OK, now I just gotta figure out how to control it remotely. Are most people using VNC?

OSC should do the job.

Not for the plug-in GUIs. For transport control and moving faders I can use TouchOSC on my phone with REAPER, but mapping the plug-in parameters would be a nightmare, if even possible. As much as I want to die when I use Waves products, they really got that right, and I would hate to see the Gig Performer developers waste time trying to pull off the remote thing. These guys who are using it must be using something that reasonably emulates a desktop; I’ll try and figure out what.

It still sounds to me like you are trying to bypass the GP paradigm. In a live situation, you should have widgets to control the parameters of plugins that you would want to change during a show. All other parameters should be preset and for significantly different sounds, one uses different rackspaces.

You can control almost any aspect of GP via OSC. That includes setting/getting widget values, switching songs/parts/rackspaces/variations, turning on recording, and sending MIDI messages over OSC to actually play GP from an OSC device if desired. You can certainly control the parameters of all synth plugins directly from OSC (the synths themselves do not have to have their own OSC support).
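To make the OSC route concrete, here is a minimal sketch of what sending such a message looks like at the wire level, using only the Python standard library. The encoding follows the OSC 1.0 message format (NUL-padded address, type-tag string, big-endian arguments); the host, port, and the `/Gain1/SetValue` address are assumptions for illustration — check Gig Performer's OSC settings and documentation for the actual listening port and widget addresses.

```python
import socket
import struct

def osc_message(address: str, *args) -> bytes:
    """Encode a minimal OSC 1.0 message with int/float arguments."""
    def pad(b: bytes) -> bytes:
        # OSC strings are NUL-terminated and padded to a multiple of 4 bytes
        return b + b"\x00" * (4 - len(b) % 4)

    tags = ","
    payload = b""
    for a in args:
        if isinstance(a, float):
            tags += "f"
            payload += struct.pack(">f", a)  # 32-bit big-endian float
        elif isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)  # 32-bit big-endian int
        else:
            raise TypeError(f"unsupported OSC argument: {a!r}")
    return pad(address.encode()) + pad(tags.encode()) + payload

def send(msg: bytes, host: str = "127.0.0.1", port: int = 54344) -> None:
    """Fire-and-forget UDP send to the OSC listener (port is an assumption)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(msg, (host, port))

# Hypothetical example: set a widget whose OSC handle is "Gain1" to 0.8
# (widget name and address pattern are illustrative, not from the GP docs):
# send(osc_message("/Gain1/SetValue", 0.8))
```

In practice a library such as python-osc hides this encoding, but the point is that any OSC-capable surface (TouchOSC, a console, a custom app) can drive GP this way without GP-specific client software.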

FYI:

Yes, this is exactly what I’ve been talking about this whole thread. But the issue is, I don’t want which pedal calls up my clean sound to be moving around from song to song.

I wouldn’t want to try to map thousands of parameters onto a bunch of knobs that in no way resemble the graphical interfaces, meters, and on-screen measurements that make up the better thought-out plugin GUIs; you give up tons of instant information and usability that way. For a simple example, consider a dynamic EQ where you want to see the RTA and the amount of gain reduction being applied in real time.

I put a little explanation here of the three most common ways I see people using multi fx units that I am trying to translate into the Gig Performer paradigm. Hopefully it gives some insight into why I’m asking the questions I’m asking.

I’m hoping the grid selector rank13 is making can help with this problem, but it would still be nice if we could divorce these things from each other. Having parts and variations linked with an unbreakable bond in Setlist mode brings up a bunch of nightmare scenarios.