A question for those of you who are making recordings with a studio DAW (Reaper, Cubase, whatever) together with band mates, where everyone makes their own contributions to the project offline in their home studio. Then you have jam sessions and rehearsals together to improve the ideas and song sketches, where you need to play your sounds live. Then eventually you go to a professional studio for recording and mastering on one side, and play live gigs with the project on the other.
Now the question:
Do you have all your plug-ins in the DAW, as would be the usual way for recording? And only later, when the songs are ready and live shows are coming, do you reproduce the DAW sounds in GP for live keyboard performance?
Or are you using GP in the first place, with rackspaces built for live shows in mind, linked to the DAW by virtual MIDI and audio ports or by GP Relayer? Basically playing live with the DAW in the background, recording yourself (MIDI and/or audio) and multitracking the rest of the band?
Having been a live keyboardist for the last three decades, I have only limited studio experience, from 20 years ago, using Logic and hardware synths in the studio at that time, shortly before I went purely VST and laptop.
One of my new projects might be more on the recording side as initially described.
For live shows I have 20 years of experience bringing my VSTs on stage for live playing with my keyboard controllers, the last five of them with GP. But DAW and studio recording is a new workflow for me, and I need to decide which workflow to go with, soon.
I would think the live setup would be different enough that I would tend to keep them separate.
As you are putting together the DAW recording setup, I would also separately set up GP (using many of the same plugins) around the same time (perhaps for practicing, etc.).
But, in the DAW, I would suspect there will be additional “overdubbing” (which may require another set of hands), different processing, and perhaps some other bells and whistles. (Alternatively, if you use a click track, you may use a recording for parts of a live performance while you are playing other parts live.)
So, I would think, the two setups would be similar, but different enough that you would set them up separately.
My $.02 based on very little real experience. But, I figured I’d throw it out there.
I would strongly encourage you to not do it this way. You should think of your use of Gig Performer in a recording situation the same way you would think about using a hardware synth in a recording situation.
Imagine you play a traditional instrument (piano, Hammond, Moog synth, Kronos workstation, and maybe some effects pedals) and you show up at a recording studio where there’s a recording engineer. You will be totally responsible for creating the sounds you want using your gear, and the engineer will simply record the results of what you play. The sound design (e.g., your piano, plugged into a phaser and a reverb, say) is your sound. The only processing that will be done (and should be done) by the recording engineer will be minor tweaks (EQ, compression, panning, etc.) so that what was recorded fits properly in the mix.
Now replace that piano, Hammond, Moog, Kronos, and effects pedals with a controller and GP. You still design your sound in GP, and the results are recorded by the engineer, who might then apply the same minor tweaks to fit it in the mix.
But in that scenario, you now have a “ready for live” setup, complete with real-time pedal/sliders/button control that you can take on stage.
If you do everything in the DAW, then first of all you will have a much harder time handling the real-time stuff (e.g., in GP you can have a widget controlled by a pedal that is linked to three other widgets, each controlling a plugin parameter with different scaling values), and you will have to completely rework what was done in the DAW to use it live.
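To make that pedal example concrete, here is a rough sketch in plain Python (not GPScript; the parameter names and ranges are made up) of what one pedal driving three differently scaled parameters amounts to. GP’s widget linking gives you this mapping out of the box, whereas in a DAW you would typically rebuild it per project with automation or MIDI-mapping curves.

```python
# One pedal value (0.0 - 1.0) fans out to three parameters,
# each with its own target range. Names and ranges are hypothetical.
def scale(value, lo, hi):
    """Map a 0..1 pedal position linearly into the lo..hi range."""
    return lo + value * (hi - lo)

def on_pedal(value):
    # Each linked "widget" applies its own scaling to the same pedal value.
    return {
        "reverb_mix":     scale(value, 0.10, 0.60),
        "filter_cutoff":  scale(value, 0.25, 1.00),
        "delay_feedback": scale(value, 0.00, 0.45),
    }

print(on_pedal(0.5))
# {'reverb_mix': 0.35, 'filter_cutoff': 0.625, 'delay_feedback': 0.225}
```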
That separation of concerns (the musician performs, the engineer just records) is really valuable.
However, I’m getting involved in a project where only recording is planned, with the individual musicians working track by track at their home studios. Drums and guitars would use Android Cubasis (sigh), and me Cubase Pro, exchanging DAWproject files, with my Cubase Pro projects being the ground truth. Keyboards and sound FX would be MIDI recorded and arranged track by track. No playing live and recording audio. So the situation is different from the scenario you were supposing.
As the project is somewhat experimental, I have time to experiment as well. I’m going to use my plug-ins in Cubase directly, and also test relaying to GP with GP Relayer, keeping all plug-ins and sounds in GP. The latter option would allow me to go to the studio with the pre-produced MIDI tracks and audio sketches from the others, import them into the studio DAW, link it to my laptop running GP just as at home, and record the keyboard audio from GP.
Following this thread as a new GP 5 Pro user with the same general question (not sure if I should post a separate topic). Situation: In my current workflow, I use Ableton Live both for playing my VST instruments AND for recording all MIDI and audio sessions in real time as I jam, write songs, and/or perform, and I can edit both the MIDI instrument data and the audio tracks during a given session or anytime thereafter.
Objective: I hope to migrate from my complex Ableton Live 12 workflows and CPU-intensive templates (due to instrument plugins) to a use case that combines GP, to simplify the MIDI part, with Ableton (or any DAW; I also use Cakewalk Sonar) to record both MIDI data and audio during the session.
Current Setup / Use Case: I use Ableton Live 12 or Cakewalk Sonar in my studio, at remote spaces, and on stage for jam and songwriting sessions and live performances/gigs with multiple musicians/vocalists. I also create multi-track recordings of these sessions in Ableton or Sonar. For each DAW, I create templates that include separate tracks for each VST/plugin I use, assigned to/controlled by one of 7 MIDI controllers: 2 Novation Launchkeys, 2 Launchpads (shared with other musicians to trigger loops, backing tracks, one-shots, etc.), 1 hardware synth used both as a MIDI controller for VSTs and as a synth instrument, along with other non-keyboard controllers (foot controllers, sliders, and whatnot).
Each keyboard controller is assigned to multiple VST instruments; e.g., keyboard 1 is pianos, 2 is pads, 3 is horns/orchestral/leads. The DAW tracks with the VSTs record the keyboard MIDI events and play the VST instrument (via track monitoring), so I can edit the MIDI after the initial recording. I also send the VSTs’ audio from the digital interface to a mixer that feeds the FOH speakers and routes audio back to a separate audio track in the DAW (Ableton), using either a Zoom LiveTrak L-12 or a Soundcraft Ui24R (both are digital mixers and interfaces), thereby capturing both the MIDI (in the track with the VST) and a separate audio track. I also record the audio signals from the other musicians/vocalists going direct into the Zoom or Soundcraft mixer, routed via its built-in interface back to their respective tracks in the DAW (Ableton or Sonar).
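To summarize the routing above (simplified):

```
keyboards ──MIDI──> DAW instrument tracks (MIDI recorded, VSTs monitored)
VST audio ──interface──> Zoom L-12 / Soundcraft Ui24R mixer ──> FOH speakers
mixer (built-in interface) ──> separate DAW audio tracks (VST audio + other musicians/vocalists)
```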
I can then use the DAW to play back and/or edit the recorded session, with both the audio and the MIDI/VST tracks, then tweak the MIDI data (add FX, quantize, change velocities, and all kinds of minutiae), and do the usual fun DAW stuff with the audio signals from vocals and instruments.
So back to my question as it relates to this thread. I purchased GP so I don’t have to use a DAW to set up, program, and modify sounds in real time as I play my VST instruments in jam or songwriting collabs, during live performances, or while tracking in a recording studio. BUT I still want/need to capture the MIDI data stream for each VST instrument in a separate MIDI track in a DAW, as I am doing now with Ableton, since that’s the benefit of using MIDI controllers for VSTs when recording and editing in a DAW.
Do I simply dive into the weeds and learn all about the GP Relayer feature, and is that the simple solution to this eye-glazing question? Or should I cross-post this to a different section of the forum so I don’t hijack this thread?
Disk space is cheap, and options are good to have, especially when they’re cheap.
I record the MIDI and the output of GP into the DAW at the same time. If I’m playing guitar, I record both the raw guitar and the processed guitar to separate tracks.
The GP Relayer weeds are not very deep, so you might as well take a few minutes to learn it. You can also use something like loopMIDI to send your MIDI to both the DAW and GP at the same time. Or, if you have a keyboard with a multi-client MIDI driver, you can do that without loopMIDI.
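For what it’s worth, here is a minimal sketch of that “send to both at once” idea in Python with mido (it needs the python-rtmidi backend; the port names below are placeholders, so substitute whatever your controller and loopMIDI actually show):

```python
import mido

# Placeholder port names; run mido.get_input_names() and
# mido.get_output_names() to see the real ones on your machine.
controller = mido.open_input("Launchkey MK3 MIDI In")  # your keyboard
to_daw = mido.open_output("loopMIDI To DAW")           # loopMIDI port the DAW listens on
to_gp = mido.open_output("loopMIDI To GP")             # loopMIDI port GP listens on

# Forward every incoming message to both destinations.
for msg in controller:
    to_daw.send(msg)
    to_gp.send(msg)
```

A multi-client MIDI driver or loopMIDI on its own does the same job without any code; this just spells out what the duplication amounts to.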