In April of this year I established a practice of building several ambient generative Gig Performer rigs per week and live streaming the results on YouTube. Always starting from a blank document was my way of educating and training myself with my VST toolset. All my pieces are soft instruments, and I think I’ve achieved some fluency with the process now. I’m ready to shift into a new gear.
I’ve live streamed over 50 original pieces in this way (some more clearly reflecting my amateur level than others), resulting in over 350 hours of live generative music and soundscapes.
100% of these were live streamed on YouTube with Gig Performer and various combinations of the VSTs I use.
For the first several months I focused exclusively on using Gig Performer, deliberately ignoring the visual aspect. For the past few weeks I’ve focused on developing more sophisticated generative graphics approaches, and I’ve realized I’ll be integrating Gig Performer with that.
My latest piece is live right now as a tribute to Her late Majesty Queen Elizabeth II ahead of the UK’s state funeral for her, scheduled several hours from now. The piece clings to a Lydian mode. It may not be your cup of tea, so to speak, if you do listen, but its relevance to this topic is the exclusive use of Gig Performer to manage a great deal of MIDI control flying around in there. It’s quite a bit, and I feel like I’ve learned how to avoid a lot of the mistakes I made earlier on.
If you do check it out, many thanks. And my own thanks to David and all the developers for creating what has amounted to a tank of a vehicle for this experiment. I feel like I have subjected Gig Performer to a great deal of what turned out to be destructive testing, but Gig Performer doesn’t break. VSTs will break (and I’ve discovered legit VST bugs formally confirmed by their developers), but GP is the Energizer Bunny of VST hosts as near as I can tell.
Very nice…thanks for sharing!
Cool tunes, Kev! I really like what you’re doing. Got a good laugh out of “i kan haz lo-fi”.
I tried. Not working for me…
Thanks, Ali. That was a live stream, which I did stop after about 18 hours, but apparently it hasn’t been fully processed by YouTube yet. I’ll be starting another one later today. There are also all my 50+ other streams. But the latest one really is kind of my “opus,” in that it uses the most advanced techniques I think I’ve come up with in Gig Performer. Thanks for mentioning this.
(Edit): I’ve started a new stream in the same vein, here’s the updated link
Please make sure to update that link if necessary when the processing completes.
This is your channel: https://m.youtube.com/channel/UC2oF3fH5n1WsJJcy5TmV9mA/videos
Since when did you start using Gig Performer?
Click here to see the complete Kevin’s setup.
Hi @npudar I introduced myself to Gig Performer last fall, but really knuckled down on learning how I wanted to use it starting in about January this year. Then, when I was recovering from this virus in April, I came up with this perhaps insane plan to “train” in using it by starting out with a new blank .gig file every day, and I’ve been live streaming the results since then. I probably need treatment, I know.
Edit: the live stream I have on now will continue until approximately an hour before the Royal funeral commences for HLM, but it is generating from the same .gig file as I used yesterday.

Looks like I won’t be able to provide the live stream from yesterday. It turns out streams significantly longer than 11h55m are not editable in YouTube Studio, so they cannot be reduced in length. I could download it and re-upload it on some other platform, but the nature of this exact project probably precludes that. Also, I have no plans to stop creating new streams.

I’m starting to feel like Gig Performer has given me a way to pull together the bizarre odds and ends of my 50 years of involvement in various aspects of music and performance into a way to create custom instruments, kind of stochastic musical clocks almost, out of a blank document. And I’m still studying and training, and haven’t even begun to dip into the potential of integrating my generative visuals engine, which is a completely new aspect overall. Thanks for your interest.
OK, thanks for sharing your work
I’m listening to it right now.
@npudar it seems it won’t be possible to make that live stream from yesterday available as a YouTube video. If you really want to see and hear it, DM me and we can organize some alternative. But the one I’m running today is the same generation as yesterday. It differs, of course, because of my process.
Yet when watching, I wished the words of the title messages would stop so the images could be fully seen.
Thanks for the feedback, Hans.
Hey, just wanted to drop in and say thanks to Brett and David for hosting me today on “Backstage with Gig Performer.” What a fine experience I had doing that.
A few expansions to note:
Scripting - here goes…
My rig files… stay tuned (email me firstname.lastname@example.org)
Dynamic mixing, Mastering and processing (we didn’t have time to go into that)
Sending data from MIDI files (I only showed tempo changes)
Using Voltage Modular to generate CC sequences
Using REAKTOR to generate CC sequences
Making super-powered controller knobs and turning pads into supra instrument controls
Making your entire Windows setup completely bulletproof against disasters and fully backup/restorable. It really, really works; I know this from the hardest worst-case experience.
I’ll come back with more later! Thanks again Brett.
David told me he wants me to start a YouTube channel - tell me what you think about that idea if you like… email@example.com
Thanks @npudar I hope I added something. Unfortunately something bad appears to have happened to my internet connection about a quarter of the way through the hour, and my video turned into 2004 Nokia cell-phone quality. So maybe let me know which parts you couldn’t see, and I’ll see if I can make something for YouTube in proper resolution to explain that.
Well, every Gig Performer story is helpful, from the MIDI file and how you are using it to sync and change tempo, to automations, to stories about Macrium Reflect.
Hey @npudar thanks for watching that. Brett mentioned two GPScripts that you probably know about: a one-liner that will get MIDI from the global to a local rackspace, and another that allows using a suitable MIDI file to select rackspace/song/variation as the MIDI file plays… could you please direct me to these little gems?
(Fits reading glasses into place) thanks @npudar !
Edit: first thanks for pointing me in this direction and for writing the article. It’s perfect!
Second, after my live stream with Brett, I’ve realized I have developed some very bad, hoary old 20th-century, MIDI 1.0-inspired habits that I probably need to blow up, as if with dynamite. Fortunately I always start a session with a blank gig file.
I did buy an Atari 520ST in 1986, in part because it had MIDI support on its motherboard and in firmware. But that design is older than my oldest son now, and I think it’s time for me to move forward. Yes, I know host parameter support is the vastly superior way to talk to VSTs in VST hosts. Time to approach the scripting rabbit hole. So I need a strategy.
Many years ago I had a pair of literal Soviet era rocket scientists working for me as programmers. They had this gesture: You reach your arm over and around your head to grab the lobe of the ear on the opposing side. I get the feeling this is what I’ve been doing. Which is fine, because my initial impetus was to build something; anything; in order to forge some sort of pathway or something. If I had any hair left it would have been swinging all over the place for the last six months like any other 1970s hair farmer’s.
So how do I replace my lovingly crafted REAKTOR LFO setups generating elegant streams of CC values describing said LFOs? They’re so pretty. Using CC values this way is probably like trying to kill a mosquito by dumping a wheelbarrow full of bricks on it. Apparently that’s how I’ve rolled.
So I must now discover whether GP Script can simply translate my MIDI CC sequences into equivalent host parameter changes. Is there a less dumb way to do this? And how can I translate those into maybe 14-bit precision value changes (where appropriate per VST implementation)?
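For what it’s worth, the 14-bit part of this question is mechanical. Below is a minimal sketch of the arithmetic only, written in Python rather than GPScript, and making no claims about any Gig Performer API: it combines the conventional MIDI 1.0 MSB/LSB pair (CC n for the coarse value, CC n+32 for the fine value) into a single 14-bit number, then normalizes that to the 0.0–1.0 range that host parameters typically use.

```python
# Illustrative sketch (Python, NOT GPScript): translating a 7-bit CC pair
# into a 14-bit value and then into a normalized host parameter value.
# The MSB/LSB pairing follows the MIDI 1.0 convention (CC n + CC n+32).

def combine_14bit(msb: int, lsb: int) -> int:
    """Combine two 7-bit CC values into a single 14-bit value (0..16383)."""
    if not (0 <= msb <= 127 and 0 <= lsb <= 127):
        raise ValueError("CC values must be 7-bit (0..127)")
    return (msb << 7) | lsb

def to_host_param(value_14bit: int) -> float:
    """Normalize a 14-bit value to the 0.0..1.0 range hosts typically expect."""
    return value_14bit / 16383.0

# Example: MSB=64, LSB=0 is the conventional midpoint of a 14-bit controller.
midpoint = combine_14bit(64, 0)   # 8192
```

Whether GPScript exposes a direct way to push such a normalized value to a plugin parameter is a separate question for the documentation; the sketch only shows the translation step itself.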
This appears to be a new course for my “art,” which may bear too much resemblance to welding pontoons onto a car instead of getting a canoe, but we shall see.