Re: [LAD] Rendering Softsynth Instruments To Sample Based Instruments - Automation

From: Philipp Überbacher <hollunder@email-addr-hidden>
Date: Tue May 11 2010 - 00:06:03 EEST

Excerpts from Guru Prasad's message of 2010-05-10 22:44:23 +0200:
> Hello everyone,
>
> This is my first post on this list, so please excuse me if I'm not
> following list etiquette.

Hi Guru, welcome to the list.

> I don't think I can be called a developer, although I can get things
> done with Python scripts. I'm much more of an end user, a musician
> using Linux live - LinuxSampler being my 'bread and butter' tool.

I'm not a developer either, probably even less of one than you, but I
think your Python skills should be sufficient for this task.

> I have a specific application in mind, and I don't think there's
> anything out there that does it:
>
> I want to 'sample' a softsynth (e.g. Yoshimi) / modelled software
> instrument. In other words, I want something that automates the
> following process:
> 1. Sequentially play 88 x N notes, where N is the number of velocity layers

Sounds like a script that sends MIDI should be sufficient. You need some
timing logic, and that might be hard to get right, since different synth
settings can sustain for different amounts of time.
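
Something along these lines, for example. This is only a rough sketch:
the mido library isn't mentioned anywhere in this thread (other Python
MIDI bindings would work as well), and the port name, velocity layers
and timings are all assumptions you would have to adjust.

import time
import mido  # third-party MIDI library; one of several Python MIDI bindings

VELOCITY_LAYERS = [32, 64, 96, 127]   # N velocity layers (assumed values)
NOTE_RANGE = range(21, 109)           # 88 piano keys, MIDI notes 21..108
NOTE_LENGTH = 2.0                     # seconds to hold each note
RELEASE_TIME = 3.0                    # extra time for the release tail

# The port name is an assumption; list candidates with mido.get_output_names()
port = mido.open_output('Yoshimi:midi in')

for velocity in VELOCITY_LAYERS:
    for note in NOTE_RANGE:
        port.send(mido.Message('note_on', note=note, velocity=velocity))
        time.sleep(NOTE_LENGTH)
        port.send(mido.Message('note_off', note=note))
        time.sleep(RELEASE_TIME)   # wait for the sound to die away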

> 2. Play them through a particular preset on the softsynth/modelled
> instrument (basically just send MIDI output to the softsynth/modelled
> piano, through ALSA/JACK)

If you want to select the presets automatically, it might be hard or
even impossible, but that depends entirely on the synth (whether it
provides CLI options to load presets).
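
Two possibilities come to mind, neither of which I can confirm for
Yoshimi, so treat both as assumptions to verify: sending a MIDI program
change, or starting the synth from the script with a preset/state file
if it offers a command-line switch for that.

import subprocess
import mido  # same (assumed) MIDI library as in the earlier sketch

# Option 1: if the synth maps presets to MIDI program numbers, the preset
# can be selected over the same port used for the notes.
port = mido.open_output('Yoshimi:midi in')   # port name is an assumption
port.send(mido.Message('program_change', program=0, channel=0))

# Option 2: launch the synth with a preset/state file, *if* it has such a
# switch. The flag below is only an illustration -- check `yoshimi --help`.
synth = subprocess.Popen(['yoshimi', '--load=/path/to/preset.xmz'])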

> 3. Record each sound (corresponding to each note/velocity) as a .wav
> file (at a specified sample rate, etc)

There are plenty of CLI applications you could use for that; you just
need to find one you can start and stop from Python. And you need to
handle the file naming, of course.
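
As a hedged example, jack_capture is one such recorder that can be
started and stopped via subprocess; whether it connects to the right
ports by default and which format options you need is something to check
in its documentation. The file-name scheme below is just one way to keep
note and velocity recoverable later:

import signal
import subprocess
import time

def record_note(filename, seconds):
    # jack_capture is only one candidate recorder (options not verified
    # here); it finishes writing the file when it receives SIGINT.
    rec = subprocess.Popen(['jack_capture', filename])
    time.sleep(seconds)
    rec.send_signal(signal.SIGINT)
    rec.wait()

# Encode note number and velocity in the name so the samples can be mapped
# back onto the right keys and layers when building the .gig instrument.
record_note('sample_n%03d_v%03d.wav' % (60, 96), 5.0)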

> The sample files can then be converted into a .gig instrument sample
> using Gigedit.

Whether that can be automated, I don't know.

> The advantage of this process would be that one can now
> use (more or less) the same sounds with MUCH less CPU usage. The
> disadvantage, of course, is the loss of tweakability, but I can live
> with that, especially in live contexts.

I do understand your intent and how this might help in a live setup.

> I searched for an application that does this, and the closest I got to
> was this feature in Renoise:
> http://tutorials.renoise.com/wiki/Render_or_Freeze_Plugin_Instruments_to_Samples
>
> But I don't use Renoise live. Which brings me to my question(s):
> 1. Am I right about there not being any application that does this? (I
> hope I'm not!)
> 2. If yes, can someone point me in the direction of how I might go
> about writing a script that does this?
> 2a. What Python modules would be useful? I just came across PyAudio
> and PySndObj, and am checking them out.
>
> I'm sure there are *quite* a few people who would find such a script
> very useful, and I intend to make it available ASAP. My programming
> skills are limited, though, so any help in this regard would be
> appreciated.
> Thanks for your time and consideration!
> Cheers,
>
> Guru

I can't help you with the programming itself, but I can well imagine how
this would basically work. You could at the very least save yourself the
work of playing and recording the notes manually. That shouldn't be too
hard; I'm sure you can do it ;)
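
For what it's worth, here is a rough sketch of how the pieces above
might fit together in one loop, with the same caveats as before: the
MIDI library, port name, recorder command and timings are all
assumptions, not tested against Yoshimi or LinuxSampler.

import signal
import subprocess
import time
import mido

port = mido.open_output('Yoshimi:midi in')        # assumed port name

for velocity in (32, 64, 96, 127):                # assumed velocity layers
    for note in range(21, 109):                   # 88 keys
        filename = 'sample_n%03d_v%03d.wav' % (note, velocity)
        rec = subprocess.Popen(['jack_capture', filename])
        time.sleep(0.5)                           # give the recorder time to start
        port.send(mido.Message('note_on', note=note, velocity=velocity))
        time.sleep(2.0)                           # hold the note
        port.send(mido.Message('note_off', note=note))
        time.sleep(3.0)                           # let the release tail finish
        rec.send_signal(signal.SIGINT)            # stop the recorder cleanly
        rec.wait()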

-- 
Regards,
Philipp
_______________________________________________
Linux-audio-dev mailing list
Linux-audio-dev@email-addr-hidden
http://lists.linuxaudio.org/listinfo/linux-audio-dev