On 31 May 2012 03:41, Kaspar Bumke <kaspar.bumke@email-addr-hidden> wrote:
> Hey,
>
> Just tested out drumreplacer. Seems to work well. I am going to go through
> the code and see if I can use it as a basis for a more advanced
> drumreplacer.
>
> For now I was just making an Arch Linux AUR package and was wondering
> about the license (have to put it in the package). Is it just public domain
> or did I miss something?
>
> Regards,
>
> Kaspar
>
On 31 May 2012 09:38, Marc R.J. Brevoort <mrjb@email-addr-hidden> wrote:
> Hi Kaspar,
>
>
>> Just tested out drumreplacer. Seems to work well. I am going to go through
>> the code and see if I can use it as a basis for a more advanced
>> drumreplacer.
>>
>
> At present it's pretty basic. It does peak detection by checking whether a
> wave goes over its threshold level, then (if I remember correctly)
> starts a counter to see how long it takes to get to another threshold
> level, to extract MIDI velocity. In other words, at the moment it works
> entirely in the amplitude domain. This works pretty well for multi-track
> recordings, but for existing stereo tracks, doing the work in the frequency
> domain might work better.
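>
> In rough pseudo-C++, the idea looks something like this (names, levels and
> the bail-out counter are illustrative, not the actual drumreplacer code):
>
>     #include <cmath>
>     #include <cstddef>
>
>     struct PeakDetector {
>         float threshold  = 0.1f;   // "Sens." level that arms the detector
>         float upperLevel = 0.8f;   // second level used to estimate velocity
>         bool  armed      = false;
>         int   counter    = 0;      // frames between the two crossings
>
>         // Returns a velocity estimate (1..127) when a hit is recognised, else -1.
>         int process(const float *buf, size_t nframes) {
>             for (size_t i = 0; i < nframes; ++i) {
>                 float s = std::fabs(buf[i]);
>                 if (!armed && s > threshold) {
>                     armed = true;
>                     counter = 0;
>                 } else if (armed) {
>                     ++counter;
>                     if (s > upperLevel || counter > 64) {   // 64 = arbitrary give-up point
>                         armed = false;
>                         int vel = 127 - counter;            // faster rise = louder hit
>                         return vel < 1 ? 1 : vel;
>                     }
>                 }
>             }
>             return -1;   // no complete hit in this buffer
>         }
>     };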
>
>
>> For now I was just making an Arch Linux AUR package and was wondering about
>> the license (have to put it in the package). Is it just public domain or did
>> I miss something?
>>
>
> I usually think of my packages as GPL'ish, but granted, in this case I
> probably forgot to explicitly mention a licensing scheme, which means
> at the moment it's officially under copyright law. Obviously far more
> restrictive than I intended.
>
> I have a slant towards GPL as this will help guarantee that the
> source code is going to remain accessible to the public to tinker with.
> So as far as I'm concerned you can release it as GPL and keep this
> email as evidence that I've given you written permission to do that.
> Adding the generic LICENSE.txt file to the package should suffice.
>
> Good luck. If you need any help understanding the code, let me know. I'll do
> my best (though it's 3 years back by now!)
>
> Best,
> Marc
>
On 31 May 2012 14:43, Kaspar Bumke <kaspar.bumke@email-addr-hidden> wrote:
> Hi Marc,
>
>
>> I have a slant towards GPL as this will help guarantee that the
>> source code is going to remain accessible to the public to tinker with.
>> So as far as I'm concerned you can release it as GPL and keep this
>> email as evidence that I've given you written permission to do that.
>> Adding the generic LICENSE.txt file to the package should suffice.
>>
>>
> Cool, I marked it as GPL which means GPLv2 or later. Are you OK with that?
> Common licenses are available by default so they don't need to be in the
> package.
>
> I needed to add a stdlib.h include to src/lib/convertlib.h to make it
> compile with gcc 4.7 by the way.
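>
> For anyone else who runs into this, the change is just an extra line near the
> top of the header (gcc 4.7 stopped pulling stdlib.h in through other system
> headers):
>
>     #include <stdlib.h>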
>
>> Good luck. If you need any help understanding the code, let me know. I'll do
>> my best (though it's 3 years back by now!)
>
>
> I have started looking through the code. The FLTK stuff is a bit confusing
> to me, so I think I will start out by trying to extract the JACK process and
> plug that into the simple command-line JACK client. I want to make an
> OSC-controlled back-end separate from the GUI, so that one day maybe I could
> put it in an embedded system to make an open-source drum brain! I can see
> that you started out with frontend and backend directories, but it looks like
> you ended up putting everything in the frontend.
>
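> For reference, the bare-bones command-line JACK client I'd start from looks
> roughly like this (client and port names are placeholders, not anything taken
> from drumreplacer):
>
>     // build with: g++ client.cc -ljack
>     #include <jack/jack.h>
>     #include <cstdio>
>     #include <unistd.h>
>
>     static jack_port_t *in_port = nullptr;
>
>     static int process(jack_nframes_t nframes, void *)
>     {
>         // Audio for this cycle; the detection/trigger code would go here.
>         float *in = static_cast<float *>(jack_port_get_buffer(in_port, nframes));
>         (void)in;
>         return 0;
>     }
>
>     int main()
>     {
>         jack_client_t *client = jack_client_open("drumtrigger", JackNullOption, nullptr);
>         if (!client) { std::fprintf(stderr, "jack server not running?\n"); return 1; }
>
>         in_port = jack_port_register(client, "in", JACK_DEFAULT_AUDIO_TYPE,
>                                      JackPortIsInput, 0);
>         jack_set_process_callback(client, process, nullptr);
>
>         if (jack_activate(client)) { std::fprintf(stderr, "cannot activate\n"); return 1; }
>         for (;;) sleep(1);   // run until killed
>     }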
>
>> At present it's pretty basic. It does peak detection by checking whether a
>> wave goes over its threshold level, then (if I remember correctly)
>> starts a counter to see how long it takes to get to another threshold
>> level, to extract MIDI velocity. In other words, at the moment it works
>> entirely in the amplitude domain. This works pretty well for multi-track
>> recordings, but for existing stereo tracks, doing the work in the frequency
>> domain might work better.
>>
>
> Ah OK, cool. I am really glad I found your project, as this is a basic
> enough example for me to start understanding how to simply get audio
> in and MIDI out; once I have that down I will look at the signal processing
> in more detail, do FFTs etc. and maybe a neural network... haha, who knows.
> You wouldn't happen to have any recommended reading on the theory behind
> drum replacement techniques? Any tips on what you changed from 0.1 to 0.2
> that made that crucial difference in performance?
>
> Kind Regards,
>
> Kaspar
>
On 31 May 2012 22:21, Marc R.J. Brevoort <mrjb@email-addr-hidden> wrote:
> Hi Kaspar,
>
>
>> Cool, I marked it as GPL which means GPLv2 or later. Are you OK with that?
>>
> Absolutely.
>
>
>> I needed to add a stdlib.h include to src/lib/convertlib.h to make it
>> compile with gcc 4.7 by the way.
>>
>
> I guess it's already starting to show its age a bit ;)
>
>
>> I have started looking through the code. The FLTK stuff is a bit confusing
>> to me, so I think I will start out by trying to extract the JACK process and
>> plug that into the simple command-line JACK client. I want to make an
>> OSC-controlled back-end separate from the GUI, so that one day maybe I could
>> put it in an embedded system to make an open-source drum brain! I can see
>> that you started out with frontend and backend directories, but it looks like
>> you ended up putting everything in the frontend.
>>
>
> Correct, I based the empty application on another one I did earlier but
> couldn't be bothered to do proper frontend-backend separation in its early
> stages. That's probably a mistake.
>
>
>> Ah OK, cool. I am really glad I found your project, as this is a basic
>> enough example for me to start understanding how to simply get audio
>> in and MIDI out
>>
>
> You'll also notice that it's JACK audio in, but rather than JACK MIDI out,
> it's ALSA MIDI out instead. Reason is that when I wrote drumreplacer, JACK
> MIDI was basically unsupported, even by JACK tools such as qjackctl. Things
> probably have changed at least somewhat, three years down the line.
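>
> The ALSA side of that boils down to roughly the following (client and port
> names here are illustrative, not what drumreplacer actually registers; link
> with -lasound):
>
>     #include <alsa/asoundlib.h>
>
>     static snd_seq_t *seq = nullptr;
>     static int out_port = -1;
>
>     void midi_init()
>     {
>         snd_seq_open(&seq, "default", SND_SEQ_OPEN_OUTPUT, 0);
>         snd_seq_set_client_name(seq, "drumtrigger");
>         out_port = snd_seq_create_simple_port(seq, "midi out",
>             SND_SEQ_PORT_CAP_READ | SND_SEQ_PORT_CAP_SUBS_READ,
>             SND_SEQ_PORT_TYPE_MIDI_GENERIC | SND_SEQ_PORT_TYPE_APPLICATION);
>     }
>
>     void send_note_on(int channel, int note, int velocity)
>     {
>         snd_seq_event_t ev;
>         snd_seq_ev_clear(&ev);
>         snd_seq_ev_set_source(&ev, out_port);
>         snd_seq_ev_set_subs(&ev);       // deliver to all subscribers
>         snd_seq_ev_set_direct(&ev);     // bypass the sequencer queue
>         snd_seq_ev_set_noteon(&ev, channel, note, velocity);
>         snd_seq_event_output(seq, &ev);
>         snd_seq_drain_output(seq);
>     }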
>
> Some explanation on how things work - do with it as you please.
>
> As you've noticed, most of the magic happens in
> UserInterface::jack_process().
>
> The peak scanning: as an input wave is being scanned faster than realtime,
> one can't simply send out MIDI the moment a peak is detected. Peaks at
> the end of a wave snippet would be triggered too quickly compared to
> peaks at the start of it. Instead, the output has to be scheduled so that
> the latency between wave peak and MIDI trigger remains constant. (This is
> why the MIDI triggering is done through Fl::add_timeout() instead of just
> playing the note.)
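>
> Roughly, the scheduling looks like this (BASE_LATENCY and all names are
> illustrative, not the actual drumreplacer code). A peak found at frame
> 'offset' inside the current buffer gets a proportionally longer delay, so
> the total peak-to-MIDI latency works out the same for every hit:
>
>     #include <FL/Fl.H>
>     #include <jack/jack.h>
>
>     const double BASE_LATENCY = 0.010;   // 10 ms, chosen arbitrarily here
>
>     struct PendingNote { int channel, note, velocity; };
>
>     void fire_note(void *data)           // Fl::add_timeout callback
>     {
>         PendingNote *p = static_cast<PendingNote *>(data);
>         // send_note_on(p->channel, p->note, p->velocity);  // e.g. the ALSA sketch above
>         delete p;
>     }
>
>     void schedule_note(jack_nframes_t offset, jack_nframes_t sample_rate,
>                        int channel, int note, int velocity)
>     {
>         double delay = BASE_LATENCY + double(offset) / sample_rate;
>         // NB: FLTK isn't generally thread-safe, so calling this from the JACK
>         // thread would strictly need Fl::lock()/Fl::awake() around it.
>         Fl::add_timeout(delay, fire_note, new PendingNote{channel, note, velocity});
>     }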
>
> If I recall correctly, the previous, 1-track version of drumreplacer
> didn't schedule notes at all, and therefore, to keep beats steady,
> it needed to use very small buffers and always had to trigger its
> notes immediately. Obviously this would result in poor performance.
>
> More about the note triggering: one thing to keep in mind is that
> Fl::add_timeout() is really a user-interface function. The delay can be
> specified down to the millisecond, but in reality it's not quite that
> accurate. Ideally, instead of a user-interface timeout one would use
> a sample-accurate MIDI note scheduler.
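>
> A sample-accurate version would look more like the sketch below: stamp each
> note with an absolute frame time and emit it from the JACK process callback
> at the exact frame offset (this uses JACK MIDI rather than ALSA MIDI, and a
> real implementation would replace std::deque with a lock-free ring buffer;
> everything here is illustrative):
>
>     #include <jack/jack.h>
>     #include <jack/midiport.h>
>     #include <deque>
>
>     struct QueuedNote { jack_nframes_t when; unsigned char msg[3]; };
>
>     static std::deque<QueuedNote> pending;   // filled by the peak detector
>     static jack_port_t *midi_out = nullptr;  // registered with JACK_DEFAULT_MIDI_TYPE
>
>     // registered via jack_set_process_callback(client, process, client);
>     static int process(jack_nframes_t nframes, void *arg)
>     {
>         jack_client_t *client = static_cast<jack_client_t *>(arg);
>         void *buf = jack_port_get_buffer(midi_out, nframes);
>         jack_midi_clear_buffer(buf);
>
>         jack_nframes_t start = jack_last_frame_time(client);  // first frame of this cycle
>         while (!pending.empty() && pending.front().when < start + nframes) {
>             QueuedNote &n = pending.front();
>             jack_nframes_t offset = n.when > start ? n.when - start : 0;
>             jack_midi_event_write(buf, offset, n.msg, 3);      // sample-accurate placement
>             pending.pop_front();
>         }
>         return 0;
>     }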
>
> User interface controls:
>
> - Sens. is the sensitivity, the level at which the note will trigger.
> - Res is the resolution: how often a note is allowed to retrigger.
>   Related to the variable "retrig" in the code.
> - Mid ch and note are the MIDI channel and note number being output
>   when the audio surpasses the threshold.
> - Min veloc and Max veloc are the minimum and maximum velocity settings at
>   which the note is played. If a note only reaches the threshold value, it
>   will be played at minimum velocity; if it reaches full scale (+1 or -1 as
>   float), it will be played at the maximum given velocity (roughly the
>   mapping sketched below).
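>
> The min/max mapping has roughly this shape (illustrative, not the actual
> code): a peak that just reaches the threshold plays at Min veloc, a
> full-scale peak plays at Max veloc, and everything in between is
> interpolated linearly.
>
>     int map_velocity(float peak, float threshold, int min_veloc, int max_veloc)
>     {
>         if (peak <= threshold) return min_veloc;
>         float t = (peak - threshold) / (1.0f - threshold);   // 0..1
>         if (t > 1.0f) t = 1.0f;
>         return min_veloc + int(t * (max_veloc - min_veloc) + 0.5f);
>     }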
>
>
> One clever bit is that when a note is scheduled for playback, the actual
> velocity at which it will be played isn't known yet, because that is only
> determined *after* the threshold level is reached.
> The note playback is scheduled, and at that time the velocity value is set
> to "minimum velocity".
>
> But meanwhile, before the MIDI is sent out, the wave scanning proceeds,
> and may update the velocity to the highest peak found, until either the
> resolution knob's timeout occurs (after which peak detection is reset) or
> the MIDI note schedule demands that the note be played immediately, in
> which case it is played at the highest velocity found between
> triggering the note and the actual playback event.
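>
> In sketch form (illustrative names, not the actual drumreplacer data
> structures; send_note_on as in the ALSA sketch above):
>
>     struct ScheduledHit {
>         int  channel  = 9;     // 0-based; channel 10 is the GM drum channel
>         int  note     = 38;    // e.g. snare
>         int  velocity = 1;     // set to Min veloc when the hit is scheduled
>         bool fired    = false;
>     };
>
>     // Called from the wave-scanning loop for every velocity estimate produced
>     // between scheduling the hit and sending it out; keeps the loudest value.
>     inline void update_pending(ScheduledHit &hit, int new_velocity)
>     {
>         if (!hit.fired && new_velocity > hit.velocity)
>             hit.velocity = new_velocity;
>     }
>
>     // When the scheduled timeout finally fires, the note goes out at whatever
>     // velocity has accumulated by then:
>     //     send_note_on(hit.channel, hit.note, hit.velocity);
>     //     hit.fired = true;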
>
> Hope this helps!
>
> Best,
> Marc
I copied the whole conversation to LAD just because I like lurking on there
and reading technical discussions I don't fully understand. Hope that's all
right with you.
> You'll also notice that it's JACK audio in, but rather than JACK MIDI out,
> it's ALSA MIDI out instead. Reason is that when I wrote drumreplacer, JACK
> MIDI was basically unsupported, even by JACK tools such as qjackctl. Things
> probably have changed at least somewhat, three years down the line.
>
That's weird, because it appears as a JACK MIDI program/device in JACK
(qjackctl), which I noticed right away because my MIDI-USB devices appear
under ALSA, and it is a (minor) annoyance to deal with the two different
MIDI systems and get them to connect. Most things still seem to default to
ALSA these days, for better or for worse (maybe someone from LAD could chime
in here with their wealth of knowledge -- accurate to 1/1000 of a second, it
says in your comments; is that still the case? Is that bad?).
> Some explanation on how things work - do with it as you please.
>
Thanks so much for the explanation. I may hit you up with more questions as I
dive deeper into the code.
Kind Regards,
Kaspar
_______________________________________________
Linux-audio-dev mailing list
Linux-audio-dev@email-addr-hidden
http://lists.linuxaudio.org/listinfo/linux-audio-dev