Re: [linux-audio-dev] peakfiles and EDL's

Subject: Re: [linux-audio-dev] peakfiles and EDL's
From: Paul Davis (pbd_AT_Op.Net)
Date: Sun Feb 25 2001 - 00:48:32 EET


>not the previous peak files. At least I assume this because it does take
>a bit of time. Of course I have never "Inserted" sound data. I replace
>existing stuff but musically I have never had a need to insert sound.

   "Hmm, lets see. You know what would be cool ? Lets put that door
    slam sound effect right in between the brake squeal and the
    scream. Wait, what's that ? I have to create a new track just
    for that effect ? But I already have a track labelled "FX", why
    can't I just overlay the door slam right there ?"

I was "Insert" insert to include "Overlay".
 
>instances. Do you think that resampling the peak would hurt? It is hard

Well, it depends. The model at this point is that each raw audio file
has one corresponding peakfile. Since each raw audio file can be
used many times, with different potential "peak phase" choices, that
would mean generating (potentially) N-1 different peakfiles (where N
is the number of samples per peak). This seems like a bad idea.
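
To make the "peak phase" point concrete, here's a rough sketch (invented
names, not Ardour's actual peakfile code) of min/max peak computation
where a phase offset decides how samples are grouped into buckets; each
of the N possible offsets groups them differently, hence different peak
data:

/* A rough sketch, not Ardour code: min/max peaks over buckets of
 * "samples_per_peak" frames. "phase" is where the first sample falls
 * within a bucket; changing it regroups the samples, so the peak
 * data differs for each of the N possible phases.
 */
#include <algorithm>
#include <cstddef>
#include <vector>

struct Peak {
    float min;
    float max;
};

std::vector<Peak>
compute_peaks (const float *audio, size_t nframes,
               size_t samples_per_peak, size_t phase)
{
    std::vector<Peak> peaks;
    size_t i = 0;

    while (i < nframes) {
        /* the first bucket may be short if phase != 0 */
        size_t room = samples_per_peak - ((i + phase) % samples_per_peak);
        size_t end  = std::min (nframes, i + room);

        Peak p = { audio[i], audio[i] };
        for (size_t j = i; j < end; ++j) {
            p.min = std::min (p.min, audio[j]);
            p.max = std::max (p.max, audio[j]);
        }
        peaks.push_back (p);
        i = end;
    }
    return peaks;
}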

>to try and compare Samplitude at this because here is where they are so
>different. They have HDR files that are just raw audio, and these you can
>punch in and edit just like a continuous sound file.

I don't believe this. I would imagine that they operate much like
ProTools, in that every contiguous recording period (i.e. from
whatever starts the recording to whatever ends it) for each track ends
up in its own audio file. Inserting 10 minutes of audio in the middle
of an existing audio file that lasts 25 minutes is phenomenally
expensive. And as you note below, Samplitude lets you work with
"objects" defined within each "file", supporting my belief about what
it's up to.

Ardour does the same thing - we use raw audio files as well. But since
you can edit each track so that it contains references to (parts of)
the same file, the 1:1 correspondence between a peakfile and the raw
audio is not really in place.
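
To illustrate what I mean (again with made-up structures, not the real
Ardour ones): several regions can reference slices of one file, and only
a slice that starts on a bucket boundary of the file-wide peakfile sees
the same grouping of samples that the peakfile recorded:

/* Another sketch with invented names: regions referencing slices of
 * one file. The file's single peakfile is bucketed from frame 0, so
 * a region whose offset is not a multiple of samples_per_peak cannot
 * reuse those buckets directly -- the phase problem again.
 */
#include <cstddef>
#include <string>

struct AudioFile {
    std::string path;
    size_t      nframes;
};

struct Region {
    const AudioFile *file;
    size_t           offset;   /* first frame used within the file */
    size_t           length;   /* frames used by this region */
};

bool
can_reuse_file_peaks (const Region &r, size_t samples_per_peak)
{
    /* only a region starting on a bucket boundary sees the same
     * grouping of samples as the file-wide peakfile */
    return (r.offset % samples_per_peak) == 0;
}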

>Then, you take these
>HDR files and select ranges to become objects, and these are what you
>align in the editor. A lot of this is automatic. Now that I think about
>it, if I rerecord in an HDR, then when it's done it recalculates the peak
>info, and I think it does it all, since in this case,
>things are different at the boundaries.

Right, Ardour is doing this too: we compute the peakfiles whenever we
stop the transport after recording any amount of audio. But as I said,
I don't think this helps with the peak phase alignment problem.

However, in thinking about the "objects" (Samplitude) / "regions"
(ProTools) model, I can see how they get this to work: you compute
the peak data for each object/region, and it *never* changes because
the object/region is atomic (you can't subdivide it without creating a
new object/region). Hmm. There are other reasons for moving toward
this model, but this might be the killer.
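
A sketch of that idea, with invented names: the peaks become a per-region
cache filled once when the region is created, and the region's
immutability is what keeps the cache valid forever:

/* Sketch only: an atomic region computes its peaks once, at creation.
 * Splitting produces *new* regions (each with its own cache), so the
 * stored peaks never need to be recomputed or re-phased.
 */
#include <algorithm>
#include <cstddef>
#include <vector>

struct Peak { float min; float max; };

class AtomicRegion {
public:
    AtomicRegion (const float *audio, size_t nframes, size_t samples_per_peak) {
        for (size_t i = 0; i < nframes; i += samples_per_peak) {
            size_t end = std::min (nframes, i + samples_per_peak);
            Peak p = { audio[i], audio[i] };
            for (size_t j = i; j < end; ++j) {
                p.min = std::min (p.min, audio[j]);
                p.max = std::max (p.max, audio[j]);
            }
            _peaks.push_back (p);
        }
    }

    /* the region is immutable, so this data is valid forever */
    const std::vector<Peak> &peaks () const { return _peaks; }

private:
    std::vector<Peak> _peaks;
};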

--p

