[linux-audio-dev] Re: is this, or this not, RTP


Subject: [linux-audio-dev] Re: is this, or this not, RTP
From: John Lazzaro (lazzaro_AT_eecs.berkeley.edu)
Date: Thu Feb 26 2004 - 20:09:15 EET


On Feb 26, 2004, at 8:31 AM, linux-audio-dev-request_AT_music.columbia.edu
wrote:

>> It appears to be ethernet, not IP-based.
>
> Ah. Silly me.
>
>> http://www.mkpe.com/articles/2001/Networks_2001/networks_2001.htm
>
> "so, where are the products?" (referring to RTP and RTCP). Silly
> author. Expecting people to productize[sic] publically owned protocols.

>
>
> ------------------------------
>
> Message: 4
> Date: Thu, 26 Feb 2004 14:58:38 +0000
> From: Steve Harris <S.W.Harris_AT_ecs.soton.ac.uk>
> Subject: Re: [linux-audio-dev] is this, or this not, RTP?
> To: "The Linux Audio Developers' Mailing List"
> <linux-audio-dev_AT_music.columbia.edu>
> Message-ID: <20040226145838.GD14603_AT_login.ecs.soton.ac.uk>
> Content-Type: text/plain; charset=us-ascii
>
> On Thu, Feb 26, 2004 at 09:33:25AM -0500, Paul Davis wrote:
>>> It appears to be ethernet, not IP-based.
>>
>> Ah. Silly me.
>>
>>> http://www.mkpe.com/articles/2001/Networks_2001/networks_2001.htm
>>
>> "so, where are the products?" (referring to RTP and RTCP). Silly
>> author. Expecting people to productize[sic] publically owned
>> protocols.
>
> I guess he was talking about routers and the like; in 2001 there
> weren't any that I know of, but now there are plenty, e.g.:
> http://www.cisco.com/en/US/products/hw/routers/ps221/
>
> - Steve
>
>
> ------------------------------
>
> Message: 5
> Date: Thu, 26 Feb 2004 10:00:07 -0500
> From: Paul Davis <paul_AT_linuxaudiosystems.com>
> Subject: Re: [linux-audio-dev] [ANN] Website
> To: "The Linux Audio Developers' Mailing List"
> <linux-audio-dev_AT_music.columbia.edu>
> Message-ID: <200402261500.i1QF07P6008166_AT_dhin.linuxaudiosystems.com>
>
>>> * two ogg files recorded using the current alpha version of Aeolus,
>>> the pipe organ synthesiser I'll present at the second LAD
>>> conference
>>> in Karlsruhe.
>>
>> these files sound incredible. i can't wait to hear your presentation
>> on aeolus!
>
> absolutely! i'm calling off my talk so we can spend extra time
> listening to Aeolus perform. incredible work!
>
> --p
>
>
> ------------------------------
>
> Message: 6
> Date: Thu, 26 Feb 2004 16:03:11 +0100
> From: kloschi <linux-lists_AT_web.de>
> Subject: [linux-audio-dev] Announcement: Camp Music 2004, techlab,
> call for participants/exhibitors
> To: A list for linux audio developers
> <linux-audio-dev_AT_music.columbia.edu>
> Message-ID: <20040226160311.3b2060ea_AT_magrathea.funk.subsignal.org>
> Content-Type: text/plain; charset=US-ASCII
>
> Hi list,
>
> we are doing Camp Music, a festival for electronic music, on
> 14.05.-15.05.2004 in the 'Motorpark' near Magdeburg [Germany].
> the event includes a regular festival with 2 stages and additionally
> a forum for musicians and independent labels [labelforum] and also
> the so-called 'techlab'.
> techlab will be a place where music software and hardware producers
> show their [new] stuff, give workshops and so on. So far companies like
> Native Instruments and Emagic and some more commercial
> manufacturers/developers will show up.
> I would also like to invite free software developers with showable
> products to present and give workshops, to introduce musicians and
> producers to the possibilities of the free software world.
>
> please contact me _soon_ at kloschi_AT_seekers-event.com or
> kloschi_AT_subsignal.org. Feel free to forward this announcement.
>
> kloschi
>
>
> ------------------------------
>
> Message: 7
> Date: Thu, 26 Feb 2004 10:04:47 -0500
> From: Paul Davis <paul_AT_linuxaudiosystems.com>
> Subject: Re: [linux-audio-dev] Freeze?
> To: "The Linux Audio Developers' Mailing List"
> <linux-audio-dev_AT_music.columbia.edu>
> Message-ID: <200402261504.i1QF4lwA008182_AT_dhin.linuxaudiosystems.com>
>
>> It was, however, the automatic merits that I wished mainly to explore.
>> Freezing has its merits, but it requires that you dedicate some brain
>> cycles to deciding when and where you wish to freeze/unfreeze
>> something. I could sure use those cycles for keeping creativity
>> flowing.
>
> remember: freezing has no merits at all unless you need to save CPU
> cycles.
>
> the problem is that in a typical DAW session, you can potentially
> freeze most tracks most of the time. so how can you tell what the user
> wants frozen and what they don't? More importantly, freezing consumes
> significant disk resources. Can you afford to do this without it being
> initiated by the user? A typical 24 track Ardour session might consume
> 4-18GB of audio. Freezing all or most of it will double that disk
> consumption (and it's not exactly what you would call quick, either :)
>
> thus, either you have s/w smart enough to figure out what to freeze
> and then do it, which does not come without certain costs, or the user
> has to play a significant role in the process.
>
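The 4-18GB figure above is easy to reproduce with a back-of-envelope
calculation. The sketch below assumes 32-bit float samples at 48 kHz and
an hour of material on every track; both are assumptions of mine, not
figures from the message above:

    /* back-of-envelope only; the 60-minute length and the 32-bit float
       format are assumptions, not figures from the quoted message */
    #include <stdio.h>

    int main(void)
    {
        const double rate    = 48000.0;      /* samples per second per track */
        const double sample  = 4.0;          /* bytes per sample: 32-bit float */
        const double seconds = 60.0 * 60.0;  /* one hour of material */
        const int    tracks  = 24;

        double per_track = rate * sample * seconds;        /* ~0.69 GB */
        double session   = per_track * tracks;             /* ~16.6 GB */

        printf("session: %.1f GB\n", session / 1e9);
        printf("frozen : %.1f GB\n", 2.0 * session / 1e9); /* doubled  */
        return 0;
    }

Shorter material at 16-bit/44.1 kHz lands near the low end of that 4-18GB
range, and freezing all of it still doubles whichever figure you start from.
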
>> ... and I do think you can freeze a bus, but it requires that the app
>> has full
>> knowledge of the connection graph. Mmmm, I see a jack extension
>> forming ;))
>
> sorry, but i don't think so. if i have a bus that is channelling audio
> in from an external device (say, a h/w sampler), you cannot possibly
> freeze it.
>
> --p
>
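For what it's worth, the first hop of that connection-graph check is
already possible with the stock JACK API; it's only when you need to
follow the graph through other clients that an extension would be
needed. A minimal sketch, with invented client and port names:

    /* checks only the first hop of the graph; names are invented */
    #include <stdio.h>
    #include <jack/jack.h>

    /* Returns 1 if nothing physical feeds this input port directly,
       i.e. the bus *might* be freezable; returns 0 as soon as we find
       a physical capture port (e.g. the h/w sampler case above).     */
    static int input_is_freezable(jack_client_t *client, const char *name)
    {
        jack_port_t *port = jack_port_by_name(client, name);
        if (!port)
            return 0;

        const char **sources = jack_port_get_all_connections(client, port);
        if (!sources)
            return 1;                      /* nothing connected at all */

        int freezable = 1;
        for (int i = 0; sources[i]; ++i) {
            jack_port_t *src = jack_port_by_name(client, sources[i]);
            if (src && (jack_port_flags(src) & JackPortIsPhysical)) {
                freezable = 0;             /* fed live from hardware   */
                break;
            }
        }
        jack_free((void *) sources);
        return freezable;
    }

    int main(void)
    {
        jack_client_t *client =
            jack_client_open("freeze-check", JackNoStartServer, NULL);
        if (!client)
            return 1;

        /* "ardour:Bus 1/in 1" is a made-up port name */
        printf("freezable: %d\n",
               input_is_freezable(client, "ardour:Bus 1/in 1"));

        jack_client_close(client);
        return 0;
    }

This only looks one hop upstream, so a bus fed from hardware indirectly,
via some other client, would still be misjudged - which is exactly where
the jack extension idea above would come in.
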
>
> ------------------------------
>
> Message: 8
> Date: Thu, 26 Feb 2004 15:17:31 +0000
> From: Steve Harris <S.W.Harris_AT_ecs.soton.ac.uk>
> Subject: Re: [linux-audio-dev] Freeze?
> To: "The Linux Audio Developers' Mailing List"
> <linux-audio-dev_AT_music.columbia.edu>
> Message-ID: <20040226151731.GG14603_AT_login.ecs.soton.ac.uk>
> Content-Type: text/plain; charset=us-ascii
>
> On Thu, Feb 26, 2004 at 10:04:47AM -0500, Paul Davis wrote:
>> the problem is that in a typical DAW session, you can potentially
>> freeze most tracks most of the time. so how can you tell what the user
>> wants frozen and what they don't? More importantly, freezing consumes
>> significant disk resources. Can you afford to do this without it being
>> initiated by the user? A typical 24 track Ardour session might consume
>> 4-18GB of audio. Freezing all or most of it will double that disk
>> consumption (and it's not exactly what you would call quick, either :)
>
> No, but you can do it (semi-)transparently when the user presses play.
> I don't know if that would be acceptable or not, but if you imagine
> adding a few effects, auditioning, rinse, repeat, it might work out ok.
>
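To make the idea concrete, here is roughly what "freeze while the user
plays" could look like as a JACK client - a sketch only, with made-up
names and the disk-writer thread left out:

    /* sketch only: the client/port names are made up and the
       disk-writer thread that drains the ringbuffer is omitted */
    #include <unistd.h>
    #include <jack/jack.h>
    #include <jack/ringbuffer.h>

    static jack_port_t       *bus_out;      /* the bus output being frozen */
    static jack_ringbuffer_t *capture_rb;   /* drained by a disk thread    */
    static volatile int       transport_rolling = 1;

    static int process(jack_nframes_t nframes, void *arg)
    {
        jack_default_audio_sample_t *buf =
            jack_port_get_buffer(bus_out, nframes);

        /* ...the bus's normal mixing/plugin work happens here... */

        if (transport_rolling) {
            /* tap the already-computed output; if the ringbuffer is
               full the chunk is dropped and the freeze file would be
               marked incomplete - the simplest possible policy */
            jack_ringbuffer_write(capture_rb, (const char *) buf,
                                  nframes * sizeof(*buf));
        }
        return 0;
    }

    int main(void)
    {
        jack_client_t *client =
            jack_client_open("freeze-tap", JackNoStartServer, NULL);
        if (!client)
            return 1;

        bus_out = jack_port_register(client, "bus_out",
                                     JACK_DEFAULT_AUDIO_TYPE,
                                     JackPortIsOutput, 0);
        capture_rb = jack_ringbuffer_create(1 << 20);

        jack_set_process_callback(client, process, NULL);
        jack_activate(client);

        sleep(10);                  /* a real client would run forever */

        jack_client_close(client);
        jack_ringbuffer_free(capture_rb);
        return 0;
    }

It only yields a complete freeze file if the user happens to roll over
the whole session, which is the weak point Paul raises further down.
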
> I'd always go for more CPU if possible - I'm not a huge fan of multiple
> code paths :)
>
> - Steve
>
>
> ------------------------------
>
> Message: 9
> Date: Thu, 26 Feb 2004 09:06:59 -0600
> From: Benjamin Flaming <lad_AT_solobanjo.com>
> Subject: Re: [linux-audio-dev] Freeze?
> To: "The Linux Audio Developers' Mailing List"
> <linux-audio-dev_AT_music.columbia.edu>
> Message-ID: <200402260906.59583.lad_AT_solobanjo.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> On Thursday 26 February 2004 09:04 am, Paul Davis wrote:
>> the problem is that in a typical DAW session, you can potentially
>> freeze most tracks most of the time. so how can you tell what the user
>> wants frozen and what they don't?
>
> I want my plug-ins frozen the instant I close the parameter
> editor. ;)
>
>> More importantly, freezing consumes
>> significant disk resources. Can you afford to do this without it being
>> initiated by the user? A typical 24 track Ardour session might consume
>> 4-18GB of audio. Freezing all or most of it will double that disk
>> consumption (and it's not exactly what you would call quick, either :)
>
> Agreed, it's a very definite trade-off - storage space for CPU cycles.
> It is my observation, however, that storage space is cheap, and
> readily available.
>
>> sorry, but i don't think so. if i have a bus that is channelling audio
>> in from an external device (say, a h/w sampler), you cannot possibly
>> freeze it.
>
> However, buses which simply contain a submix of several audio tracks
> can be safely frozen, saving both processing power and disk bandwidth.
>
> The purpose of my project is to create a working environment which
> encourages songs to be organized in such a way that offline rendering
> can *usually* be done transparently. Thus, the hierarchical tree
> structure.
>
> When I finish comping the vocals for a chorus, I want to be left
> with 1 fader, and 1 editable audio track, for the chorus. If I need to
> make one of the voices softer, I can bring up the underlying tracks
> within a second (which is *at least* how long it usually takes me to
> find a single fader in a 48-channel mix). While I'm making adjustments,
> Tinara will read all the separate chorus tracks off the disk, mixing
> them in RT. When I move back one layer in the mix hierarchy (thereby
> indicating that I'm finished adjusting things), Tinara will begin
> re-rendering the submix in the background whenever the transport is
> idle. When the re-rendering is done, Tinara will go back to pulling a
> single interleaved stereo track off the disk, instead of 6-8 mono
> tracks.
>
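A sketch of the submix life-cycle that description implies - the state
names and helper functions below are made up by me, not taken from
Tinara:

    /* the state names and helper functions are invented, not Tinara's */
    #include <stdio.h>
    #include <stdbool.h>

    enum submix_state {
        FROZEN,      /* playing one interleaved stereo file              */
        OPEN,        /* user is inside the layer: play component tracks  */
        DIRTY,       /* user moved back up: needs a background render    */
        RENDERING    /* offline render in progress                       */
    };

    struct submix {
        enum submix_state state;
        /* ...component tracks, faders, path of the rendered file...     */
    };

    /* called when the user descends into / climbs out of this layer */
    static void submix_enter(struct submix *s) { s->state = OPEN;  }
    static void submix_leave(struct submix *s) { s->state = DIRTY; }

    /* called periodically from a low-priority thread */
    static void submix_idle(struct submix *s, bool transport_rolling)
    {
        if (s->state == DIRTY && !transport_rolling) {
            s->state = RENDERING;
            /* a hypothetical start_offline_render() would mix the
               component tracks down to one interleaved stereo file;
               on completion the state becomes FROZEN and playback
               switches from 6-8 mono tracks to that single file      */
        }
    }

    int main(void)
    {
        struct submix chorus = { FROZEN };
        submix_enter(&chorus);          /* open the chorus submix        */
        submix_leave(&chorus);          /* move back up one layer        */
        submix_idle(&chorus, false);    /* transport idle: render starts */
        printf("state = %d\n", chorus.state);  /* prints 3: RENDERING    */
        return 0;
    }

When that background render actually gets a chance to run is the part
Paul questions below.
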
> The basic idea is to turn mixing into a process of simplification.
> When I'm finishing up a mix, I don't want to deal with a mess of
> tracks and buses, with CPU power and disk bandwidth being given to
> things I haven't changed in days. I want to be able to focus on the
> particular element or submix that I'm fine-tuning - and have as much
> DSP power to throw at it as possible.
>
> This will also make the use of automated control surfaces much
> nicer, IMHO. Since there will be fewer elements in each layer of the
> hierarchy, fewer faders would be needed. Additionally, it would be
> easier to keep track of what's going on. I've worked extensively with
> Digidesign's Control|24, and my feeling is that things start to get
> messy when there are more than about 12 faders (not to mention how
> easy it is to get lost when there are two or more banks of 24
> faders!).
>
> Just for the record, please understand that any negativity I may
> express toward conventional DAW systems is *not* directed toward
> Ardour. It's just pent-up frustration with Pro Tools ;)
>
> |)
> |)enji
>
>
>
> ------------------------------
>
> Message: 10
> Date: Thu, 26 Feb 2004 11:25:33 -0500
> From: Paul Davis <paul_AT_linuxaudiosystems.com>
> Subject: Re: [linux-audio-dev] Freeze?
> To: "The Linux Audio Developers' Mailing List"
> <linux-audio-dev_AT_music.columbia.edu>
> Message-ID: <200402261625.i1QGPXbe015706_AT_dhin.linuxaudiosystems.com>
>
>> I want my plug-ins frozen the instant I close the parameter
>> editor. ;)
>
> oh, you don't want to do any graphical editing of plugin parameter
> automation? :))
>
>> Agreed, it's a very definite trade-off - storage space for CPU
>> cycles.
>> It is my observation, however, that storage space is cheap, and
>> readily
>> available.
>
> not my experience.
>
>>> sorry, but i don't think so. if i have a bus that is channelling
>>> audio
>>> in from an external device (say, a h/w sampler), you cannot possibly
>>> freeze it.
>>
>> However, buses which simply contain a submix of several audio
>> tracks can
>> be safely frozen, saving both processing power and disk bandwidth.
>
> sure, but that's a subset of all busses. it's not a bus per se.
>
>> When I finish comping the vocals for a chorus, I want to be left
>> with 1
>> fader, and 1 editable audio track, for the chorus. If I need to make
>> one of
>> the voices softer, I can bring up the underlying tracks within a
>> second
>> (which is *at least* how long it usually takes me to find a single
>> fader in a
>> 48-channel mix). While I'm making adjustments, Tinara will read all
>> the
>> separate chorus tracks off the disk, mixing them in RT. When I move
>> back one
>> layer in the mix hierarchy (thereby indicating that I'm finished
>> adjusting
>> things), Tinara will begin re-rendering the submix in the background
>> whenever
>> the transport is idle.
>
> have you actually experienced how long it takes to "re-render"?
> steve's suggestion is an interesting one (use regular playback to
> render), but it seems to assume that the user will play the session
> from start to finish. if you're mixing, the chances are that you will
> be playing bits and pieces of the session. so when do you get a
> chance to re-render? are you going to tie up disk bandwidth and CPU
> cycles while the user thinks they are just editing? OK, so you do it
> when the transport is idle - my experience is that you won't be done
> rendering for a long time, and you're also going to create a surprising
> experience for the user at some point - CPU utilization will vary
> notably over time, in ways that the user can't predict.
>
> you also seem to assume that the transport being stopped implies no
> audio streaming by the program. in ardour (and most other DAWs), this
> simply isn't true. ardour's CPU utilization doesn't vary very much
> whether the transport is idle or not, unless you have a lot of track
> automation, in which case it will go up a bit when rolling.
>
>> The basic idea is to turn mixing into a process of
>> simplification. When
>> I'm finishing up a mix, I don't want to deal with a mess of tracks
>> and buses,
>> with CPU power and disk bandwidth being given to things I haven't
>> changed in
>> days. I want to be able to focus on the particular element or submix
>> that
>> I'm fine-tuning - and have as much DSP power to throw at it as
>> possible.
>
> the focusing part seems great, but seems to be more of a GUI issue
> than a fundamental backend one. it would be quite easy in ardour, for
> example, to have a way to easily toggle track+strip views rather than
> display them all.
>
> the DSP power part seems like a good idea, but i think it's much, much
> more difficult than you are anticipating. i've been wrong many times
> before though.
>
> and btw, the reason Ardour looks a lot like PT is that it makes it
> accessible to many existing users. whether or not ardour's internal
> design looks like PT, i don't know. i would hope that ardour's
> development process has allowed us to end up with a much more powerful
> and flexible set of internal objects that can allow many different
> models for editing, mixing and so forth to be constructed. the backend
> isn't particularly closely connected in any sense, including the
> object level, to the GUI.
>
> --p
>
>
>
>
> ------------------------------
>
> _______________________________________________
> linux-audio-dev mailing list
> linux-audio-dev_AT_music.columbia.edu
> http://music.columbia.edu/mailman/listinfo/linux-audio-dev
>
>
> End of linux-audio-dev Digest, Vol 5, Issue 57
> **********************************************
>
>

---
John Lazzaro
http://www.cs.berkeley.edu/~lazzaro
lazzaro [at] cs [dot] berkeley [dot] edu
---



This archive was generated by hypermail 2b28 : Thu Feb 26 2004 - 20:17:17 EET