Re: [linux-audio-dev] ALSA vs OSS/free


Subject: Re: [linux-audio-dev] ALSA vs OSS/free
From: Jussi Laako (jussi.laako_AT_kolumbus.fi)
Date: Mon Mar 11 2002 - 01:50:17 EET


Paul Davis wrote:
>
> Apologies for the error. I forgot about SoftOSS because when I tried
> to use it 2 years ago, it routinely paniced my system, had appalling
> latency characteristics, and generally seemed like a half-baked

I haven't had any problems in all the time SoftOSS has existed.

> solution to a genuine problem. The design is the same as the win32
> kernel mixer, which even MS now admits was a bad idea.

DirectSound still has a software mixer, and its integer mixing routines sound
really awful (S/N of about 40 dB).

> and a lot of extra CPU cycles when all the high end applications use
> non-interleaved data so that editing is easy.

You can process interleaved data effectively with SIMD, since you can
parallelize the operations across 2-4 channels.

> of work to interleave *two* streams is pretty trivial. this would
> suggest that if you are going to force a particular format,
> non-interleaved should be it. that way, high end apps that are
> streaming 48 non-interleaved data streams to a device that is
> non-interleaved don't waste cycles converting then re-converting the
> data.

All streaming interfaces that I know of are interleaved: S/PDIF, AES/EBU,
ADAT. And just about all file and network stream formats.

> hardware, and they can both move back and forth between the hardware
> types with the inevitable extra cycles, but no change in the
> application code.

The CPU load of interleaving/deinterleaving streams is so small that I can't
notice it without running a CPU cycle counter. My app even uses network
byte order in its IPC (over a TCP link), and doing that conversion back and
forth doesn't show up in the total CPU load either (all of it amounts to
~0.1%).

> no device that i know of supports 24 bit mode. S32_LE or S32_BE are
> the correct formats. jack/ardour/ecasound and others have supported 24
> bit I/O via ALSA for a long time.

By 24-bit format I mean 24-bit MSB-aligned data in 32-bit words, as I
haven't seen any 32-bit A/D converters yet... ;)

Here's part of my code; both of these calls succeed, but neither of them
works:

        case 24:
            if (snd_pcm_hw_params_set_format(sndpcmPcm, sndpcmHwParams,
                SND_PCM_FORMAT_S24) != 0)
                return false;
            break;
        case 32:
            if (snd_pcm_hw_params_set_format(sndpcmPcm, sndpcmHwParams,
                SND_PCM_FORMAT_FLOAT) != 0)
                return false;
            break;

> And if the correct answer is 26? or 52, if I've merged two Hammerfalls
> together? I'm not even sure OSS has even bits available for really
> high channel counts.

Dunno about the driver internals, but it's a signed int, so that limits it
to 2^31.

> Only if you want the same value for *both* streams. If you want to run
> with a large blocksize for capture, but a small blocksize for
> playback, that doesn't work. ALSA separates out hardware parameters
> from software parameters, and the point at which you wake a sleeping
> application because of available data is a software, not hardware

And doesn't that have an impact on latency?
It will end up in nasty effects if the output buffer size is not a multiple
or an integer fraction of the input buffer size, or a multiple of the
hardware buffer size.

Btw. ASIO2 doesn't support that either.

> The application can't handle it if the problem is caused by scheduling
> delays. The point of having the driver do it is that barring stupid
> calls to sti/cli, it knows there is a problem immediately that it
> happens. This makes it possible to avoid silly buzzing noises with
> ALSA, since the driver itself can silence the audio when an xrun takes
> place.

You can check the over/underrun counters before reading data. When writing,
it's up to the driver to decide in OSS, true...

> ALSA0.5 is obsolete and should not, in my opinion be supported by
> anyone or anything at this time. i have never heard of comedi. If you

There are still many installed distributions out there with ALSA 0.5; most
of them are SuSE.

http://stm.lbl.gov/comedi/
http://stm.lbl.gov/comedi/doc/index.html#AEN37

> jack's alsa "driver" works on a per-channel basis. i don't know what
> you mean by channel scan sequences or channel filter parameters. if
> you explain what you mean in more detail, i can probably tell you how
> to do it.

Most DAQ cards have a single high-quality, high-speed A/D converter, and the
input channels are connected to the input of that converter via a
multiplexer (the circuit may also contain sample-and-hold chips). A scan
sequence is the programmed channel-scanning sequence for the input
multiplexer. Let's say that I'd like to have channels 0, 3, 1 and 5
digitized at 10 kHz, in this order, with gains of 1, 1, 10 and 10.
I can create a scan list: 0 3 1 5
and a gain list: 1 1 10 10
The channel switch interval is then programmed to 0.025 ms and the scanning
interval to 0.1 ms. That way each channel's samples are 0.1 ms apart and the
channel-to-channel delay is 0.025 ms. I can also program scanning to happen
as fast as possible (could be something like 10-40 MHz), in which case I
leave the channel switch time at 0 and only program the scan interval. There
is also a scan count register which can be used to program the buffer size,
e.g. an interrupt happens every N scans. The start of scanning can be either
software or hardware triggered. It's also possible to create a
level-triggered start by programming the card to connect one of its D/A
converters to a comparator and feeding that D/A with a reference level.
(http://stm.lbl.gov/comedi/doc/x139.html#AEN163)

There can also be programmable antialias filters on the channel inputs, with
programmable filter slope and frequency.

> And you wonder why there is no documentation?

Only the programmers who wrote the code know exactly how it works. They
should write the documentation at the time of writing the software. That way
it's also easier to verify that the software does what it should.

> Absolutely wrong. It doesn't work that way at all, unless you add
> loads of buffering thus destroying low latency opportunities.

I'm doing this in user space. There is a shared block in memory, and all
threads write their channel data to that block. If one thread doesn't fill
its channel in time, then that channel is all zeroes. The block goes out
when it's scheduled to. No extra latency or buffering here.

> Linux is not microkernel-ish in any way shape or form. Microkernels do
> not generally run dynamically loadable modules with full kernel
> priviledge, and they use message passing of some form between
> modules. In Linux, a loaded kernel module is indistinguishable from

I'm not going to argue about this.

> >I don't think RT-Linux and RT-AI as different operating systems. Only as
> They need new drivers. In my mind, thats as good/bad as a new OS.

Comedi uses the same drivers for user space, RT-Linux and RT-AI.

> The process is woken from the interrupt handler, and with SCHED_FIFO
> or SCHED_RR on the sleeping process, it will be set running when the

Btw. there are bad latencies in the O(1) scheduler with try_to_wake_up().
I'm trying to find and fix the bottleneck but haven't succeeded yet.

I see about 50 ms worst-case latencies with pthread_cond_*().

        - Jussi Laako

-- 
PGP key fingerprint: 161D 6FED 6A92 39E2 EB5B  39DD A4DE 63EB C216 1E4B
Available at PGP keyservers



This archive was generated by hypermail 2b28 : Mon Mar 11 2002 - 01:40:05 EET