On Mon, Dec 19, 2011 at 03:21:19PM +0100, Johan Herland wrote:
> Yeah. After some thinking it's now obvious to me why I shouldn't
> upsample (as long as the rest of the pipeline can do the sampling
> rates I throw at it). However, I do want to use as many bits as
> possible in processing, to make sure that I don't lose detail by
> shifting bits into oblivion.
This is a non-problem. 24 bits gives you a S/N ratio better than
140 dB. Now assume that you adjust the volume of your power amps
such that -20 dB digital corresponds to +100 dB SPL. This is pretty
loud and you still have 20 dB headroom for peaks. That means that
the digital noise floor of your system corresponds to -20 dB SPL,
that is 20 dB below the hearing threshold and at least 40 dB
below the ambient noise level of any normal living room. So you
won't hear it. Even if a low-level signal uses only 8 bits
or so, you will not hear any distortion as a result of that.
The thing that doesn't work is to first reduce a signal to e.g. 8
bits and then amplify it again digitally to make it play louder.
But that won't happen unless you really use the volume controls
in the wrong way. If the amp's volume control is fixed (e.g. as
described above) this problem does not exist.
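To put some numbers on this, here is a quick sketch (Python with
numpy; the -48 dB level and the 16-bit depth are just illustrative
values I picked, not anything from this thread):

    import numpy as np

    def noise_db(y, x):
        """Power of the error (y - x) in dB relative to full scale."""
        return 10 * np.log10(np.mean((y - x) ** 2))

    fs = 48000
    t = np.arange(fs) / fs
    x = 10 ** (-48 / 20) * np.sin(2 * np.pi * 1000 * t)  # sine at -48 dBFS

    q = np.round(x * 2 ** 15) / 2 ** 15   # quantize to 16 bits
    print("noise after quantizing : %6.1f dBFS" % noise_db(q, x))

    g = 10 ** (48 / 20)                   # the 'wrong' digital make-up gain
    print("noise after +48 dB gain: %6.1f dBFS" % noise_db(g * q, g * x))

The first number lands near -101 dBFS, which with a fixed amp gain
as described above is far below audibility. The second lands near
-53 dBFS: the digital boost drags the quantisation noise up along
with the signal.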
> I don't see how the _entire_ pipeline can run off a single external
> clock. Say, I have a simple setup: A CD player connected to the audio
> PC doing a simple digital volume control, and then outputting the
> signal to a (Power)DAC. Obviously there will be some (small but still
> non-zero) delay between the audio PCs input signal and its output
> signal. How can both signals run from the same clock?
There will simply be a fixed number of samples of delay between input
and output. The thing that matters is that the sample frequencies
are exactly the same. If they are not you have to resample.
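A back-of-the-envelope sketch of why the rates must match exactly
(the 100 ppm tolerance is an assumed, typical figure for consumer
crystals, not a measured one):

    fs_in  = 48000 * (1 + 100e-6)    # source clock, +100 ppm
    fs_out = 48000 * (1 - 100e-6)    # sink clock, -100 ppm

    drift = fs_in - fs_out           # surplus input samples per second
    print("drift: %.1f samples/s" % drift)               # ~9.6
    print("a 1000-sample buffer slips in %.0f s" % (1000 / drift))

So even two 'identical' clocks that are both within spec will over-
or underrun a small buffer within a couple of minutes unless you
resample.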
> Or does the audio PC sync the output to a _later_ pulse from the clock
> generator (i.e. all that matters is that the signal is synced with _a_
> clock pulse, not necessarily _the_same_ clock pulse with which it was
> originally transmitted)?
Indeed, that is what happens.
> But if so, couldn't I have one clock between the CD player and audio
> PC, and a different clock between the PS and the DAC?
(I assume you mean _PC_ and DAC, or PC and digital amp).
In that case the PC has to do sample rate conversion. It will also
considerably complicate your SW.
> And is self-clocking somehow inferior to an external clock? If an
> external clock is better, how come ethernet/USB/firewire and all other
> digital communication protocols run without external clocks? Sorry for
> being thick, but I haven't worked with these things before...
In most digital formats the clock can be regenerated from the
digital signal. There's no quality loss involved in that - clock
jitter at a digital input doesn't matter as long as the clock is
good enough to recover the data without error. Professional audio
installations use a separate clock because 1) that is more reliable
when long cables are used, and 2) it makes the synchronisation
independent of the signal routing which in e.g. a studio isn't
just a simple linear chain as it would be in a domestic player
setup, and usually it isn't fixed either.
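If you want to see how the clock is embedded, here is a toy model of
the biphase-mark coding SPDIF uses (heavily simplified - the real
format adds preambles, parity and channel status, all of which I'm
ignoring here). Every bit cell starts with a transition, so the
receiver always has edges to lock onto:

    def bmc_encode(bits, level=0):
        """Two half-cells per bit: transition at every cell start,
        plus a mid-cell transition for a '1'."""
        out = []
        for b in bits:
            level ^= 1           # mandatory transition at cell start
            out.append(level)
            if b:
                level ^= 1       # extra mid-cell transition encodes a '1'
            out.append(level)
        return out

    def bmc_decode(halves):
        """A mid-cell transition means '1', none means '0'."""
        return [int(halves[i] != halves[i + 1])
                for i in range(0, len(halves), 2)]

    data = [1, 0, 1, 1, 0, 0, 1]
    line = bmc_encode(data)
    assert bmc_decode(line) == data

The decoder above cheats by knowing where the cells start; a real
receiver derives that from the guaranteed transitions and from the
preambles of the actual format.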
> > but this also means that if you switch from blu-ray player a to blu-ray
> > player b, your sound card must change its clocking source from input a to
> > input b. which might or might not cause an audible click or thump.
>
> Can't you "fix" this by quickly muting the signal, then switch
> sources, and unmute after you've "locked" onto the new signal? If the
> switch is software-controlled (via RS-232) wouldn't that be fairly
> simple to do?
Yes, you could mute the amps for a short time. Anyway, when the PC
input clock is switched, the output clock will follow, your amps
will detect that and probably mute automatically until they are
resynced. That is at least what I'd expect from something costing
$6000.
> > as to "maximizing fidelity", this is digital pcm. it either works perfectly
> > or not at all. the only way to slightly degrade the signal is to have a
> > lossy codec in between (such as ac3 or dts), or when you're forced to insert
> > a SRC. but the latter should be pretty close to perfect if it's a good one.
> > no longer bit-transparent, obviously.
>
> Ok, so if I save money by not using external clocking, and using
> consumer-grade equipment (but stick to lossless PCM), I will either
> get a perfect signal, or get no signal at all. And if I get no signal
> at all, it's likely to be because my cheap devices don't agree on
> where the clock pulse is. Correct?
> I don't know whether it's cheaper/easier to
> convert AES/EBU or ADAT to SPDIF coax/toslink...
AES/EBU and SPDIF are (almost) the same. The main differences
are in the electrical signal level and impedance. There are some
small coding differences as well, but in general an SPDIF input
will work with an AES signal, and an AES input *may* work with
an SPDIF signal provided the signal level is high enough, but
don't count on that.
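For reference, the usual figures (quoted from memory, from AES3 and
IEC 60958 - do check the specs before relying on them):

                  AES/EBU (AES3)        SPDIF (IEC 60958-3)
    connector     XLR, balanced         RCA/BNC coax or TOSLINK
    impedance     110 ohm               75 ohm
    level         2 - 7 V               0.5 - 0.6 V
    chan. status  professional format   consumer format

The 'small coding differences' are mostly in those channel status
bits.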
> Then again, there are some digital amps that take USB input (e.g. the
> miniSTREAMER/miniAMP combo <URL:
In that case the PC's output sample rate is set by the amplifier.
Which means you can connect only one such amplifier unless they
are synchronised with external clocks (which they probably can't
do). And even if you can get them synced the PC will still have
to do sample rate conversion if it plays from an external source.
So I'd say 'don't go that way'.
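For completeness, the kind of SRC step the PC would be forced into
looks like this (Python with scipy; the 44.1 -> 48 kHz rates are just
an example I chose):

    import numpy as np
    from scipy.signal import resample_poly

    fs_src, fs_dst = 44100, 48000          # 48000/44100 = 160/147
    x = np.sin(2 * np.pi * 1000 * np.arange(fs_src) / fs_src)
    y = resample_poly(x, 160, 147)         # polyphase FIR resampler
    print(len(x), "->", len(y))            # 44100 -> 48000

As said in the quote above, a good SRC is pretty close to perfect,
but it's no longer bit-transparent and it's extra complexity you can
avoid with a saner topology.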
Ciao,
--
FA

Before us lies a wide valley, the sun shines - a glittering ray.