[linux-audio-dev] Denormalisation considered harmful?


Subject: [linux-audio-dev] Denormalisation considered harmful?
From: Steve Harris (S.W.Harris_AT_ecs.soton.ac.uk)
Date: Wed Nov 29 2000 - 14:14:58 EET


Hi,

I've been going through critical parts of plugins and looking at
denormalising the floats (especially in the audio inputs), then it
occurred to me that if the host is running realtime it has almost
certainly denormalised its floats anyway, and I'm wasting cycles.

So, should I assume that all input streams have been pre-denormalised
(on architectures where it matters), or should I do it anyway
(denormalising is cheap, normalising really isn't)?

- Steve

PS For those who don't know, there is a problem on some chips when you
   try to do maths with floats that are very small. The FPU generates
   an interrupt and normalises the bits of the float so that the maths
   won't lose precision. It's apparently the best way to meet IEEE
   precision requirements. Anyway, the interrupt is very expensive, so
   to avoid it people do nasty things like adding a DC offset (erk) or
   noise (Erk!) to the signal, or disabling interrupts. There is a
   better hack that fixes it:

   /* If the exponent bits of the float are all zero, the value is
      denormal (or a true zero), so replace it with 0.0f; otherwise
      pass it through untouched. */
   #define DENORMALISE(fv) (((*(unsigned int*)&(fv))&0x7f800000)==0)?0.0f:(fv)

   which I got from andy_AT_vellocet IIRC. I was... unhappy when I
   discovered that plugins using small input signals ran much
   slower than with 0dB ones.
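
   To make the failure mode concrete, here is roughly the kind of loop
   where it bites - this is just an illustrative sketch of mine, not
   code from any real plugin: a filter with feedback, fed silence or a
   very quiet signal, whose state decays down into the denormal range
   and then hits the slow path on every sample until it is flushed.

       /* One-pole lowpass using the DENORMALISE macro above. With a
          silent input the state decays geometrically, falls below the
          smallest normal float (~1.2e-38) and goes denormal - exactly
          the case where the FPU penalty appears. */
       void lowpass(float *in, float *out, unsigned long n, float coeff)
       {
           static float state = 0.0f;
           unsigned long i;

           for (i = 0; i < n; i++) {
               state = in[i] + coeff * (state - in[i]);
               state = DENORMALISE(state); /* keep the state at a true zero */
               out[i] = state;
           }
       }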


