On Sat, Jul 08, 2006 at 01:34:44PM +0100, James Courtier-Dutton wrote:
> Is there a standard way of converting a 24-bit sample to 16-bit?
> I ask because I think that in different scenarios, one would want a
> different result.
As others have said: dither, then drop the low 8 bits.
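Something like this, say (an untested sketch; it assumes the samples
are sign-extended into the low 24 bits of an int32_t, and the function
name is made up for illustration):

#include <stdint.h>
#include <stdlib.h>

/* Convert one signed 24-bit sample to 16 bits with TPDF dither. */
static int16_t s24_to_s16_dithered(int32_t s)
{
    /* Triangular (TPDF) dither: sum of two uniform variates, each
     * one 16-bit LSB wide (256 counts at 24-bit resolution),
     * centred on zero, so it ranges over -255..+255. */
    int32_t dither = (rand() & 0xff) + (rand() & 0xff) - 255;
    int32_t v = s + dither;

    /* Clamp so the dither cannot push us past 24-bit full scale. */
    if (v >  0x7fffff) v =  0x7fffff;
    if (v < -0x800000) v = -0x800000;

    /* Drop the low 8 bits; the shift is arithmetic (sign-preserving)
     * on any compiler you are likely to meet. */
    return (int16_t)(v >> 8);
}

But...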
> returned to the user. The problem here is what are the "most useful
> 16 bits"? I have one application where just using the lower 16 bits
> of the 24-bit value is ideal, due to extremely low input signals.
Normalize the 24-bit samples first?
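That is, find the peak of each block and scale the signal up to full
scale before truncating, so a quiet input still lands in the top 16
bits. A rough two-pass sketch (again untested, helper name made up;
in a real converter you would still apply the dither from the snippet
above after the scaling, which is omitted here for brevity):

#include <stddef.h>
#include <stdint.h>

/* Scale a block of signed 24-bit samples (in the low 24 bits of an
 * int32_t) so its peak hits full scale, then truncate to 16 bits. */
static void normalize_s24_to_s16(const int32_t *in, int16_t *out,
                                 size_t n)
{
    size_t i;
    int32_t peak = 1;   /* floor of 1 avoids dividing by zero on silence */

    /* Pass 1: find the peak magnitude of the block. */
    for (i = 0; i < n; i++) {
        int32_t mag = in[i] < 0 ? -in[i] : in[i];
        if (mag > peak)
            peak = mag;
    }

    /* Pass 2: scale so the peak maps to 24-bit full scale (64-bit
     * intermediate avoids overflow), then drop the low 8 bits. */
    for (i = 0; i < n; i++) {
        int64_t scaled = (int64_t)in[i] * 0x7fffff / peak;
        out[i] = (int16_t)(scaled >> 8);
    }
}

Per-block peak normalization like this changes the gain from block to
block, of course, so for anything other than analysis you would want a
fixed or slowly-adapting gain instead.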