this is largely misleading. it explains why it's a waste of time to convert from 16 to 24 (including "upsampling" your cds), but it doesn't really talk about recording at 24-bit.
the more important question is whether your ears can hear the difference in how well the d/a converter can reconstruct an audio signal from 16 vs. 24 bits. the noise floor is the difference between the reconstruction and the "true analog wave", and it comes out in the conversion back to analog. what this means is that the 24-bit file should have a lower noise floor, because it's closer to the true analog signal. that is absolutely a fidelity issue, mathematically speaking...
however, there are a lot of factors in the signal chain. also mathematically speaking, and under reasonable assumptions, the noise floor on the 16-bit d/a conversion is almost always going to be low enough that your ears can't tell the difference between the 16-bit reconstruction and the "true analog signal". in theory, 24-bit has more accuracy and therefore less noise. but if you can't hear the difference anyway, you're not gaining anything from 24 bits.
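to put rough numbers on "low enough that your ears can't tell", there's a standard textbook formula for an ideal converter's signal-to-quantization-noise ratio. a sketch only - idealized math, real hardware lands several dB worse:

```python
# idealized snr for a full-scale sine through an n-bit quantizer:
# 6.02*n + 1.76 dB (textbook formula; real converters come in worse).

def quantization_snr_db(bits: int) -> float:
    """ideal signal-to-quantization-noise ratio for an n-bit converter."""
    return 6.02 * bits + 1.76

snr_16 = quantization_snr_db(16)  # roughly 98 dB below full scale
snr_24 = quantization_snr_db(24)  # roughly 146 dB - far past audibility
```

about 98 dB of dynamic range at 16-bit already puts the noise floor below the threshold of hearing at any sane playback level, which is the whole argument in one number.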
now, i find most people get this part, but then argue that you should use 24-bit anyways because it creates more "headroom" for signal processing. i've experimented with this, and i think it's funny logic. i'd actually advise people to record at 16-bit, so long as 16-bit is the playback standard.
i mean, once you get something into the computer and start adding reverb and compression to it, you're no longer discussing fidelity or quality or reconstruction. it's now subjective. this discussion makes sense if you're talking about recording an acoustic piano; it doesn't make sense if you're talking about running a prerecorded guitar part through an amp simulator, or whatever other digital thing you're doing. rather, the whole thing gets complicated because you have to take in the specifics of each plugin, effect and general "process" acting on the sound coming out.
it follows that if you record and mix and master in 24-bit, it has to be with the expectation that people are going to listen to it in 24-bit. the moment you downsample to 16-bit, you're altering the production. now, you might actually like the downsample. but that's just the point - this all becomes subjective. to me, the key point is this: if you mix and master at 24-bit, and your audience listens at 16-bit, they're not hearing what you intended them to hear.
the converse, however, is not true: if you mix at 16-bit and people listen at 24-bit, the extra bits are just empty data. it sounds identical.
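the "empty data" point can be shown directly. a sketch that treats samples as plain integers (a simplification of how real files actually store them):

```python
def pad_16_to_24(sample: int) -> int:
    """place a 16-bit sample in a 24-bit container: append eight zero bits."""
    return sample << 8

def truncate_24_to_16(sample: int) -> int:
    """drop the low eight bits of a 24-bit sample."""
    return sample >> 8

# a 16-bit source survives the round trip untouched - the padding
# carries no information, so playback is bit-identical
assert truncate_24_to_16(pad_16_to_24(12345)) == 12345
```

the low byte of every padded sample is zero, which is why a 16-bit master "upsampled" to 24-bit is the same recording in a bigger box.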
now, can you make the reverb sound better in 24-bit than 16-bit? i'm skeptical. you may have to use more reverb in 16 than in 24. but i can't think of any kind of processing done in 24 that can't be recreated in 16.
it's just a question of whether people are listening to what you actually created, or a damaged replica of it. if you stick with 16, you're standardizing that presentation much more effectively.
lachlanlikesathing
+deathtokoalas As I understand it, the argument for recording in 24 bit is that if you start using multiple tracks or multiple plug ins, and you start increasing gain etc., the quantisation noise will very quickly become audible in 16bit. Whereas you have a lot more headroom to do all sorts of things with 24 bit. The final master is fine in 16 bit because you are only adding one generation of 16 bit quantisation noise (largely inaudible) as opposed to multiples. I don't think that audible quantisation noise is the 'subjective' perceptual intention people have when they start adding things like reverb or other plugins.
deathtokoalas
+lachlanlikesathing it's not that the noise will quickly become audible through effects, it's more of a safety mechanism. it's paranoia, really, unless you're recording, like, flutes at low volume. in almost any realistic scenario, quantization noise isn't something that you're ever going to deal with at 16-bit. you actually did a decent job of explaining this part in the video.
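here's a toy version of that experiment - repeatedly requantizing a sine to 16-bit depth through simulated gain changes. a sketch only (pure python, not a measurement of any real daw), but it shows the error growing with each pass while staying tiny in absolute terms:

```python
import math
import random

random.seed(0)
FULL_SCALE = 2 ** 15  # 16-bit signed level count

def quantize(x: float) -> float:
    """round a [-1, 1] sample to the nearest 16-bit level."""
    return round(x * FULL_SCALE) / FULL_SCALE

# half-amplitude 440 Hz sine at 48 kHz, a tenth of a second
signal = [0.5 * math.sin(2 * math.pi * 440 * n / 48000) for n in range(4800)]

once = [quantize(x) for x in signal]  # a single quantization pass

many = list(signal)
for _ in range(20):  # twenty gain-change-then-requantize passes
    gain = random.uniform(0.5, 1.5)
    many = [quantize(x * gain) / gain for x in many]

def rms_error(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

err_once = rms_error(once, signal)
err_many = rms_error(many, signal)
# err_many > err_once, but both sit way below anything audible
```

twenty passes is already more destructive than most realistic workflows, and the accumulated error is still orders of magnitude under the 16-bit noise floor argument above - which is why i call it paranoia rather than a real risk.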
the reverb thing has a lot of factors. first, the reality is that a lot of plugins don't maintain a true 24-bit path internally - and you're potentially going to get more noise out of that conversion than you'd get anywhere else. but, if you can find a legit 24-bit signal path, you're just going to get a lot more calculations on the waveform. more numbers. it doesn't matter when you're capturing something physical, because it will reconstruct the same way. but those calculations are less tied to any kind of reality.
downsampling from 24 to 16 will always create noise, by definition. the way they get around this is "dithering", which is shooting random noise at the target before truncating. due to some statistical realities, the noise becomes effectively inaudible.
but, it's still there. it's not magically removed, it's just really quiet. there's not any really good reason to introduce this noise. that's not what i'm getting at though...
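for the curious, the usual flavour of this trick is "tpdf" dither - triangular noise spanning about one quantization step, added before rounding. a toy sketch of the idea, not any shipping dither algorithm:

```python
import random

random.seed(1)
STEP = 1 / 2 ** 15  # one 16-bit quantization step

def tpdf_dither_quantize(x: float) -> float:
    """add triangular noise of roughly +/- one step, then round to the grid."""
    noise = (random.random() - random.random()) * STEP  # triangular pdf
    return round((x + noise) / STEP) * STEP

# the per-sample error gets slightly bigger, but it stops correlating
# with the signal - correlated distortion becomes quiet broadband hiss
```

that's the statistical reality in question: the noise doesn't disappear, it just trades audible, signal-tracking distortion for a steady hiss sitting at the bottom of the 16-bit range.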
the idea of moving from 24-bit to 16-bit without losing anything but quantization error is based on the idea of the physical waveform staying basically the same. but, you have to be careful. for example: consider moving from 24 to 16 and back to 24. that process is going to convert the extra data into zeroes: it's lost moving down to 16, then just zeroed out moving back up to 24. the dac is going to produce a minimal difference, but this is a loss of data.
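in integer terms, that round trip looks like this - a sketch using plain truncation, no dither:

```python
def round_trip_24_16_24(sample: int) -> int:
    """take a 24-bit sample down to 16 bits and back up."""
    truncated = sample >> 8   # the bottom eight bits are lost here
    return truncated << 8     # and come back as zeros here

assert round_trip_24_16_24(0x123456) == 0x123400  # the 0x56 is gone for good
```

any sample smaller than one 16-bit step (under 0x100 here) comes back as flat zero, which is exactly the low-level detail that gets converted into quantization error.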
now, consider the effect of this if all that zeroed out data was run through a set of effects. now, it's not a physical waveform any more. it's a digital creation. and all these ideas about dacs constructing more or less the same wave form are lost in calculation. this is of course along with the dithering process...
in the end, it doesn't really matter in terms of fidelity, it's just a question of maintaining consistency. 24-bit masters should be listened to in 24-bit or higher. 16-bit masters should be listened to in 16-bit or higher. because you're almost always dealing with digital manipulation on the master, downsampling should be avoided - while upsampling is literally meaningless.
bottom line is this: 1) if you have a 16-bit source (like a cd), it will not sound different in 24-bit - you're just adding zeroes. even if it was mastered in 24-bit and downsampled, that data is wiped in the downsample.
2) if you have a 24-bit source, there is technically sound degradation in the conversion to 16-bit, whether it happens before or after you get it - you're converting data into quantization noise, then masking it through a statistical trick known as dithering. however, you're only going to hear that difference if the source has been dramatically digitally altered in a way that makes the playback "unnatural". convolution reverb would be one way to do this.
3) the way to avoid conversion errors is to avoid conversion. unless you're recording symphonies, 16-bit from start to finish is likely good enough for you. but if you insist on 24-bit, avoid converting to 16-bit as much as possible....
===
i was just thinking about this as i was second-guessing mixing decisions. i pointed out that i've played around with it, and...
this is an anecdote. but i think it's worth sharing to demonstrate why the real issue here is downsampling, and why it shouldn't be done.
i recorded one song at 24/48 and put it through just an army of guitar effects - pods and floor pedals on the way in and on the tracks, parts doubled fifteen times, guitar rig, quadrafuzz, izotope, digital wave mods - it's just stacked with data. at 24/48.
every time i tried to decimate the file (that's a technical term for downsampling from 24/48 to 16/44.1, which should give you an idea of what you're doing), it came out with a deficit of "air" at the high end, and bits of static interspersed. the problem was actually that the dithering wasn't working well. the blasts of static are what you expect from decimation, as you're destroying data in conversion and forcing the ghost of it out of millions of tiny spaces, but it's supposed to "go away" if you pepper it with random static. i had too many weird numbers in there - too many jagged, digital waveforms - for this trick to work.
what i actually ended up doing was sending the track out of one soundcard at 24/48 and recording it into another one at 16/44.1. this obviously introduced a little noise into the waveform in transfer, but it wasn't audibly different from what you'd get out of a working decimation/dither. more importantly, it prevented the static and brought back the air - because the computer was able to get the sound by converting to analog and back to digital at 16-bit, without a destructive digital process.
i agree that it's a waste of time from start to finish, but so long as people are releasing things in 24 bit, it helps to have a way to play it back without digitally decimating it - which the fools often do themselves.