
Talk:Emphasis (telecommunications)


The typographical usage should be listed at the top, not at the bottom. Better yet: this page should be renamed "emphasis (signal processing)" and there should be a disambiguation page.

Merger


The Pre-emphasis and De-emphasis articles each very briefly describe the same process, called "emphasis". There's no point in keeping distinct articles; it's much better to describe the whole process in full detail in one place. --GreyCat (talk) 00:31, 23 June 2011 (UTC)

Although it is the same basic concept, we need to keep the different naming; it goes by different names in different fields.

93.172.166.243 (talk) 20:50, 12 January 2012 (UTC)

Yes, the usages of the different specialties should be preserved, as is usual in the merger process. The right article name is "Emphasis (telecommunications)", which is the present article. The other names should redirect to this one, where the distinctions should be explained. Jim.henderson (talk) 11:07, 27 February 2012 (UTC)

Distort the signal by means of pre-emphasis in order to correct transmission distortions??


The sentence "In high speed digital transmission, pre-emphasis is used to improve signal quality at the output of a data transmission. In transmitting signals at high data rates, the transmission medium may introduce distortions, so pre-emphasis is used to distort the transmitted signal to correct for this distortion. When done properly this produces a received signal which more closely resembles the original or desired signal, allowing the use of higher frequencies or producing fewer bit errors." drives me nuts. What exactly is "high speed digital transmission"? And how can distortions be countered by distortions? Apart from this, the transmission medium may introduce distortions regardless of the data rate. So these sentences sound like nonsense. — Preceding unsigned comment added by 2001:638:A0A:1192:115D:ACD1:FF20:E94D (talk) 17:19, 19 November 2018 (UTC)
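(A concrete illustration of the digital-link case, since it puzzled the poster above: in high-speed serial links, "pre-emphasis" is typically a small transmit-side FIR, a feed-forward equalizer, that exaggerates bit transitions so the channel's high-frequency loss flattens them back out rather than smearing bits into each other. Below is a minimal Python sketch; the tap value and the one-pole low-pass "channel" are illustrative assumptions, not taken from any spec.)

```python
import numpy as np

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, 64) * 2 - 1          # random NRZ symbols, +/-1

# Transmit FIR "pre-emphasis" (feed-forward equalization): main cursor plus
# a negative post-cursor, one tap per bit period. Transitions are
# exaggerated; runs of identical bits are de-emphasized.
taps = np.array([1.0, -0.25])                  # illustrative values only
tx = np.convolve(bits, taps)[:len(bits)]

# Crude stand-in for a lossy channel: a one-pole low-pass that smears
# each bit into the next (inter-symbol interference).
alpha = 0.6
rx = np.empty(len(tx))
rx[0] = alpha * tx[0]
for i in range(1, len(tx)):
    rx[i] = alpha * tx[i] + (1 - alpha) * rx[i - 1]

# The same channel without emphasis, for comparison.
rx_plain = np.empty(len(bits))
rx_plain[0] = alpha * bits[0]
for i in range(1, len(bits)):
    rx_plain[i] = alpha * bits[i] + (1 - alpha) * rx_plain[i - 1]

# The deliberately "distorted" tx signal lands closer to the ideal +/-1
# levels at the sampling instants, i.e. one distortion partially cancels
# the other.
print("worst-case margin, emphasized:  ", np.min(np.abs(rx)))
print("worst-case margin, unemphasized:", np.min(np.abs(rx_plain)))
```

With these numbers the unemphasized signal's worst case (a transition after a long run of identical bits) only just crosses zero, while the emphasized one keeps roughly twice the margin; real transceivers tune the tap weights to the measured channel response.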

It's a deliberate, reversible "distortion" intended to partially compensate for the distortion the signal will inevitably suffer when recorded on or transmitted over an imperfect medium: e.g., background hiss on tape or in low-resolution digital sampling, the inability to reliably store loud bass notes on microgroove vinyl, and the boosting of voice frequencies at the expense of everything else over telephone lines. Its nature can be heard by listening to the encoded signal without any de-emphasis (which is essentially what happens when you use a landline phone, with the boosting happening, AFAIK, at the exchange). Dolby tapes sound distorted, with an overly bright, crashy treble; the rare CDs with pre-emphasis are probably similar, as would be samples filtered to sound better when played through an Amiga or Atari STe (etc.) PCM sound chip with its low-pass noise-reducing filter turned on. Vinyl records sound very "thin" indeed, with much-attenuated bass and very intense treble (the former reduced to as little as 1/4 of its original level, the latter boosted up to 4x).
The point is that these are either the ranges where the medium is simply weak at storage or reproduction, or, more commonly, the ranges where it exhibits particularly intense background noise even when the recorded signal is silence. Signal falls off on long telephone runs, losing top end and gathering low-end buzz. Tape can be poor for treble response and for subtlety in general. Vinyl would suffer literal structural collapse of the groove under overly strong bass, but can store it in attenuated form just fine, while the highest frequencies need a boost to avoid being lost: the stylus is a physical object of finite, and quite considerable, size compared to the wavelengths in question at common rotation speeds, and a workaday needle isn't good for much more than 10-12 kHz response without pre-emphasis assistance. All of the media above exhibit background noise to varying extents and with differing profiles, mostly biased towards noise increasing with frequency; for tape it can be louder than -30 dB at the very top end, averaging around -45 dB, which is equivalent to a "clean" digital recording made in a room with quiet but noticeable air-conditioning or projector noise.
In the latter case, suppose you pre-boost the signal in the noisy range by 20 dB (which may be the maximum achievable without significant, irreversible distortion caused by the emphasis itself), and then de-emphasise the same range by 20 dB on playback. You get effectively the same useful signal (i.e. what you recorded, on top of the noise) out of the speakers, but the background hiss is reduced by 20 dB. In some cases that is enough to perceptually eliminate it in typical listening scenarios, i.e. moderate volume in a less-than-silent environment: a -65 dB noise floor can be entirely acceptable when the signal itself doesn't peak more than 65 dB above the ambient room noise, so the hiss blends in and is barely noticeable unless you really strain to hear it. This is likely why Philips thought 14-bit recording (roughly a -78 to -84 dB noise floor) plus pre-emphasis (potentially extending that to around -90 to -100 dB) was sufficient, until Sony insisted on the digitally and electronically simpler 16-bit (about -90 to -96 dB) without any filtering other than the 20 to 22 kHz rolloff.
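(Checking those figures: the ideal quantization noise floor of an N-bit PCM system is about 6.02N + 1.76 dB below full scale, and the emphasis arithmetic is pure dB bookkeeping. A quick Python sketch; the 20 dB and -65 dB values are the ones from the paragraph above, not measurements.)

```python
# Illustrative arithmetic only: ideal quantization noise floor for an
# N-bit PCM system is roughly 6.02*N + 1.76 dB below full scale.
def quantization_floor_db(bits):
    return -(6.02 * bits + 1.76)

print(quantization_floor_db(14))  # about -86 dB (the -78..-84 figures above
                                  # allow for real-world converter losses)
print(quantization_floor_db(16))  # about -98 dB

# Emphasis: boost a band by 20 dB before recording, cut 20 dB on playback.
# The signal in that band round-trips unchanged; medium noise added in
# between is cut by the full 20 dB.
emphasis_db = 20.0
signal_db, noise_db = 0.0, -65.0
recorded_signal = signal_db + emphasis_db       # +20 dB on the medium
played_signal = recorded_signal - emphasis_db   # back to 0 dB
played_noise = noise_db - emphasis_db           # -85 dB: 20 dB quieter
print(played_signal, played_noise)
```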
The idea is that the two deliberate distortions cancel each other out, leaving a reasonably faithful facsimile of the original signal while cutting out the unwanted but inherent noise (or other shortcomings) of the recording medium. Generally there's quite a lot of spare "intensity" to be had out of any given medium at a single frequency (other than vinyl below the 0 dB emphasis crossover), so long as you don't push the cumulative level beyond what it can represent. Most recordings are made up of a wide spread of frequencies, each contributing only a small amount to the overall waveform, so the individually quieter components (which tend to be treble in the first place, and thus easily lost in the background noise) can be boosted quite strongly without actually changing their shape (phase errors notwithstanding); after de-emphasis (which with any luck also reverses the phase shift) they are indistinguishable from the original, except for the now much quieter added media noise. The vinyl bass case is mostly an issue of the cutting stylus making a much bigger swing for a given intensity at low frequencies than at high ones; its sensitivity varies across the range. Too large a swing could make the cutter hit its stops and clip (losing the already quieter high frequencies), and could put the groove walls at risk of collapse if they pass too close to each other, or even try to steal each other's space, causing further, less predictable distortion. The playback needle likewise produces a smaller voltage for the same magnitude of swing. The attenuated bass produced by pre-emphasis still creates a fairly large groove deflection (ideally, the wave on-disc should "visually" match the electrical signal going into the pre-emphasis circuit); it's just now within the limit of what can be safely cut into the disc. The output signal therefore has heavily attenuated bass, because the playback needle is less sensitive to those frequencies, but so long as it sits above both the mechanical rumble and the noise floor of the electrical link to the de-emphasis circuitry, it can be restored just fine.
Really it's one of the age-old engineering trade-offs: doing something deliberately suboptimal to avoid even worse side effects. Like derating or governing engines intended to run continually at full throttle so they don't shake themselves apart, even though that means much less peak power for the same capacity than engines intended for lighter duties (a lot of part-throttle cruising with only brief moments of full output). In this case, we risk causing slight, imperceptible distortion to the recorded or transmitted signal between the mic and the speaker, by applying a heavy EQ filter at the point of recording and a mirror-image one at the point of playback, in order to greatly improve quality versus simply transmitting or encoding the signal raw.
Funnily enough, the slight loss of overall fidelity seems to be something people quite like; it's one of the things (besides the use of tube amplifiers and filters) that give vinyl its so-called "warmth", versus the much cleaner and more faithful nature of digital recording. Perhaps a 14-bit system with emphasis would have had a little warmth of its own, though that's closer to Dolby, and nobody praises Dolby as a mellow, comforting effect rather than just a way of cutting treble hiss. The warmth is essentially an imperfect de-emphasis filter that goes a little too heavy on boosting the bass and cutting the treble, while imparting a certain amount of fuzz, resonance and phase error of its own. You can argue it's a bad thing, but if the point of recorded music is to create enjoyment, and people enjoy the effect, isn't that a net plus? 51.7.16.171 (talk) 11:59, 28 July 2019 (UTC)
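(To make the "cancel each other out" point concrete, here is a minimal first-order emphasis/de-emphasis round trip in Python with SciPy, using the 50 µs/15 µs CD time constants discussed in the next section; the white noise is a crude stand-in for the medium's hiss, not a model of tape or vinyl.)

```python
import numpy as np
from scipy.signal import bilinear, lfilter

fs = 44100.0                 # CD sample rate
t1, t2 = 50e-6, 15e-6        # CD emphasis time constants (see next section)

# Analog prototype of the pre-emphasis shelf: H(s) = (1 + s*t1)/(1 + s*t2),
# flat at DC and roughly +10 dB at high frequencies; de-emphasis is its
# exact inverse. bilinear() converts both to digital filters.
b_pre, a_pre = bilinear([t1, 1.0], [t2, 1.0], fs)
b_de, a_de = bilinear([t2, 1.0], [t1, 1.0], fs)

n = np.arange(int(fs))                       # one second of samples
x = np.sin(2 * np.pi * 1000 * n / fs)        # 1 kHz test tone
emphasized = lfilter(b_pre, a_pre, x)

# Add broadband hiss *after* emphasis, standing in for the medium's noise.
noisy = emphasized + 0.01 * np.random.default_rng(0).standard_normal(len(x))
restored = lfilter(b_de, a_de, noisy)

# The tone round-trips essentially unchanged; the residual is just the
# hiss, with its treble cut by the de-emphasis stage.
print(np.sqrt(np.mean((restored - x) ** 2)))
```

Because the de-emphasis filter is the exact algebraic inverse of the pre-emphasis filter, the two distortions cancel for the recorded signal, and only the noise picked up in between gets (asymmetrically) attenuated.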

Time constant explanation?


Can anyone insert some kind of explanation of what's meant by the filter time constants, and what they imply for the actual filtering, e.g. in CD pre-emphasis? When I convert them into frequencies (assuming they're the cycle time of a full wave), I get 20 kHz from 50 µs - obvious enough, that's the usual CD frequency band cutoff, above which a sharply rolling-off low-pass filter is applied - but then 66.7 kHz from 15 µs, which doesn't make any kind of sense for a recording medium whose maximum recordable frequency is 22.05 kHz (the sampling rate itself being 44.1 kHz), nor for a filtering system where we measure roll-off in decibels per decade (20 Hz, 200 Hz, 2 kHz, 20 kHz... 200 kHz?) or per octave (2.5 kHz, 5 kHz, 10 kHz, 20 kHz...). Is one of them meant to be -10 dB and the other -20 dB? 0 dB and -23.3 dB? And how does that relate at all to an emphasis filter you'd expect to operate over the range of human hearing, particularly over roughly the 3 kHz to 20 kHz (or maybe 6.67 kHz to 20 kHz?) range, rather than the ultrasonic...?

Though it's only really given here for the little-used CD scheme, it would be nice to have a general explanation, as I've also seen the microsecond notation used for tape and vinyl emphasis or frequency-response definitions, and it made about as much sense there too (though ISTR those being much lower frequencies, to the point that they barely covered the telephony band?) 51.7.16.171 (talk) 12:11, 28 July 2019 (UTC)

Your assumption about the cycle time is wrong. Taking the reciprocal of the time constant gives you angular frequency; you have to multiply the time constant by 2π before taking the reciprocal to get the traditional cycles-per-second reading. That yields about 3.2 kHz and 10.6 kHz respectively – both well below the Nyquist limit. -- Pemu (talk) 00:46, 27 March 2024 (UTC)
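(Spelled out: for a first-order filter with time constant τ, the corner frequency is f_c = 1/(2πτ), so 50 µs gives about 3183 Hz and 15 µs about 10610 Hz. A quick check in Python:)

```python
import math

# Corner frequency of a first-order filter from its time constant:
# f_c = 1 / (2 * pi * tau)
for tau in (50e-6, 15e-6):   # CD emphasis time constants, in seconds
    print(f"{tau * 1e6:.0f} us -> {1 / (2 * math.pi * tau):.0f} Hz")
# 50 us -> 3183 Hz;  15 us -> 10610 Hz
```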