I have a question (a stupid one, I think, but I am looking for an answer anyway).
If I understand correctly, a radio receiver has fixed sensitivity: by receiving a good
enough signal it can shift to better data rates, but its sensitivity still remains the same.
Am I wrong?
Now, for example: http://i.mt.lv/routerboard/files/RB2011UiAS-2HnD-IN-140620095355.pdf
What is the connection between 802.11n RX sensitivity and TX power?
Why does TX power change (e.g. MCS0/8: 30 dBm, but MCS7/15: 25 dBm)?
It's clear to me why and when I will get MCS7/15, but what does the TX power in this chart have to do with it?
It's confusing to me.
Thank you for replies :)
Xmit pwr & receive sensitivity being related?
It should not be related at all; typically the radio shuts off the receiver chain while the transmitter is transmitting.
Look at the manufacturer you're referencing; this is not exactly enterprise-class gear (IMHO). You get what you pay for, right? I think this is just some misguided marketing PDF that has nothing to do with the reality of the gear.
As was mentioned already, they really don't have anything to do with each other. Just because some people hear better than others doesn't mean it affects how loud they talk.
Now with that being said, if a company was using some form of automated transmit power control, it would lower or raise its TX power depending on how it hears other stations.
Just like being in a crowded noisy room, you may talk louder when you have a harder time hearing people because you figure they can't hear you either.
Most of the time that should not be used, though; it doesn't work as well as the salesmen tell you.
Changes in data rate do not depend on the amount of power the station is sending out. The station shifts to a different modulation and coding scheme so the weaker transmission can be decoded more reliably.
Kind of like an eye chart: when you are up close you can see a full line very clearly. If you back up, your eyesight is the same, but now you can only make out the big E on the same-size chart. Switching to a lower transmission rate essentially makes it easier for the station to see the message being sent to it. Just like the big E, it can still see something, but only a fraction of the letters it could make out when up close.
In fact, a constellation chart is very similar to an eye chart.
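The rate-shifting mechanism described above can be sketched in a few lines. The SNR thresholds here are rough, textbook-style figures for illustration only (not from any vendor's datasheet); the point is the shape of the mechanism: a simpler constellation tolerates a worse link, so the radio steps down the table until it finds a rate it can sustain.

```python
MCS_TABLE = [
    # (MCS index, modulation/coding, approx. required SNR in dB - illustrative)
    (7, "64-QAM 5/6", 25),
    (5, "64-QAM 2/3", 21),
    (3, "16-QAM 1/2", 14),
    (1, "QPSK 1/2", 7),
    (0, "BPSK 1/2", 4),
]

def pick_mcs(link_snr_db):
    """Return the highest-rate entry whose SNR requirement the link meets."""
    for mcs, modulation, required_snr in MCS_TABLE:
        if link_snr_db >= required_snr:
            return mcs, modulation
    return None  # link too poor even for MCS0

print(pick_mcs(16))  # a 16 dB link can't hold 64-QAM, so it falls back
```

Notice that nothing in this decision involves transmit power; it is purely about what the receiver can decode, which is the original poster's point of confusion.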
I ramble, sorry...
Sensitivity and output power are often listed as just two numbers; very rarely is much detail given. For example, a manufacturer may claim 17 dBm (50 mW) power output and -83 dBm sensitivity for their AP.
In reality, the power and sensitivity levels vary, at least somewhat, by channel. So are these average numbers, the best numbers, or what? How much do they vary by channel?
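The dB arithmetic behind those paired numbers is worth keeping handy; a quick sketch, using the 17 dBm / 50 mW figure from above:

```python
import math

def dbm_to_mw(dbm):
    """Convert power in dBm to milliwatts."""
    return 10 ** (dbm / 10)

def mw_to_dbm(mw):
    """Convert power in milliwatts to dBm."""
    return 10 * math.log10(mw)

# The usual spec-sheet pairing checks out:
print(round(dbm_to_mw(17), 1))   # ~50.1 mW
print(round(mw_to_dbm(50), 2))   # ~16.99 dBm
```

Because the scale is logarithmic, a 1 dB spread across channels is already a ~26% difference in actual radiated power, which is why the per-channel detail matters.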
You can often get a hint of a product's design/manufacturing tradeoffs when you see the real measurements. One manufacturer might stress level power output, while another stresses good sensitivity.
So when you see some numbers: on what channel and at what rate were they measured? What firmware level was running, and, when it comes to power output, which country was the destination? For a good example, look at Cisco's 802.11n AP specs and you will see fairly comprehensive lists broken down by rate and sensitivity.
Some manufacturers provide power levels using FCC measurement techniques rather than those required by the IEEE. How in the world can you compare these? I would argue that you can't, really. A simple power meter can't make a good Wi-Fi power measurement; they are based on different technology. If nothing else, a good meter might cost thousands of dollars, and a good WLAN analyzer will cost tens of thousands.
As for a correlation between the two, I have often seen radios with good power output and poor sensitivity, and vice versa. Although, if I had to choose between power and sensitivity, I would go for the latter.
When it comes to different rates, it is entirely possible for a radio to have different sensitivity levels at each rate, or at least levels that vary by modulation type (BPSK, QPSK, 16-QAM, etc.). The same goes for power levels at those same points. The levels are often under the control of firmware, but how many manufacturers actually take the time to perform all the radio and compliance testing that this control would require? Just because a chipset vendor makes a capability available doesn't mean the radio manufacturer, that is, the company that packages it all together, will take advantage of it.
As far as your question regarding the two MCS ranges goes, the FCC and other authorities reduce the power limits when multiple radio chains (i.e. multiple streams) are active in the same device.
Keep up the good questions, and good luck in your studies.
The replies to this should have covered your questions, but here's an attempt to synthesize it down a bit:
Tx Power and Rx Sensitivity are generally not related in any piece of equipment; they are two different subsystems.
Tx Power changes based on the modulation/coding scheme - Howard's explanation of this is right on the money. The bottom line is that you have to pay for what you get: higher data rates generally mean having less available power, but only in the sense of the fixed Tx Power figure. The key is the power per bit/symbol, but that gets into a slightly different question.
Matthew Gast touches on this in 802.11n: A Survival Guide.
From Chapter 2
To ensure that transmission and reception performance is closely matched, a condition referred to as a symmetric link, system designers must match the transmit and receive amplifiers; if especially powerful transmit amplifiers and high-gain antennas are used, designers must also ensure that the receive side has equal capabilities.
What I gathered from the section entitled "Radio Chains" is that Gast was making the point that enterprise-class APs need to balance their Tx power and Rx sensitivity, to ensure that clients that can hear the AP can also be heard by it, and vice versa.
I hadn't thought of "symmetric link" from a sensitivity point of view - only transmit power. But it makes sense.
The previous sentence in his book starts out: "Generally, an Access Point will have much higher quality components...".
My interpretation of this is that there are not many big companies that would spend the extra money on the output amplifier without also spending the money to make a high-quality (i.e. more sensitive) receiver.
Indeed, high-quality APs may have sensitivity levels ten dB or more better than a mass-produced client device, making it much easier to receive the lower-powered signal from the client.
If you use receiver sensitivity alone as a benchmark, the numbers may appear very similar between a very good radio receiver and a bad one (a number such as -97 dBm @ 1 Mb/s, etc.). The real difference shows up in better "adjacent channel rejection" and better "dynamic range". Interestingly enough, these stats are not typically publicized. The differences will show up more in noisier environments.
Adjacent channel rejection is the ability of the receiver to reject an extremely strong signal on a totally different channel.
If anyone here is old enough to have used a CB radio: this would show up if, say, you were on channel 1 but someone very close to you (say, in the same parking lot) was on channel 40; you would still get their "bleed over" crunching in.
Dynamic range is the span between the weakest and strongest signals the receiver can handle. The minimum is the -97 dBm sensitivity floor, but the maximum is set by very close signals (such as -20 dBm). Signals can actually get too strong: you'll notice throughput actually decreases when you're too close to an AP, and this is why.
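Using the two illustrative figures from this post, the arithmetic is simple, but the linear power ratio it implies is striking:

```python
# Illustrative numbers from the post above, not a specific radio's spec.
sensitivity_dbm = -97   # weakest decodable signal (e.g., at 1 Mb/s)
saturation_dbm = -20    # a very strong, too-close signal
dynamic_range_db = saturation_dbm - sensitivity_dbm
print(dynamic_range_db)  # 77 dB

# In linear terms, a 77 dB span is a power ratio of about 50 million to 1,
# which is what the receiver's front end has to cope with gracefully:
print(round(10 ** (dynamic_range_db / 10)))
```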
Some very good points Daniel.
I wouldn't make sensitivity the only criterion, but for two client devices with only 1 or 2 dB of (conducted) output power difference, I have usually found that the sensitivity values at the same rate are a better predictor of range.
The only time I have seen problems with dynamic range, using RF lab test gear, is when there has either been a firmware or component change, or one caused by a manufacturing error - like incorrect interior cable routing.
The problems don't even have to be from changing an RF component. It could be a system oscillator, or a data cable that got changed-out because it was less expensive.
A small difference may cause the range to decrease from over 300' to less than 30'. I've also seen changes that reduced the working range to between 30 and 200 feet.
Personally I have not seen any problems caused by poor adjacent channel rejection, but I do keep testing for it. That is not to say that I have not seen any problems from poor channel reuse in an installation. I've just not seen any with the measurements I've taken.
As far as published data goes, I take it all with a grain of salt. If nothing else, a particular client device manufacturer rarely has the "perfect" installation that will match the setup the radio manufacturer had to make their published measurements.
Good input my friend!
Question though - you mentioned: "A small difference may cause the range to decrease from over 300' to less than 30'. I've also seen changes that reduced the working range to be between 30 and 200 feet".
Assuming no obstructions (FSPL alone): 30 feet x 2 x 2 = 120 feet (doubling the distance adds 6 dB of FSPL, so 4x the distance adds 12 dB). That, to me, is not a small difference: it would take a difference of more than 12 dB to produce that much change in range.
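The 6 dB-per-doubling rule can be checked directly against the free-space path loss formula. A quick sketch; the 2.437 GHz value is just an illustrative 2.4 GHz channel, and since only distance ratios matter here, the unit (feet or meters) cancels out:

```python
import math

def fspl_db(distance, freq_hz):
    """Free-space path loss: 20*log10(4*pi*d*f / c) in dB (d in meters)."""
    c = 3e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance * freq_hz / c)

f = 2.437e9  # illustrative 2.4 GHz channel

# Doubling the distance adds ~6.02 dB, regardless of frequency:
print(round(fspl_db(60, f) - fspl_db(30, f), 2))    # 6.02
# So 4x the distance (30 -> 120) costs ~12 dB,
print(round(fspl_db(120, f) - fspl_db(30, f), 2))   # 12.04
# and 10x the distance (30 -> 300) costs a full 20 dB:
print(round(fspl_db(300, f) - fspl_db(30, f), 2))   # 20.0
```

So a drop from 300' to 30' of range corresponds to roughly 20 dB of lost link budget in free space, which supports the point that these are not small differences.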