Forum

  • I often get questions about why theoretical power measurements and requirements so often seem at odds with reality. Well, of course, it depends on your point of view.

    Much of the time, I am using a WLAN analyzer to measure output power from various radios, and I am doing this in conducted mode (i.e., a direct connection to the radio's antenna connector). My intermediate cables are also carefully calibrated.

    Why all the precautions? Because this is the defined method for taking power, and many other measurements, according to the IEEE 802.11 standard. Nevertheless, I can also get fairly good numbers radiated (with the antenna attached) under very carefully controlled circumstances inside an RF enclosure of some kind.

    So now comes the interesting part. In either environment, the numbers will appear very different depending on whether you use a normal (meaning not WLAN-specific) spectrum analyzer, a client card, or the purpose-built WLAN analyzer.

    And why is that?

    One answer is that these digital signals are, by definition, bursty. Their measurements have to be made as "gated" measurements - that is, within close tolerances of the true start and end of the signal's generation. A normal SA, say one used for FCC compliance testing, is not doing this. Using such an SA to measure the output power of, say, an 802.11b/g signal will give most people a very different number than what the WLAN analyzer provides.

    All in all, if you really want to compare one radio's output to another's, you need equipment that is capable of measuring these "bursty" signals.
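
    To make the duty-cycle effect concrete, here is a minimal Python sketch - my own illustration with made-up numbers, not any analyzer's actual algorithm - comparing a gated average against a whole-window average on a simulated burst:

    ```python
    import numpy as np

    # Sketch of gated vs. non-gated power on a bursty signal. The burst model,
    # duty cycle, and power level are made-up illustration values.
    n = 20000                              # capture window: 1 ms at 20 Msps
    burst_start, burst_len = 4000, 5000    # burst is on for 25% of the window

    rng = np.random.default_rng(0)
    x = np.zeros(n, dtype=complex)
    # Stand-in for the 802.11 burst: complex noise scaled to 100 mW mean power
    burst = rng.standard_normal(burst_len) + 1j * rng.standard_normal(burst_len)
    x[burst_start:burst_start + burst_len] = burst * np.sqrt(0.1 / 2)

    def dbm(p_watts):
        return 10 * np.log10(p_watts / 1e-3)

    gated = np.mean(np.abs(x[burst_start:burst_start + burst_len]) ** 2)
    ungated = np.mean(np.abs(x) ** 2)      # what a non-gated average sees

    print(f"gated (burst only):     {dbm(gated):6.2f} dBm")   # ~20 dBm
    print(f"ungated (whole window): {dbm(ungated):6.2f} dBm") # ~14 dBm
    # The gap is 10*log10(duty cycle) = 10*log10(0.25), about -6 dB: the
    # non-gated reading is dragged down by the idle time around the burst.
    ```

    The longer the idle time between bursts, the bigger the disagreement - which is exactly the pattern people see when they put a compliance-lab SA next to a WLAN analyzer.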

    Now to measurements made using the numbers (RSSI) returned by client cards and the like. The values returned by these devices are controlled by the baseband part of the radio and the RSSI calculation method used. The CWNA text goes into those problems, so I won't dwell on this.

    But another difficulty here is where in the frame's lifetime RSSI is measured. The radio's baseband segment is usually controlled by another chipset, from someone like Marvell. The chipset package is measuring RSSI over a small segment of the total signal - say the preamble and SFD, or one of the training fields.

    So even though this is consistent for this particular radio, this particular antenna, and everything else specific to this single device in this physical setting, it is not the same as the IEEE measurement, which is taken over the entire frame.

    If there is anywhere power variation between radios is likely, it's in the first few microseconds after a radio powers on. The IEEE has limits on these excursions, and its measurement methods take this period into account (they jump past it). But it is still there.
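
    A short sketch along the same lines shows how the averaging window alone - preamble only, whole frame, or frame with the power-on ramp skipped - produces three different numbers for one and the same frame. The ramp length, window sizes, and envelope shape are assumptions for illustration, not any chipset's documented behavior:

    ```python
    import numpy as np

    # Illustration of how the RSSI averaging window changes the result.
    ramp = 200        # power-on ramp: ~10 µs at 20 Msps (assumed)
    preamble = 320    # chipset-style window: ~16 µs of preamble/training fields
    frame = 4000      # full frame length in samples

    rng = np.random.default_rng(1)
    x = (rng.standard_normal(frame) + 1j * rng.standard_normal(frame)) / np.sqrt(2)
    env = np.ones(frame)
    env[:ramp] = np.linspace(0.2, 1.0, ramp)   # amplitude ramps up at power-on
    x *= env

    def avg_db(seg):
        return 10 * np.log10(np.mean(np.abs(seg) ** 2))

    print(f"chipset-style (preamble window): {avg_db(x[:preamble]):6.2f} dB")
    print(f"whole frame:                     {avg_db(x):6.2f} dB")
    print(f"ramp skipped (IEEE-style):       {avg_db(x[ramp:]):6.2f} dB")
    # Each number is internally consistent, but none is directly comparable
    # to the others - the same trap as mixing RSSI with analyzer readings.
    ```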

    Other variations across the frame can take place too, and can be related to temperature, the radio's age, and a bunch of other parameters. Luckily for us, manufacturers are getting much better at all of this.

    So what is the real point of this post? It is that SNR requirements can be tricky to set. It all depends on whether we are using RSSI values, manufacturers' spec sheets, or an SA measurement of some kind.

    To get a really good idea of the fudge factors involved in specifying SNR, look into the theoretical SNR numbers that various modulations require. You will find they are significantly less than what people really use.
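
    One way to see that gap is to compare the information-theoretic floor - the Shannon minimum SNR for a given spectral efficiency, from C = B log2(1 + SNR) - against typical planning values. The planning numbers below are commonly quoted rules of thumb that I am assuming for illustration, not measurements:

    ```python
    import math

    # Shannon-limit floor vs. assumed rule-of-thumb planning SNRs for a few
    # 802.11a/g rates in a 20 MHz channel.
    BW = 20e6
    rates = {6: 4, 24: 17, 54: 25}   # rate (Mb/s) -> assumed planning SNR (dB)

    for rate, planning_db in rates.items():
        eta = rate * 1e6 / BW                        # spectral efficiency, b/s/Hz
        snr_min_db = 10 * math.log10(2 ** eta - 1)   # Shannon minimum
        print(f"{rate:3d} Mb/s: Shannon floor {snr_min_db:6.1f} dB, "
              f"planning value ~{planning_db} dB")
    ```

    Even allowing for implementation loss, noise figure, and fading margin, the spread between the theoretical floor and the planning value is large - that spread is the fudge factor.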

    This is only one issue; of course, short-term environmental changes, like interference and people walking around, also affect it.
