Forum

  • Hi,

    Two newbie questions. Back in the day, I remember NIC manufacturers going to great lengths to find cheaper ways to balance the signals going to and from their cards over the Cat 5 Fast Ethernet cable plant. The balancing was needed to ensure that the signal played nice on the wire.

    I believe that air presents infinite impedance to an antenna, and was wondering whether any balancing is done for wireless, and if so, how do you balance against infinity?

    The channel separation on 802.11a is much cleaner than on 802.11b/g, and I was wondering if anyone knew what the market motives were that allowed b and g, with their inherent channel slop, to become so much bigger players?

    Thanks,
    Tom

  • By (Deleted User)

    For your first question, what really needs to be balanced against is the noise floor. A good signal-to-noise ratio (SNR) is required for the best communication between RF devices. Better (and less expensive) antenna designs and radio chipsets have improved this immensely over the years. Also, the ability to offload encryption and decryption processing helped the radio's CPU perform much better at getting packets on and off the air.
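    Since dBm is a logarithmic scale, SNR in dB is just the received signal power minus the noise floor. A minimal sketch (the figures below are illustrative, not from this thread):

```python
def snr_db(signal_dbm: float, noise_floor_dbm: float) -> float:
    """Signal-to-noise ratio in dB for powers given in dBm.

    Because dBm is logarithmic, the ratio of two powers is a simple
    subtraction of their dBm values.
    """
    return signal_dbm - noise_floor_dbm

# A client hearing an AP at -65 dBm over a -95 dBm noise floor:
print(snr_db(-65.0, -95.0))  # -> 30.0, a comfortable SNR for high data rates
```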

    Regarding your second question, 802.11b/g solutions "won" in the marketplace because:
    a) they were there first -- .11b hit the market in 1999
    b) they were a whole lot cheaper than .11a products at the beginning
    c) they were available and worked
    d) the range of early .11b/g devices was much better than what you could get with .11a devices (that's since been mostly resolved with better chipsets and antennas)

    Joel

  • The dominance of 2.4 GHz networks over 5 GHz networks was not a matter of technology but of economics. 802.11b was backward compatible with existing 802.11 networks, so companies could spend less to expand what they already had.

    802.11a was ratified before 802.11g. People saw the higher speeds and greater throughput, which is the main concern of users, and wanted that speed for their existing networks. 802.11g used OFDM like 802.11a and could offer the desired speed while still supporting 802.11b stations. When PCMCIA cards were very expensive, this was attractive to buyers. The reality is that mixed networks were, and still are, slower than pure OFDM networks, but they allowed 802.11b and 802.11g clients to coexist. Because 802.11g was backward compatible with 802.11b, existing hardware did not need to be replaced, which saved the business money.

    802.11a offered cleaner airspace and greater throughput, but at a much higher cost. And when users traveled, there was no guarantee, and still isn't, that hotspots would support 802.11a (802.11a/b/g cards were not as widely available then as they are now).

    That is the long answer. The short answer is that a sublayer of the Politics layer of the OSI model (my Layer 8), called the Budget sublayer, quite often prevents common sense and the correct hardware from entering the workplace.
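    The "channel slop" Tom asked about falls out of simple arithmetic: 2.4 GHz channel centers sit only 5 MHz apart, while an 802.11b DSSS signal occupies roughly 22 MHz, so channels must be at least five numbers apart to stay clear of each other (hence the classic 1/6/11 plan). 802.11a's 20 MHz channels sit on a grid at least 20 MHz apart, so adjacent channels don't overlap. A small sketch using the standard channel-numbering formula (these are published figures, not from this thread):

```python
def center_mhz(channel: int) -> int:
    """Center frequency in MHz of a 2.4 GHz channel (valid for channels 1-13)."""
    return 2407 + 5 * channel

def overlaps(ch_a: int, ch_b: int, signal_width_mhz: int = 22) -> bool:
    """True if two 2.4 GHz channels' ~22 MHz wide DSSS signals overlap."""
    return abs(center_mhz(ch_a) - center_mhz(ch_b)) < signal_width_mhz

print(center_mhz(6))     # -> 2437
print(overlaps(1, 3))    # -> True: centers only 10 MHz apart, heavy overlap
print(overlaps(1, 6))    # -> False: 25 MHz apart, why 1/6/11 coexist cleanly
```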

  • Hi Joel,

    Thanks for the reply. To what layer did they offload encryption? Layer 2?
    Did early devices do encryption in Layer 1, or am I missing the picture?

    Tom

  • Bryan,

    Layer 8, the Politics layer. Hah, I love it. I remember running benchmark tests where native Intel code ran faster in translated mode on the Alpha than on its own native Pentium platform. Alphas are mostly museum pieces nowadays, even though they technologically kicked arse.

    Thanks,
    Tom

  • Layer 2 headers are for the most part sent in the clear so that wireless devices can find and talk to each other, but the 802.11 encryption that chipsets offload (WEP, and later WPA) scrambles the frame body at layer 2. Anything like a VPN or SSL is layer 3 and up.
