Forum

  • By (Deleted User)

    Per the 802.11 spec, receive sensitivity is a "conducted" measurement (i.e., not radiated, and no antenna involved). The same goes for transmit power.

    It's possible to correct radiated sensitivity measurements made under isolated (chamber) conditions using calibrated loss tables. It's not so easy for power measurements, which lack any kind of "quality" factor - there is no PER (Packet Error Rate) to measure there.

    From my experience you need to be leery of manufacturers who provide little or no sensitivity data for their equipment. It's very easy for a device to have very different readings on each channel, not to mention for every different rate, as you would expect. Do they present the best values (probably), the worst (doubtful), or an average?
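    To make that concrete, here's a rough back-of-envelope sketch in Python. The numbers are made up and this is not a calibrated procedure: it just corrects hypothetical radiated readings with a per-channel loss table and shows how much the best/worst/average choice can change the headline number.

    radiated = {(1, 54): -68.9, (6, 54): -70.6, (11, 54): -68.1}   # dBm, keyed by (channel, rate)
    path_loss = {1: 2.1, 6: 2.3, 11: 2.4}                          # calibrated loss in dB, per channel

    # Level at the DUT antenna connector = generator level minus the calibrated path loss
    conducted = {k: lvl - path_loss[k[0]] for k, lvl in radiated.items()}

    vals = conducted.values()
    print("best (most negative):", min(vals))          # likely the datasheet number
    print("worst:", max(vals))
    print("average: %.1f" % (sum(vals) / len(vals)))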

    I've found the same is true with antennas and gain per channel. The most thorough reports I've seen are from Cisco.

    By the way, the now-rescinded 802.11T spec had lots of additional data on testing Wi-Fi gear in it.

    Cheers

  • Wow. I'm loving reading this thread. I appreciate all of the comments, and BJWhite is likely right that I should restrain my passion a little for the sake of the spirit of this forum. Douglas, GT, Ryan, Marcus, and the others are brilliant technologists and well-respected in this industry...especially by me. I'm guilty of needing to feed my family, so I made the best decision that I could involving where to go after my CWNP days had run their course. I'm obviously biased at this point, though it's certainly best for everyone that I don't spew kool-aid on everyone. :) I have the utmost respect for the CWNP community, and will continue to contribute as I have the time to do so. Please feel free to reach out to me even if it's not Aerohive or Wi-Fi related.

    Big Love,

    Devinator

  • I'm not sure I would say that omni antennas are terrible. In a shared medium and when using a protocol like 802.11, they have distinct advantages. If you're talking about client access, then omni makes more sense than directional...unless you are talking about pseudo-omni (an array of panels)... Neither Xirrus nor Ruckus has a pseudo-omni.

    If you're talking point-to-point (like mesh or bridging can be), then Ruckus's solution makes good sense. Their outdoor bridging solution makes a lot of sense to me.

    If the 802.11 protocol were deterministic (let's say like a polling scenario), then directionals could make more sense. As long as clients need to hear each other for Tx coordination, omni antennas fit with the protocol better. Directionals can increase SNR, but may also experience consequences due to the 802.11 protocol.

    Xirrus's solution is certainly interesting and creative, especially for high-density, but individual APs are a more flexible design without inter-AP interference.

    One thing I really like about these two companies is that they have really nice GUIs. Easy to use. No vendor has it all right, and each has pluses and minuses in particular scenarios. I chose Aerohive for a wide variety of reasons, one of which was their technology. It resonated with me, and I thought it was extremely compelling. There are other good products on the market, and Xirrus and Ruckus are absolutely among them.

    Devinator

  • I like the last post. It makes me think that Devin is still vendor neutral when it comes to forums.

    Omnidirectional antennas work well for indoor enterprise Wi-Fi deployments (when it comes to client density).

    Xirrus is also a good pick, but the $$$$ spent may be on the higher end.

    Ruckus is a good fit when the client density is lower.

    Choosing a product finally comes down to what the customer wants. A single product cannot satisfy all the verticals. I wish we would stop talking about "who is better generally".

    It would make more sense to compare products based on the verticals and their ROI. My 2 cents :)


    Thanks
    Wirelesswizard

  • I'd be interested to hear from manufacturers how they completely define the term sensitivity for Wi-Fi systems. Perhaps Devin can give us some insight.

    When a digital demodulator receives a modulated signal, it processes that signal and outputs a digital data stream. Provided that the signal quality [ we'll call it S/N here ] is sufficient, a digital data stream will be output.

    However, the question here is: "How accurate to the original data is the output stream?"

    In a microwave radio or satellite modem you will get an indication of "carrier lock", but what of the quality of that output stream?

    In other words, between two different manufacturers, both may be able to demodulate the signal, but we do not know the quality of that output stream. The only metric I can think of is to use Bit Error Rate. Subjective qualitative measurements could be done [ e.g. with voice... does A sound better, or does B sound better?... with video... does picture A look better or picture B? etc ]. These are highly subjective though.

    As an analogy, imagine two empty rooms of the same size. In one, standing against the back wall, is a grumpy old man. In the other is one of two remaining young teenagers whose ears have not been damaged by MP3 players. A man enters each room in turn and, from the opposite wall, reads a prepared sequence of text at the same speed and volume.

    At the end of the test the man asks each person if he heard [ was able to demodulate ] what was said. The kid says "Yes". The old man says "Of course!!" [ grumpily ]. Both have "demodulated" the signal, but what about quality? When asked to write down the sequence of words, the kid gets it all correct, but the old man has some words missing, and others wrong.

    The test is run again, but this time the speaker talks louder to the old man [ to compensate for the reduced sensitivity ]. The process is repeated until the old man gets all the words correct.

    The question now is: "What do we define as acceptable as a test metric?" Do we say that the kid and the old man have to get every single word correct in the speech, or do we say one word in ten, or one word in 50?

    This is idealized however, as we do not normally want perfection, due to cost. [ Sure, we could build a system that almost never has errors, but the link budget would show almost impossible physical and monetary costs. We need to know what is "acceptable" and what is not for normal communication. ] With data, perhaps we say one bit in error in every one million; with video, perhaps one bit in error in every ten million. With voice, we may say one bit in error in every one thousand. It will depend upon the customer. Usually though, there are international standards that define "acceptable" values. When an international link is designed, it depends on the customer: is the link just for folks calling each other, or is it an audio channel for a high-quality [ technically speaking ] presidential speech?
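    To put that in concrete terms, here's a toy Python check using the example thresholds from the paragraph above (they are illustrations from this post, not any standard's values):

    THRESHOLDS = {"voice": 1e-3, "data": 1e-6, "video": 1e-7}   # example values only

    def acceptable(errors, bits, service):
        """True if the measured bit error rate meets the service's threshold."""
        return errors / bits <= THRESHOLDS[service]

    # 3 errored bits in 10 million is fine for data, but not for video
    print(acceptable(3, 10_000_000, "data"))    # True  (BER = 3e-7 <= 1e-6)
    print(acceptable(3, 10_000_000, "video"))   # False (BER = 3e-7 >  1e-7)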

    So, in order to define sensitivity, we need to have some form of metric to say "We've demodulated the signal... that's great... but what about the quality?"

    If different manufacturers used different metric values, the whole playing field would be uneven.

    Dave

  • By (Deleted User)

    Hi Dave,

    As I posted before, the 802.11 spec itself has the specific methods for determining Sensitivity and Power levels. Look in sections 15.4.x, 18.4.x, and 17.3.x for the rates, limits and dBm values.

    These are all CONDUCTED tests, meaning no antennas, and the minimum sensitivity is specified for each different rate.

    Personally I use the Anritsu MT8860B and MT8860C WLAN Test sets to perform my tests, but there are several companies who make similar test equipment.

    Another approach uses Spectrum Analysers, RF Power meters, calibrated amplifiers, etc. to perform the same tests.

    The (since recalled) 802.11T recommended practice, aka the Wireless Performance Prediction (WPP) standard, also had some good info in it.

    The real question is: do they round up/down, average, or what? Personally, I want to see results for channels 1, 6, and 11, but except for internal reports at my company, I've never seen any.

    The Anritsu will report hundredths of a dB of variation, but I've never seen a spec reporting something like -65.43 dBm or even -65.4 dBm for sensitivity, only -65.
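    For illustration, here's a crude Python sketch of the search a test set automates: step the generator level down until FER exceeds the 8 x 10^-2 limit, report the last passing level, and note how the datasheet rounding hides the fraction. The DUT model (fer_at) is entirely made up.

    FER_LIMIT = 0.08                          # 8 x 10^-2, per the 802.11 conducted test

    def fer_at(level_dbm):
        """Fake DUT: FER jumps once the input drops below -65.43 dBm (made up)."""
        return 0.01 if level_dbm >= -65.43 else 0.50

    level_cdb = -5000                         # work in hundredths of a dBm to avoid float drift
    while fer_at((level_cdb - 1) / 100) <= FER_LIMIT:
        level_cdb -= 1

    sensitivity = level_cdb / 100
    print("measured : %.2f dBm" % sensitivity)      # -65.43
    print("datasheet: %d dBm" % round(sensitivity)) # -65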

  • devinator wrote:

    I'm not sure I would say that omni antennas are terrible. In a shared medium and when using a protocol like 802.11, they have distinct advantages. If the 802.11 protocol were deterministic (let's say like a polling scenario), then directionals could make more sense. As long as clients need to hear each other for Tx coordination, omni antennas fit with the protocol better. Directionals can increase SNR, but may also experience consequences due to the 802.11 protocol.


    I agree 100% with Devin here. I'm not sure I would say "Omnis are terrible" either. That's a pretty bold and blanket statement.

    It seems everyone is so torqued up about the APs that they forget about the clients. It's a bi-directional problem to solve...don't forget that. I'm not trying to downplay TxBF, but there is more to the total solution than increased range to a couple of clients here and there.


    On the controller bottleneck questions: I've never seen the controller be a bottleneck. Properly designing the network is the first step to it not being a bottleneck. Don't design in a controller that has a single gig port if you're terminating 200 802.11n APs on it. :) Things like that.
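    A quick back-of-envelope (the per-AP load here is an assumption, purely illustrative) shows why:

    aps = 200
    avg_tput_per_ap_mbps = 20            # assumed busy-hour average per 802.11n AP
    uplink_mbps = 1000                   # a single gig port

    offered = aps * avg_tput_per_ap_mbps
    print("offered load: %d Mb/s vs uplink %d Mb/s" % (offered, uplink_mbps))
    print("oversubscription: %.0f:1" % (offered / uplink_mbps))   # 4:1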

    Even in some of my larger customer networks, I've been surprised to see the utilization of the controller uplink ports be lower than what I'd expect...and that's with taking good sampling data.

    But that's just the easiest "bottleneck" to find...port utilization. Other bottlenecks could include encryption overhead and firewall throughput (if so equipped)...but again, the controllers are usually so over-provisioned in this capacity that these possible issues are very rarely actual issues. I'd even go out on a limb and say never, but I don't have the data to back that up. I can say that I've never seen it be an issue.

    Aruba supports multiple forwarding modes anyway...so sometimes the controller isn't in the datapath, depending on the requirements at hand. All data tunnelled back to the controller (tunnel mode), all data decrypted and processed at the AP (local bridging), or policy-based forwarding, where the AP runs firewall code and decides to tunnel or bridge based upon the rules in the user role.

  • Hi bjwhite,

    I'm not entirely sure that Aruba's policy-based forwarding applies to general deployments; I thought it applied only to remote AP (branch office) scenarios. If you have, let's say, a hospital or school with one or more controllers and controller-based APs, Aruba APs don't do local forwarding, but rather send anything that needs to be filtered (firewalled) back to the controller.

    Am I wrong?

    Devinator

  • Aruba's policy-based forwarding currently cannot be applied to non-RemoteAP APs. However, that's just a definition: "RemoteAP" in Aruba-speak just means that the APs use IPSEC for AP-to-controller communication and traffic tunnelling.

    Originally, and even today, Aruba's APs tunnel SSID traffic back to the controller over GRE. Early on, it was slick to make the entire AP-to-controller "conversation" happen inside IPSEC so as to traverse untrusted networks like the internet. That capability was brought out in the 2004/2005 timeframe (called "RemoteAP", which evolved over the years to be referred to as "VBN: Virtual Branch Networking").

    Then, since the AP-to-controller path was completely secured, it wasn't a stretch to allow encryption to happen at the AP (keys from the auth infrastructure now travelled within the IPSEC tunnel), which opened the way for local-bridged and policy-forwarded SSIDs, both of which require encryption to be handled at the AP.

    I have more information than I'm sharing here, so bring this subject up again in a few weeks. :) But let's just say there are cases where it's advantageous to tunnel unencrypted traffic (the full 802.11 frame, all traffic) back to the controller for processing. And there are cases where processing at the AP, with local bridging or tunnelling of just the defined traffic, is advantageous. Aruba does both and covers both bases.
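    To make the bridge-vs-tunnel idea concrete, here's a toy Python model of first-match policy forwarding. This is not Aruba's code or config syntax; the field names and rules are hypothetical.

    RULES = [
        # (predicate, action) - first match wins
        (lambda f: f["dst_port"] == 9100, "bridge"),   # local printers stay local
        (lambda f: f["dst_net"] == "lan", "bridge"),   # local subnet stays local
        (lambda f: True,                  "tunnel"),   # everything else goes to the controller
    ]

    def forward(frame):
        for match, action in RULES:
            if match(frame):
                return action

    print(forward({"dst_port": 9100, "dst_net": "lan"}))   # bridge
    print(forward({"dst_port": 443,  "dst_net": "wan"}))   # tunnel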

    Regarding controllers though.....some people say "ah yeah, but you need controllers" and equate that with some drastic cost. Thing is, most vendors (Aruba certainly) have different-sized controllers to fit the need. There is a controller no bigger than a Linksys router that supports 8 APs and makes the features you get as a whole pretty cost-effective.

    If you have a hospital or school, most likely the AP count would be such that it would make sense to place a controller there anyway...(obviously if you're talking about controller-based vendors.)

    The only place where not having a controller has been advantageous in my experience has been retail. As in, "I have 450 locations but only 1 or 2 APs per store." Putting a controller in every store really doesn't make sense. But placing a RemoteAP or two there in the store certainly makes sense....especially with policy forwarding (for local resources and printers, etc) and persistent SSIDs that can survive WAN failure, 3G/4G backhaul for primary or secondary connections, multiple wired ports, etc.

    Outside of that, controller versus no controller seems to be a religious/vendor war. :) Meaning, at places where there are more than a handful of APs and users.

  • Hi Wlanman09

    That's exactly what I was looking for:

    15.4.8.1 Receiver minimum input level sensitivity
    The FER shall be less than 8 x 10^-2 at an MPDU length of 1024 octets for an input level of -80 dBm measured at the antenna connector. This FER shall be specified for 2 Mb/s DQPSK modulation. The test for the minimum input level sensitivity shall be conducted with the ED threshold set <= -80 dBm.

    15.4.8.2 Receiver maximum input level
    The receiver shall provide a maximum FER of 8 x 10^-2 at an MPDU length of 1024 octets for a maximum input level of -4 dBm measured at the antenna. This FER shall be specified for 2 Mb/s DQPSK modulation.

    They have done this in a sensible manner, by using a Frame Error Rate [ FER ], which more accurately reflects the "bursty" nature of 802.11. On a T-1 point-to-point microwave link, a continuous BER pattern would tend to be used.
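    As a quick sanity check on the 15.4.8.1 math (the error count below is made up):

    frames_sent, frames_errored = 1000, 61   # 1024-octet MPDUs at -80 dBm
    fer = frames_errored / frames_sent
    print("FER = %.3f -> %s" % (fer, "PASS" if fer < 8e-2 else "FAIL"))   # 0.061 -> PASS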

    Tks

    Dave
