Is the SNR used by the CHD algorithm based on the value reported to the AP, as measured by the client, or is it the value measured by the AP for each client?
It would seem like the former, rather than the latter, but the Cisco document I found was written a little ambiguously.
If it is the latter I could see low powered clients, mixed in with adequately powered clients, causing network problems.
I believe it's based on the RSSI of each client as measured at the AP.
According to what I read about Cisco's RRM, an AP can decide to increase its output power based on the low-level signal it is receiving from a client. The lower the RSSI, the higher the AP output power can be.
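To make my reading of that behavior concrete, here is a hypothetical sketch. The function name, the target RSSI, and the power cap are my own illustrative inventions, not anything from Cisco's document:

```python
def suggested_ap_power_dbm(client_rssi_dbm, current_power_dbm,
                           target_rssi_dbm=-67, max_power_dbm=20):
    """Hypothetical sketch of RSSI-driven power adjustment: the weaker
    a client looks to the AP, the more the AP raises its own output.
    All thresholds here are made up for illustration."""
    deficit = target_rssi_dbm - client_rssi_dbm
    if deficit > 0:
        # Client is below target: raise AP power, capped at the maximum.
        return min(current_power_dbm + deficit, max_power_dbm)
    # Client already at or above target: leave power alone.
    return current_power_dbm

print(suggested_ap_power_dbm(-80, 14))  # weak client -> 20 (capped)
print(suggested_ap_power_dbm(-60, 14))  # strong client -> 14 (unchanged)
```

Note that nothing in this logic touches the client's transmit power, which is exactly the asymmetry I'm questioning below.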
This seems to be in contradiction to the idea of having a balanced link.
I know I have seen some comments lately saying the idea of needing a balanced link is nonsense, but this flies in the face of everything I've heard previously, except maybe from the same people who say site surveys are not necessary.
I know for a fact that we have seen sites where the client power level is so low that the client can see the AP just fine, but not the other way around. It would not do any good at all, in this case, to boost the AP power. Now if you were commanding the client to up its power level, that would be another thing. But that isn't what seems to be going on with RRM.
Can anyone out there comment on how well this feature actually works in the field?
The only other comments I have seen seem to say disable RRM, or constrain it immensely, especially on a VOIP network.
Works well. Most clients these days are ClientLink capable.
Cisco ClientLink 2.0's innovative beamforming capability improves SNR for all 802.11 clients. ClientLink 2.0 technology takes the industry norm one step further and improves performance in the downlink direction, making any client better able to hear the access point. The wireless channel is reciprocal, meaning that the transmission path from the access point to a client, and from that same client to the access point, have sufficient similarity in each direction. Thus the access point can use the adjustments calculated by a maximal ratio combining algorithm (referred to as "weights") to optimize the reciprocal signal transmitted back to that specific client using the access point's four transmit antennas. This technology essentially learns the optimum way to combine the signal received from a client, and then uses that information to send packets in an optimum way back to the client. This increases the downlink signal-to-noise ratio (SNR) and the data rate over range, thereby reducing coverage holes and enhancing the overall system performance.
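The maximal-ratio-combining idea in that quoted passage can be sketched numerically. This is an illustrative model only, not Cisco's implementation; the channel gains and noise power are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up complex channel gains for the AP's four receive chains,
# standing in for what the AP estimates from a client's uplink frames.
h = rng.standard_normal(4) + 1j * rng.standard_normal(4)
noise_power = 1.0  # identical noise on every chain (illustrative)

# MRC weights: the conjugate of each chain's channel estimate.
# By channel reciprocity, the same weights can be reused to shape the
# downlink transmission toward that client.
w = np.conj(h)

# Post-combining SNR is the sum of the per-branch SNRs, which is always
# at least as good as picking the single best antenna on its own.
snr_mrc = np.sum(np.abs(h) ** 2) / noise_power
snr_best_branch = np.max(np.abs(h) ** 2) / noise_power
print(snr_mrc >= snr_best_branch)  # True
```

The key point for this thread: the SNR gain comes from combining across antennas, not from turning up transmit power.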
It appears ClientLink gets its improvement from using bi-directional Maximal Ratio Combining (MRC), and NOT the power increase techniques I was wondering about in RRM. From what I've read, it sounds like MRC is a much cleaner technology, RF-wise, than RRM.
I have not used or tested MRC myself, but it sounds great. Maybe if my company goes to "ac" someday, I'll get a chance to test it.
I wouldn't be surprised if Cisco were to dump RRM in favor of ClientLink wherever it can.