If I have a client device that stops "talking" to its AP, how long (after the device actually dies, or is turned off) does the AP wait before it removes the client and logs the event to the AP log file?
Unfortunately the equipment was unattended at the time, so no one was present to see the problem, but from another set of unrelated traces I saw several wireless network glitches.
Several minutes later, the AP's log shows: "Packet to client xx.xx.xx reached max retries, removing the client"
The AP retry count is set to the default value of 64. The AP is a Cisco 1200 Series. The WLAN is very lightly loaded. The client is nearby, and its usual connection rate is 54 Mbps. All the b/g rates are enabled.
It looks like the log message entry, and the client removal, don't happen until about four minutes after the problem actually occurred.
I can see where a delay might be intentional and justified, but does anyone have a more definitive explanation of this behavior?
Thanks in advance.
Is this autonomous or controller?
On the controller, the user idle timeout is what almost always deletes the client record. I know this because the WLC doesn't honor deauth frames from a client device, while autonomous APs will honor the deauth. In other words, if a client device on the WLC sends a deauth, nothing happens on the WLC; it waits for the idle timeout to expire and then expires the record.
Thank you very much for your response !
This is a lightly loaded autonomous AP, and the device supposedly never issues de-auths.
On the AP's Activity Timeout page under Association, I found the Client Station Default was set to 600 seconds, i.e. 10 minutes. There is no maximum Timeout set. The other four default timeouts on this page each show 60 seconds.
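For reference, the same settings should be visible from the autonomous IOS CLI. A hedged sketch of where they live (command names recalled from the autonomous AP command set, so verify the exact syntax against your IOS release):

```
! Global config: activity timeout for ordinary client stations, in seconds
dot11 activity-timeout client-station default 600
!
! Per-radio config: transmit retry limit before the AP gives up on a client
interface Dot11Radio0
 packet retries 64
```

Checking these in the running config is a quick way to confirm that the GUI values are actually what the AP is enforcing.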
I just went back to my original (sparse) data, and if I try to correlate the time to the point at which I absolutely first started recording excessive packet delays, I get an 8-minute delay. At least that's closer to 10 minutes :) .
Part of the difficulty is that I have two different networks, each with its own time server. Because of limitations in my data, I can only pin down the difference between the two of them to plus or minus one minute. On this particular day, they were different by 14 minutes. I don't have control over them, and the one server seems to lose time frequently. This has caused problems before, so I need to get IT to straighten it out once and for all.
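The clock-offset correction above is simple arithmetic, but it's easy to drop a sign. A small sketch of the calculation (the timestamp values here are hypothetical examples, not my actual captures):

```python
from datetime import timedelta

# Measured difference between the two networks' time servers
# (network B's clock runs 14 minutes ahead of network A's).
offset = timedelta(minutes=14)
uncertainty = timedelta(minutes=1)  # offset is only known to +/- 1 minute

# Hypothetical raw interval: AP log timestamp (network B clock)
# minus the onset of packet delays (network A clock).
delay_raw = timedelta(minutes=22)

# Corrected delay between the fault and the client-removal log entry.
delay_corrected = delay_raw - offset

# Error bounds inherited from the offset measurement.
lo = delay_corrected - uncertainty
hi = delay_corrected + uncertainty

print(delay_corrected, lo, hi)
```

With these example numbers the corrected delay is 8 minutes, give or take a minute, which is at least consistent with a 600-second activity timeout.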
I am tempted to lower the default Client Station Timeout, for debugging purposes, but I don't know why it was set to 600 to begin with, and I don't want to cause any new problems.
What kinds of problems might I have, or encourage, if I were to lower this value to also be one minute?
Because the AP is so lightly loaded, I doubt it would run out of any resources.