“Will Controllers Go Away?” and Other Industry Questions
By CWNP On 09/27/2010 - 30 Comments
I’d like to pose a question, and then some more questions. Many of you started reading the CWNP blog back when the Devinator started writing it a few years ago. He and I are still good friends, and I often look back on his decision to jump into the vendor-specific game from a critical vendor-neutral perspective. By “critical” I don’t mean that I am criticizing him for making that decision; rather, I’m talking about using critical reasoning to deconstruct the merits of his choice to go to Aerohive (assuming that he could have jockeyed for a position at any vendor for any predominant reason). Like many of you, I have a vested interest in the momentum of the Wi-Fi industry as a whole, the changes in technology, vendor positioning and market share, and the like. Thus, since I know he’s OK with it, I’d like to hear your opinions about his decision to plant his flag with a controller-less architecture.
In his kickoff blog at Aerohive, Devin said that one of his key areas of interest in choosing a vendor was their “technology” and that he had to answer these questions: “Is the company creating a platform that’s based on solving tomorrow’s infrastructure problems? What is its technical and cost model differentiation? What has been their success to-date?”
In answering these questions, one of Devin’s big sticking points was the controller-less architecture, which Aerohive has pioneered. Aerohive and Xirrus were the original frontrunners of the distributed WLAN architecture, and now Bluesocket has joined their ranks. Even Meraki has moved the controller to the cloud, which is a shift in the traditional perception of a controller’s centrality. Other vendors that have traditionally focused on centralizing the data plane are beginning to restructure their architecture with distributed data forwarding as well. Is this a natural evolution that will continue until the controller is phased out completely?
Want my highly-biased opinion? Sure you do. I think Devin made a pretty good choice in the architecture department. If you read into my hemming and hawing about architecture, you can probably see that I think the WLAN controller will become antiquated…eventually. I don’t think a market-wide architectural change is right on our doorstep, but it is on the near (2-3 year) horizon. Here’s why:
1. Distributed networking is most appealing when a centralized aggregation point becomes untenable. 802.11n will bring (and is bringing) this argument to the forefront, but not while so many 802.11a/b/g devices are still out there, and not while one or two spatial streams (SSs) remain the norm. When three and four SSs become the norm (or 802.11ac comes to fruition), and Wi-Fi is the primary client access method, distributed data flow, with policy-based enforcement, will be necessary. And when that happens en masse, end users will start to wonder why the controller needs to be there. Then vendors will be forced to change the architecture, if they haven’t done so already.
2. Every large WLAN vendor sells a centralized architecture. Aerohive, Bluesocket, and Xirrus (and Meraki, if you want to lump them in) represent such a small percentage of market share that their tiny marketing squeaks are inaudible in comparison to the others, Cisco especially. Unless Cisco, HP, Motorola, or Aruba jumps on this bandwagon in full force, the other big players won’t even budge as long as they can continue selling controllers and extra controllers. Even when they do make the shift, I think they’ll be very reluctant to remove the controller completely. Why stop making money when it’s so easy?
I think three things must happen to inspire this shift. First, a large vendor must be heavily motivated (with dollar bills) to steal market share in the Wi-Fi space and proceed to buy a small vendor with this technology or create it themselves. Second, that large initiating vendor must whip out the marketing bullhorn and put some energetic noise behind this architecture. Third, client Wi-Fi devices must continue to increase along with application bandwidth requirements.
3. Many buyers of WLAN products don’t really know what a WLAN controller does. So, as long as Cisco sells it, they will buy it. That is the beauty of being Cisco. Until more people realize that other architectures are available, they will not care or know to care that they are paying for hardware that can be abstracted to software. This is largely an education and awareness process.
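The aggregation argument from point 1 above lends itself to a quick back-of-envelope calculation. The figures below are my own illustrative assumptions, not numbers from this article: roughly 150 Mbps of 802.11n PHY rate per spatial stream (40 MHz channel, short guard interval) and a rough 60% goodput efficiency. The point is the shape of the curve, not the exact values:

```python
# Illustrative assumptions (not from the article):
PHY_PER_STREAM_MBPS = 150   # ~802.11n top PHY rate per spatial stream
MAC_EFFICIENCY = 0.6        # rough fraction of PHY rate seen as goodput

def aggregate_goodput_mbps(num_aps, spatial_streams):
    """Total WLAN goodput that would all funnel through a central
    aggregation point in a controller-based data plane."""
    per_ap = PHY_PER_STREAM_MBPS * spatial_streams * MAC_EFFICIENCY
    return per_ap * num_aps

# 100 APs: one spatial stream vs. three
print(aggregate_goodput_mbps(100, 1))  # roughly 9,000 Mbps
print(aggregate_goodput_mbps(100, 3))  # roughly 27,000 Mbps
```

At one spatial stream, a 10 Gbps controller uplink can still keep up with 100 APs; at three streams the offered load overruns it, which is exactly when centralizing the data plane stops making sense.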
As it relates to Aerohive, I wonder how they will fare when other vendors begin adopting the same technologies that currently provide them with differentiation. Will they be able to stand out in a market with less architectural distinction? Will they be purchased before that time, ushering their current technology into a place of greater prominence?
Craig Mathias also discussed the architecture landscape topic recently. He basically said that there’s no definitive benchmark (proven advantage) at the moment as it relates to performance, technology, or cost from an architectural perspective. He also argues that because radio performance is inherently dynamic and the industry is continually changing, it's very difficult to come up with a reliable test that is both reproducible and valid for a long period of time. While there may not be a conclusive test, I disagree with him about our ability to make conclusions about the demonstrated advantages of different architectures. In fact, I think the architectural argument currently teeters towards the distributed side of the camp. Based on his reasoning, no wireless benchmark test should stand up to scrutiny.
Of course, I would tend to agree with him about the relative equality of architectural performance in real network environments and sponsored tests today. At present, there’s little to no demonstrable differentiation in performance, and I suspect that this is largely due to two facts:
• Most network environments today still see only light to moderate utilization of the WLAN.
• Vendor-sponsored and third-party tests usually include fewer than 10 APs.
My contention is that when traffic loads increase (with more users and/or more demanding applications), the differences in performance between distributed and centralized architectures will become more salient. Craig Mathias also points out an important differentiation in the architecture debate. That is, there are two different arguments. On the one side, we’re talking about architectures that support distributed data flow (almost all do, by now); on the other side, we’re talking about those few that support distributed control functions. Today, distributed data flow is the biggest issue in maintaining performance with heavily utilized networks. The process of distributing some data flow with limited QoS and security policy application is easy enough for most vendors, but they’re somewhat handcuffed in capacity by their current architecture and hardware platforms.
As I see the issue: once you’ve offloaded data forwarding to the AP, what’s the use of a controller? Traditionally, it handles control functions like coordination of radio resources and management of keys in 802.1X. If your APs can handle these additional processing requirements (which are minimal), I don’t see any reason to hang on to the controller.

As a case in point, consider resiliency (the performance metric of uptime). Distributed models are inherently resilient because distributed network nodes (APs) do not rely on any central point of contact for network operations (either data forwarding or control). If one AP fails, other APs simply stand in the gap (adopting stranded clients, filling RF coverage with dynamic RF, etc.), and the performance hit on the network is minimal. In centralized models, however, large numbers of APs are dependent upon a central controller for network functionality (data forwarding, control functions, or both). So when one controller fails, you lose some or all functionality for a group of APs, or all those APs fail over to a standby controller. Fail-over to a redundant controller is the best approach for the sake of uptime and optimization of services, but consider the cost. Not only do you have the cost of the primary controller, but you also have the cost of backup (or clustered) controllers. Consider also that many distributed sites may each require their own controller. With many controllers costing near $20k a pop, the heavy iron gets expensive. So even if you move data forwarding to the edge and address the performance argument that 802.11n has made more pressing, you still have the cost argument to overcome. That is, you still require expensive WLAN controllers for functions that can be abstracted to software.
Of course, the larger vendors can drop their shorts on price and maybe the cost balances out, but this isn’t so much an architectural advantage as it is a big business advantage related to volume, profit margin, and business diversity. The architectural cost advantage seems fairly intuitive to me.
However, as we look at the big picture of industry momentum and opportunity, I’m persuaded to shift topics a bit here and give two thumbs up to a late-VC-funded vendor by the name of Ruckus Wireless. While they don’t really target the largest enterprises like many of the other vendors already mentioned in this article, they have a pretty strong sell for many other markets, including emerging segments. Ruckus has something that gives them a unique edge in expanding markets, and that is exceptional RF performance. Follow me down this road.
You’ve all heard plenty about Ruckus’s dynamic antenna array from a technical standpoint, but what we haven’t really talked about are the doors their array opens in market verticals. Consider that while Ruckus’s indoor WLAN market share is still fairly paltry, indoor Wi-Fi is a small percentage of Ruckus’s business. They’re also a leader in the outdoor Wi-Fi market (market share leader in Wi-Fi mesh shipments to service providers in 2009, according to Dell’Oro), and that market is only growing. We’ve all heard about, and experienced, the congested cellular network woes and how Wi-Fi is stepping in as an alternative offering in high-density metro and public areas. Though Wi-Fi is less pervasive and broadly appealing to the consumer crowd than cellular technologies, cellular’s rapid expansion is driving rapid growth in Wi-Fi offload, a niche where Ruckus fits like the proverbial glove.
Also, Ruckus is in good with broadband service providers. Ever heard of AT&T U-verse? They use Ruckus APs to deliver their carrier services (video, voice, and data) throughout many homes, typically with one AP per TV, plus one. There are plenty of smaller service providers out there installing Ruckus gear as well. Ruckus was the first vendor with dual-polarity outdoor 802.11n APs, which is a big deal for high-performance broadband wireless, including last-mile broadband delivery. In addition to these specialties, Ruckus launched with the priority of delivering media in homes and businesses. They’ve capitalized on that market already, and if future trends in media (especially video) consumption hold, demand is only rising. Specializing in HD video and other media content delivery is a strong position, and it should also help Ruckus gain market share with enterprises and hotspot-focused customers (hotels, public areas, airports, etc.).
All of these market specializations are afforded by some proprietary and very sophisticated antenna technologies that improve reliability and throughput at range. That means more bang for the customer’s buck with fewer APs. No one else has this type of technology (read “leverage”) in these expanding markets, which is why Ruckus is very well positioned right now and growing rapidly. Good RF engineers are hard to come by these days and every WLAN vendor is dying to improve RF performance. Ruckus has a big head start.
So, here are some of the questions. Is Devin riding the right horse (from a vendor and architecture perspective)? And the inevitable follow-up question, why? Where do you think the market is going? If you had to jump to a specific vendor from a vendor-neutral position, which one would it be? What are the primary persuasive factors for such a choice?
Debate at will, but libel ain’t cool. Let’s keep it professional.
PS. I finished writing this article last week, but deferred posting to today. I just found out about this video, which includes a discussion about Motorola's architecture.

Tagged with: beamforming, xirrus, HP, ruckus, aruba, Aerohive, Wi-Fi, Cisco, motorola, Meraki, WLAN, architecture, distributed, centralized, evolution, Bluesocket, control