Extreme is Bringing Purple Rain from the Cloud

During Networking Field Day 21, Aerohive, I mean Extreme, presented on their new “Cloud Driven End to End Enterprise” using ExtremeCloud IQ, formerly HiveManager. After the acquisition of Aerohive by Extreme there had been lots of speculation in the wireless community about what was going to happen with the product. The most obvious conjecture was that Extreme made the purchase for the cloud technology Aerohive already had, but how would they fold it into the mix with their other offerings?

Abby Strong (@wifi_princess on Twitter) started us off with a quick introduction to The New Extreme and the vision of the company. As Abby started us down the path we got some quick stats on technology users in the world, including 5.1 billion mobile users and USD $2 trillion being spent on digital transformation. Digital Transformation is one of the hot marketing buzzwords in the industry at the moment, but what is it exactly? According to Abby, “Digital Transformation is the idea of technology and policy coming together to create a new experience.” This is what Extreme has been focusing on, but how? Extreme is doing this via their Autonomous Network, using automation, insights, infrastructure and an ecosystem, all wrapped in machine learning, AI and security.


The concept behind this is using the insights and information Extreme has gathered to look at issues that arise in the network and recommend a fix, whether that is a possible driver issue, a code upgrade to resolve a network issue, and so on. This is a really cool combination of automation and insights, which is where most companies in the industry are trying to get. From what was shown at NFD20 in February, and then again at NFD21, I think they are almost there with their expanded portfolio of solutions in Applications, Switching, Routing and Wireless, plus an open ecosystem and open source. Check out more on those solutions and more about Extreme at https://www.extremenetworks.com/products/.

Next, Extreme brought us into their 3rd generation cloud solution, ExtremeCloud IQ, and showed their roadmap towards the 4th generation cloud.

The ExtremeCloud IQ Architecture was presented by Shyam Pullela and Gregor Vučajnk (@GregorVucajnk on Twitter) with a demo of the system.

The architecture is still the previous Aerohive design; however, having never really dug into the product before, I was impressed with how they have built the back-end cloud. Currently Extreme is using AWS to host their infrastructure, but we were assured it is not dependent on AWS and could be run on any cloud provider. The setup is interesting: multiple regional data centers connect back to a global data center. This builds resiliency into the system, allows it to run in any country in which a public cloud can run, and lets Extreme collect analytics and ML/AI data globally rather than just from regional areas. ExtremeCloud IQ can also be run in different formats, public cloud, private cloud and on-prem, to give customers flexibility. From a basic cloud architecture standpoint, there is nothing crazy or unusual in the setup. The key is the scalability designed into the system: using a simple architecture makes it easy for Extreme to just add compute power to the back-end to scale it for large organizations.


With these regional data centers in use, ExtremeCloud IQ is processing data to the tune of 3.389 petabytes per day, from an astounding number of devices and clients, to feed the ML/AI decision-making the infrastructure handles. These stats were mind-blowing to me and really show the power of what Extreme has been building, especially around the Aerohive acquisition.


All of this data gets fed into the cloud dashboard, as we see with the majority of other vendors. The client analytics are very reminiscent of the dashboards we see from Cisco, Aruba, Mist, etc. There is nothing too different in this regard, with the exception of only getting 30 days of data, with no longer retention options available at this point in time. This is not a major hit against the technology; it only means there is no way to correlate data over longer than a one-month period.

One of the differences that I see in the system is the lower number of false-positive issues flagged. The ML built into ExtremeCloud IQ is able to spot anomalies without presenting each one as a possible bad user session. False positives can cause real headaches, especially in a wireless system with users entering and leaving areas while applications are running. I will get deeper into these capabilities in an upcoming post.

The team on-camera also did not back down from some interesting and hard questions surrounding the product roadmaps, where things stand today and announcements that were made within 24 hours of the presentation being delivered.

All-in-all, the solutions and products I am seeing from Extreme are very positive. They seem to have begun the integration of Aerohive nicely, and I am excited to see where they go with the big purple cloud.

 

Security is the New Standard

Everywhere we look today we hear about hacking of servers or email systems, credit card systems being compromised and public Wi-Fi offered as a ‘use at your own risk’ service. With all of the big bads out there, security should be the new standard within wireless.

Security is more than a buzzword

There are so many buzzwords in the industry at this point, 5G, Wi-Fi 6, OFDMA, WPA3 and so on, that security should not become just another one of them. For years wireless security was nothing more than a rotating passphrase, if someone remembered to change it. WEP was finally cracked, which gave way to WPA and then WPA2. But for the most part all devices were still using a passphrase that was proudly displayed on a white board, sandwich board or the like. When wireless was a ‘nice to have’ commodity this was just fine. With wireless now becoming the primary access medium, security is a must. Data moving back and forth between private and public clouds requires better protection than a passphrase. Certificates, central authorization and accounting have become a must, and centralizing these needs into a single system makes securing and monitoring devices within these data-sensitive networks manageable.

How can this go further within the network?

Taking security to the next level

Basic monitoring of security within the network, user logins, MAC authentications, machine authentications, failures, etc., is great for keeping up with what is happening or for troubleshooting when a user is having an issue. But with the risks in today’s networks, both wired and wireless, a deeper level of understanding and monitoring is needed.

This is where a User and Entity Behavioral Analytics (UEBA) system comes into play.

The basics of a UEBA seem simple, but it is a very complicated process. Multiple feeds provided by items such as packet capture and analysis, SIEM input, NAC devices, DNS flows, AD flows, etc. all come into the system and are correlated against rules set up by the security administrators. As this traffic comes in and is analyzed, each user is given a score based on where they are going on the Internet, traffic coming in from and going out to ‘dangerous’ locations (i.e. Russia or China), infected emails that were opened, etc. This score is then updated over time. Once the customized thresholds configured by the administrators are met or exceeded, different actions can be taken on that device: it can be disconnected from the network, quarantined on the network, or an alert can be sent to an administrator.
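To make the correlate-score-act loop concrete, here is a minimal sketch in Python. Everything in it, the event names, weights and thresholds, is invented for the example; it is not any vendor’s actual UEBA engine.

# Hypothetical UEBA risk-scoring sketch; event names, weights and
# thresholds are illustrative only, not a vendor implementation.
from dataclasses import dataclass, field

RISK_WEIGHTS = {
    "geo_flagged_traffic": 25,   # traffic to/from flagged regions
    "malicious_email_open": 40,  # opened a known-infected email
    "auth_failure": 10,          # repeated authentication failures
    "dns_anomaly": 15,           # DNS flows outside the user's baseline
}

QUARANTINE_THRESHOLD = 60
DISCONNECT_THRESHOLD = 90

@dataclass
class UserRisk:
    user: str
    score: int = 0
    events: list = field(default_factory=list)

    def record(self, event_type: str) -> None:
        """Update the running score as correlated events arrive."""
        self.score += RISK_WEIGHTS.get(event_type, 0)
        self.events.append(event_type)

    def action(self) -> str:
        """Map the current score to an enforcement action."""
        if self.score >= DISCONNECT_THRESHOLD:
            return "disconnect"
        if self.score >= QUARANTINE_THRESHOLD:
            return "quarantine"
        return "alert" if self.events else "none"

# Events correlated from the SIEM/NAC/DNS feeds for one user
risk = UserRisk("jdoe")
for event in ("auth_failure", "geo_flagged_traffic", "malicious_email_open"):
    risk.record(event)
print(risk.user, risk.score, risk.action())  # jdoe 75 quarantine

A real system would also decay scores over time and use far richer rules, but the loop of correlating feeds, scoring the user and acting on a threshold is the core of it.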

Total Package

Designing and deploying networks with complete 360º security visibility is no longer an option but a must. With data flowing in and out of private and public clouds, into and out of Internet-based applications, and with the pervasiveness of wireless as a primary access medium, there has never been a more important time to make security a standard and not an afterthought.

Cisco RRM Restart

Recently, while working with Cisco wireless networks, I have been trying to get Dynamic Channel Assignment (DCA) tuned in and to understand much more about it. Some of the important things to make sure you are setting correctly include Anchor Time, DCA Interval (please don’t use the default; there is a blog post coming about that), etc.

One thing that became an option via CLI in the 7.3 code train was the ability to restart the RRM DCA process on the RF Group Leader. Why is this important, I can hear some of you saying, or why would I want to do this? Here are a couple of examples of why.

If a controller enters or leaves an RF Group, or if the RF Group Leader goes down and comes back online, as in a reboot, DCA will automatically enter startup mode to reconfigure channel assignments regardless of the settings that have been changed on the controller, i.e. not using the default 10-minute interval. But is there a need to do this manually? Yes.

As you add new APs into the network it is a good idea, and a Cisco recommendation, to initialize DCA startup mode. The reasoning is that as APs are added, DCA needs to rerun its calculations and provide a more optimized channel plan based on the newly added APs and what the other APs are seeing over the air. When this command is run, it should be done from the RF Group Leader and will only affect the RF Group Leader.

The command should be run on both 2.4 GHz and 5 GHz radios:

2.4 GHz: config 802.11b channel global restart

5 GHz: config 802.11a channel global restart
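If you want to sanity-check the anchor time, interval and current RF Group Leader before and after the restart, the corresponding show commands are the place to look (exact output varies by code version):

2.4 GHz: show advanced 802.11b channel

5 GHz: show advanced 802.11a channel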

Cisco Prime, What is it good for?

By now the majority of us have used some iteration of Prime, NCS, or WCS for wireless management: placing APs on maps, template building, backups, etc. But what else can Prime really do?

I recently did a project where we needed to integrate a new Prime instance with the standard CMX installs, which is a chore in and of itself (a post on that is coming), along with wireless management for the customer’s various buildings and some jobs to back up switch, router and ASA configs. There was then a larger project to push QoS to a large number of switches, around 1,000 or so. APIC-EM was attempted, but there was such a variety of switch models, chassis, IOS versions and QoS abilities, to name a few, that only about half the switches were supported in APIC-EM. Since we had just stood up the new Prime, it was decided to use Prime to push these configs to the switches. Let’s be totally honest before we begin: Prime was not built as a wired network management suite. It was built from the old WCS, pieces were added over time, and we now have this. It is not horrible, but it is not the best for wired either.

Fun now ensues.

Initial thoughts were to just push Auto-QoS to all switches; however, there was a requirement for more granularity. More fun begins. I set out writing config scripts in Prime for a couple of switch models to test on, the 4506-E and 4500-X. Should be simple, right? Take a QoS config, put it in the template, select the switch and go. To write a script in Prime you need some knowledge of Apache Velocity scripting commands, which can be a little confusing in itself if you have not done coding previously, like myself. I was lucky and had someone who could write these scripts and teach me along the way.

Some of the pitfalls we hit along the way included the need to build in smarts to see what platform each switch was, so the proper commands would be used, to check what version of code was on the switch, and to query the switch to gather the port types and line cards installed. To accomplish this you first have to understand the Prime database structure and how to call the appropriate variables for what you need. This excerpt from the Prime 3.1 user guide is a good place to start in understanding the variables and how to call them from inside the CLI config templates. Also see this Support Community post, which has some good info as well.
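To give a flavor of what calling those variables looks like, here is a tiny illustrative fragment; the version string and the command inside it are made up for the example, so check the user guide for the variables available in your release:

## illustrative: run a command only on a matching code train
#if ($Version.contains("15.2"))
mls qos   ## enable QoS globally on platforms that use this command
#end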

Now that we have our background info, we are ready to start jumping in and breaking, I mean writing, some scripts. This was a lot of trial and error for me, as we had to touch at least one version of each type of switch and verify we had the right CLI commands to enable QoS, since it differs across platforms and even across code trains within the same platform.

After a couple of false starts getting the platform commands, interface commands and settings just right, we were able to get a working script for the first group of switches: the 4506-E, 4500-X and a test Nexus 7K. The script ended up looking like this:

#if ($Platform.contains("Data Center Switches"))
## policy-map configs for the NX-OS data center switches go here
#else
## this is where we specify the non-NX-OS config elements:
## the access-list, policy-map and class-map definitions
#foreach ($interfaceName in $InterfaceNameList)
#if ($interfaceName == "GigabitEthernet0/0")
## skip the management port
#else
interface $interfaceName
service-policy output QOS-SHAPE
service-policy input QOS-MARK
#end
#end
#end

The trick is that we had to have the platform check, and specifically the "Data Center Switches" string. If a sh platform is run on these switches, that is what is returned as the platform name. The reason we keyed on the platform type was that it was easier, and seemed more stable, than using $Version.contains to check IOS vs. NX-OS.

These are the lines where the magic really happens. The #foreach queries the Prime DB for the device’s interfaces using $InterfaceNameList, and then we check whether $interfaceName == "GigabitEthernet0/0", which is generally the management port on the switch. If the port has that name we do not apply any QoS to it; any other $interfaceName gets the service-policy config applied.
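To make that concrete, if the interface list for a hypothetical switch came back as GigabitEthernet0/0 and GigabitEthernet1/1, the template would skip the first port and render something like this for the second:

interface GigabitEthernet1/1
service-policy output QOS-SHAPE
service-policy input QOS-MARK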

Gotcha 1 for me: make sure you account for all the #end statements you need. It is easy to lose track, and it will frustrate you when you import the template into Prime and try to test it the first time.

With this basic config, you can now customize based on switch type.

The next step towards deployment is to get this config into Prime, if you didn’t write it there, and make sure all the variables are working properly. After importing into Prime, the Form View and Add Variable tabs will now be populated.

Our next post will cover Deployment of the newly created script to either 1 or 1,000 switches depending on the need.

 

 
