Ekahau’s ECSE Advanced Class – Why you need this

Recently I attended Ekahau’s ECSE Advanced course. I had heard about the class through WLPC and knew it had changed since its inception to include some really cool stuff, so I jumped at the opportunity to attend.

During day 1 of the 3-day course it became apparent this was not going to be a standard class on surveys and wireless. After introductions and housekeeping we jumped right into the content. It was very refreshing to see that we would be covering processes and workflows more than the software itself and how to use it. The curriculum was very timely as well, since in my day job I am working to build a team of engineers doing designs and surveys. Workflows and processes are always the hardest things to deal with and get in place when building a team. As we continued the discussion of how and why certain workflows should occur within a team of engineers and surveyors, you could see lights going on in the attendees' heads, and the conversation began to pick up with lots of ideas, information and thoughts around the subject. Things definitely started clicking for me on how the team should be set up for management and project sharing.

We then continued on with a discussion around the Foundations of Success for a wireless project. Most of us who have been doing this for any length of time already have our own idea of what determines success, but the discussion around this subject and the content was very thought provoking. Success has four foundations that equate to a repeatable process which, when followed, will provide the same outcome each time, and that is exactly what we are looking for with our projects.

The discussion then moved to how to work in teams with Ekahau files and manage the project files successfully. This is not as easy as it sounds when you have teams of engineers and surveyors out on multiple complicated sites, splitting the workload and then needing to bring it all back together. This is where a majority of teams begin to struggle. We then did an exercise within our lab groups to show how this works and the importance of following the workflows and lifecycle laid out at the beginning of the project. Things can quickly go off the rails, as we found out.

On the second day of class we began discussing Ekahau Connect and how the tools we have in Ekahau Pro help with teams of multiple users, as well as the cool new tools Ekahau has added for the Sidekick, like Packet Capture and Cloud Sync. We began leaning on our wireless skills and knowledge for the labs at this point. We did a couple of surveys to capture data, then did some spectrum analysis to get used to the RTFM interface. Once this was complete it was time to really use the RTFM and find hidden interferers in and around the classroom space. This is always a challenge and definitely helped remind me how important it is to go back to your roots in RF.

Finally we discussed and did some labs around attenuation testing and mapping. This is becoming a more integral part of wireless surveys in many different forms. When used correctly, the information gathered from attenuation readings can help build out an information database for your team as well as cut down on the time on-site APoS surveys take, while still providing just as much data.
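The arithmetic behind an attenuation reading is straightforward: measure signal strength on both sides of an obstacle, and the difference is the loss that obstacle introduces. A minimal sketch in Python, where the material names and dB values are purely illustrative assumptions, not measured data:

```python
# Sketch: deriving wall attenuation from paired RSSI readings.
# All materials and dB values below are illustrative assumptions;
# substitute your own field readings.

def wall_attenuation(rssi_before_dbm, rssi_after_dbm):
    """Attenuation in dB is simply the drop in signal across the obstacle."""
    return rssi_before_dbm - rssi_after_dbm

# Build up a reusable attenuation database as you test materials on-site.
attenuation_db = {}

readings = [
    ("drywall",        -45, -48),   # (material, RSSI in front, RSSI behind)
    ("brick",          -45, -55),
    ("elevator_shaft", -45, -75),
]

for material, before, after in readings:
    attenuation_db[material] = wall_attenuation(before, after)

for material, loss in attenuation_db.items():
    print(f"{material}: {loss} dB")
```

A table like this, built once, can be reused in future predictive designs instead of re-measuring the same materials on every site.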

The class then finished on the final day with discussions around file manipulation, scripting and report templates. These three topics can really help shape how a wireless team uses the data from surveys and can really set a team apart from others. The scripting and file manipulation are still new to me so I will not comment too much on them, but the report template aspect of Ekahau is one of the most important parts of the software. For years we wrote reports with a standard template and then copied and pasted screenshots and data sets from either AirMagnet or Ekahau for presentation to a customer. Inevitably a reference to a previous customer or project would get lost in the shuffle and lots of late-night quick editing would need to occur. With the way Ekahau handles report templates, teams can save literal hours and even days in reporting. Beware before starting down this road: the templates are written in JSON, so either some existing knowledge or some strong Google-fu is needed. When starting out the templates seem overwhelming, but as you get into them and understand how things work, using the Ekahau site for reference and examples, it comes quickly. Which is needed for the final exam of the course.
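The file manipulation side is more approachable than it sounds: an Ekahau project file (.esx) is essentially a ZIP archive full of JSON documents, so standard scripting tools can peek inside. The sketch below builds a tiny stand-in project so it runs anywhere; the member name accessPoints.json is an assumption for illustration, so inspect your own project's contents rather than relying on it:

```python
# Sketch: an .esx project is a ZIP archive of JSON documents, so the
# standard library is enough to peek inside or pull data for scripting.
import json
import zipfile

def project_members(esx_path):
    """Return the JSON documents bundled inside an .esx project."""
    with zipfile.ZipFile(esx_path) as esx:
        return [name for name in esx.namelist() if name.endswith(".json")]

def load_member(esx_path, member):
    """Parse one JSON document out of the project archive."""
    with zipfile.ZipFile(esx_path) as esx:
        with esx.open(member) as fh:
            return json.load(fh)

# Tiny stand-in project so the sketch runs anywhere; the member name is
# a hypothetical example, not a documented guarantee:
with zipfile.ZipFile("demo.esx", "w") as esx:
    esx.writestr("accessPoints.json", json.dumps({"accessPoints": []}))

print(project_members("demo.esx"))
print(load_member("demo.esx", "accessPoints.json"))
```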

The course finishes with using the data from the project we worked on during the week to build a report based on an example report. The example is what the final report should look like, and we needed to build the code and formatting within the JSON template. This proved a little overwhelming for some, just because JSON may have been new to them and they had not dealt with the templates previously. It was a bit of a challenge, but again it was good, as it provided different perspectives on reports and some ideas on formatting that I had not thought of previously, including using the Notes and Pictures features within Ekahau.

After the final, my head was full of ideas, thoughts, questions and excitement which is exactly what a course should do for us. The ECSE Advanced is more than worth the time and cost, especially if you manage or work with a team of multiple engineers and surveyors. The training arm of Ekahau has again scored big with this course in my opinion.

Ekahau Pro in the Field



In the wireless field Ekahau has started to become the standard for wireless site surveys and predictive designs. Earlier this year the latest version of their software, Ekahau Site Survey, was released with a cool new facelift, cloud sync functionality, new functions to use with the Sidekick and a rebranding as Ekahau Pro.

I personally have been hesitant to use it, for a few reasons, but mainly because I am not one to go all-in on new software that I have not personally put through its paces before turning it over to a whole team as a 'corporate standard', especially after an overhaul like the one Ekahau Pro received. I had the opinion the software was somewhat rushed to market and still had some issues that needed to be worked out before handing it to the larger team. Most have since been fixed, as Ekahau, as they always have, is listening to the users and professionals and working to bring us one of the best packages on the market.

Recently I attended the Ekahau ECSE Advanced course (covered in another post) and got my hands truly dirty in the software and all the other tools Ekahau Pro has brought us. This helped calm some of my misgivings about issues with the software, and it really helped me understand some workflows, team concepts and the basic awesomeness Ekahau has provided to the industry. After the course I needed to perform an outdoor survey that came out to about 2.8 million square feet, so I figured this would be a great time to really put the iPad app and some of the great features of the software to the test.

I started with really sitting down and working through what my workflow should look like. This is something that I had somewhat done in the past, but not to the point of actually writing out from project inception to reporting how the flow should look. Without this workflow I now realize Ekahau is just data collection software. Once you get a solid workflow in place and really use it, the software really stretches its legs.

I began by setting up my project as I normally would. I then decided to try out the iPad survey instead of dragging my laptop all over this outdoor survey in the heat. I got my Sidekick set up on my bag and got the iPad app running. I then had to make a decision on getting the project to the iPad. I was on-site to tune the WiFi and really get it working better, so I decided to transfer the project to the Sidekick to move it to the iPad. Cloud sync is not a feasible solution for me and my team at this point, as there is no file structure to keep projects separated by customer, survey, etc. With hundreds of surveys and multiple surveyors, this gets out of hand and unusable very quickly. I am confident Ekahau has a solution they are working on, and I am excited for it so I can really start using this cool feature.

Having the project on the internal drive of the Sidekick was super useful. This gave me a central drive to use for both the iPad and the laptop as I needed to edit, etc. It also gives me a built-in backup of the project in case I do something stupid (like that would ever happen) and delete the wrong file, or have some sort of corruption and lose hours of data that might not be able to be replicated. Having the iPad connected to the Sidekick via a USB cable makes file transfer very quick and simple. The connection from the Sidekick to the iPad can be somewhat challenging depending on what generation of iPad you are using. The Sidekick has a Micro USB connection, and the iPad can have either a Lightning or USB-C connector. I have had trouble finding a Micro USB to Lightning cable that works without adapters and the like for the iPad, and the ones I purchased did not hold up well in the field during surveying. Now came the fun part: surveying.

Surveying with the iPad was a welcome change, but not without its own challenges. Years of holding a laptop in one hand and clicking while trying to read a map are coming to an end. The iPad was obviously much lighter than any laptop, and clicking with the Apple Pencil was nice and easy, as opposed to using a touchpad on a laptop and mis-clicking or right-clicking by accident. The main issue I had with the iPad and the Pencil was the heel of my hand accidentally tapping the screen and placing a data collection point that I then had to remove. This taught me another trick I should have adopted years ago: clicking more often, so I can easily execute an undo without having to re-walk all the real estate already covered.

I then decided to use the Notes function within Ekahau for the installation. This feature has been expanded nicely in Ekahau Pro to allow notes and pictures to live together, along with a running history of who added each note and when. This helps when multiple people are using a survey file and the notes are being pulled out into a report after the survey. I was using this feature in particular to capture pictures of the AP installation along with location, serial number and MAC address, to output an as-built type table at the end of my report. The feature is very cool on the iPad, as you can use the internal camera, then use the Pencil to do markups right on the note, and then type out any other notes needed for installation or information. The only drawback was that I was using this during my validation survey, and when I wanted to take a photo or place a note I had to stop the survey and restart it after the note was captured. It was a pain the first few times, but you get used to it quickly and just work with it.

I had one other issue during this survey that was no fault of Ekahau: the iPad began overheating very quickly in the heat of the day. I made it about 30 minutes out of the gate before the iPad completely overheated and shut down for an hour or so to cool down. Once I switched to working in the evening and early morning I had no other issues with the iPad.


All in all, after delaying using the iPad and Ekahau Pro for a few months, I am very happy I decided to put them through their paces, and I was very pleased with the final outcome and flow of work. As explained in the ECSE Advanced course, the workflow is the most important part of using the software. The ease of surveying with the iPad was very welcome, and the ability to keep the survey file on the Sidekick and move back and forth between the iPad and the laptop for further analysis was exciting. Ekahau has yet again brought us what we have been asking for and is set up very well for the future.


To Predict or not to Predict, that is the question…

In the world there are many questions that polarize us all: did Han shoot first, Kirk or Picard, Left Twix or Right Twix. But the most important question of them all: should predictive designs exist? If you follow the wireless community this is probably the most polarizing topic, right up there with whether lower data rates should be enabled or not.


Designing wireless is one of the most challenging things we do. We receive a set of drawings and put the Solo cup down and start drawing circles. Wait, bad flashback. These were the good old days. We would draw our circles, place APs and then go on site and verify locations and take some survey readings with an AP on a stick to verify all looks good, what does the spectrum look like, are there interferers in the area.


Today we still draw circles, but they are really cool looking ones using Ekahau typically. We draw walls that can help us predict what loss may occur from walls, doors, etc. Then we go on site and take readings with the same software and an AP on a stick to make sure those pretty circles match. But why do it ahead of time and not when you are on site?


I have had many instances over the years where a predictive survey was all I was able to do. The customer would not sign-off on doing an on-site active survey because of the disruption it may cause, the building has not been built yet, or just no budget for it in the project.


I have also had the opposite where a customer would tell me that they saw no reason for a predictive and the coverage they had was ‘good enough’. But is it?


With the stuff we are putting on wireless today, can we really be OK with just good enough? In a large portion of organizations we have gone from wireless being a nice-to-have to a wireless-first strategy. This includes VoWiFi using Skype or some other demanding application/protocol. How do we handle this without doing some kind of prediction? Are we to just install the network and then do a remediation at additional cost after things blow up quicker than Lee Badman's temper when they take the all-you-can-eat steak away?


With tools like Ekahau (and no, they are not paying me, but they have awesome swag) you can do predictions based on applications, number of users and device types. We no longer need the Solo cups. Oh, what? The keg just got tapped…
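At its core, the capacity side of a predictive design is arithmetic over offered load. A deliberately over-simplified sketch, where every number is an assumed example; real tools model airtime, PHY rates and band steering far more carefully:

```python
# Back-of-the-napkin capacity estimate of the kind a predictive design
# formalizes. All throughput numbers here are illustrative assumptions.

def aps_for_capacity(users, mbps_per_user, usable_mbps_per_ap):
    """Smallest AP count whose aggregate usable throughput covers demand."""
    demand = users * mbps_per_user
    return -(-demand // usable_mbps_per_ap)  # ceiling division

# 200 users on a voice/video app needing ~2 Mbps each, assuming ~40 Mbps
# of real-world usable throughput per AP cell:
print(aps_for_capacity(200, 2, 40))  # -> 10
```

This kind of estimate only sizes for capacity; coverage, co-channel interference and roaming still need the full predictive model and an on-site validation.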


But all joking aside, is it really worth guessing, throwing APs up and then coming back to do remediation after the fact, to make sure we handle the new generation of wireless networks appropriately? Or should we just do the extra work up front so we have an idea of what we are walking into? The reports we can provide ahead of time, and the comparison to post-installation surveys, are invaluable in this blogger's opinion, and I will continue to fight for them as long as I do wireless.



Cisco Prime – This is what it is good for. Part 2

In the previous post the scripting needed for multi-linecard switches like the 6500 was discussed. In this post we will finally deploy the configs we have created through our scripts using the Prime Deployment function.

To start we simply go to our config template and open it in Prime. We can see the script in the bottom pane of the screen and the Deploy button is available at the top of the page.

Once we click Deploy we are presented with a screen to select the switches to which we want to deploy the configs.

To filter by a specific switch name or prefix, hit the filter icon and enter the name. As devices are selected with the checkbox, they are added to the Devices to Deploy area. When all devices to deploy are selected, click Next.


The next area is the Workflow screen. We did not do anything in this area and just clicked Next.

This then displays the devices selected, and we can now see the form created when the script was written. This is where, as in the case of the 6500, linecards can be selected. This area also has an option in the right corner to verify the CLI commands against the device, to make sure the commands are compatible.



After clicking Next we are presented with the Deployment Options area. We used a couple of different ways of deploying: On-Demand and Scheduled.


On-Demand means selecting the Now radio button, then Next. If deployment is to be scheduled for another date and time, this can be accomplished using the Date radio button and selecting the appropriate date/time. Be careful, as this is the date/time of the server. If your server is centralized in a data center and your site is in another time zone, this needs to be taken into account.
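A quick way to avoid the time-zone trap is to convert the site's maintenance window into the server's zone before scheduling. The zone names below are an assumed example (server in US Eastern, site in US Pacific):

```python
# The scheduled time is interpreted in the *server's* time zone, so convert
# the site-local maintenance window before typing it in. Zones here are an
# assumed example.
from datetime import datetime
from zoneinfo import ZoneInfo

# Site wants the push at 02:00 local (Pacific); server runs on Eastern.
site_window = datetime(2020, 3, 14, 2, 0, tzinfo=ZoneInfo("America/Los_Angeles"))
server_time = site_window.astimezone(ZoneInfo("America/New_York"))

print(server_time.strftime("%Y-%m-%d %H:%M"))  # -> 2020-03-14 05:00
```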

There are a couple of other options at the bottom of this screen that help make sure we do not lose the config we have worked on so hard: Copy Running Config to Startup and Archive Config After Deployment. These are fairly self-explanatory; the second option is used if you are archiving your device configurations to the Prime server for backups.

Once we click Next we get the final Deploy verification screen; this is our last chance to turn around. Once Deploy is clicked, the job will begin running in Prime and we can only abort it from the Job Dashboard.


At this point, sit back with some coffee (or something stronger) and wait for the job to complete in the Job Dashboard. Depending on the number of devices the config is being pushed to and how large the config ended up being, this can take upwards of 20 minutes to complete. You can keep an eye on it in the Job Dashboard and make sure all devices are deployed to successfully.

Some gotchas that gave us a little grief:

Port channels. Depending on the model of switch, the input part of the port config has to be added to the physical interfaces and the output part to the port channel. We did this manually, as it was easier and they were few and far between, but with testing you could add this part to your script.

Random errors. We would occasionally receive an error that a timeout occurred pushing the config to the switch. After doing research and looking at the actual switch, it was determined the config did actually push, and we never really figured out why this error would occur. If anyone else has seen this and has any further info, please let me know and I will update this post.

With that we complete the look at using Cisco Prime to push QoS configs to ~1,000 switches in the wild. I genuinely hope this helps some other folks out there and provides some info to all.

Look for more coming soon.

Cisco Prime – This is what it is good for. Part 1

In the last post we looked briefly at a scripting sample for adding QoS commands to IOS-XE and IOS switches using Prime Infrastructure. To recap, we were looking to push QoS policies to ~1,000 switches of various models, IOS versions and even linecards. Using APIC-EM was not an option, as only about half the switches were supported, whether because of old platforms, IOS versions or other issues. Prime was selected since it had just been stood up for the wireless implementation and could push to all the various switch types, from the 2960 to the Nexus 7K.

With the scripts we needed to take into account the platform of the switch, the IOS version and the linecards, as previously mentioned. This process uses a combination of automation through Prime and manual intervention: you need to know which linecard is installed in the switch so it can be selected from a drop-down of available cards.

Last time we looked at a basic IOS config for QoS; how do we handle a 6500 series with a variety of linecards? Below is a sample of how this had to be handled.

The first thing, as with the previous script, is that we have to query the Prime DB structure and set the variables for the slots on the switches.


<param-group cliName="cli command set" isMandatory="true" name="Deploy_QoS_Cat6500 parameters">
    <description>Parameters for Deploy_QoS_Cat6500</description>
    <parameter name="slot1">
        <description>Line Card Slot 1 Type</description>
        <default-value label="Select the appropriate line card type or none for slot 1">None</default-value>


This has to be done for each possible slot depending on the model. We went all the way to 13, based on the customer having a number of 6513 chassis.
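Since the block has to be repeated verbatim for every slot with only the slot number changing, a short generator script can stamp out the thirteen copies and remove the chance of a missed number. A sketch, mirroring the element layout in the sample above:

```python
# Stamp out one <parameter> block per slot instead of copy/pasting thirteen
# times. The element layout follows the slot 1 sample shown earlier.

TEMPLATE = """\
<parameter name="slot{n}">
    <description>Line Card Slot {n} Type</description>
    <default-value label="Select the appropriate line card type or none for slot {n}">None</default-value>
</parameter>"""

def slot_parameters(slot_count):
    """Return the parameter declarations for slots 1..slot_count."""
    return "\n".join(TEMPLATE.format(n=n) for n in range(1, slot_count + 1))

print(slot_parameters(13))  # 6513 chassis: slots 1 through 13
```

Paste the output into the param-group, and every slot number is guaranteed to be right.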

Next we get to the meat of the QoS config that will be applied to the ports.

mls qos
mls qos map cos-dscp 0 10 18 26 34 46 48 56

## Queuing command structure

#set ( $OnePSevenQEightT = "wrr-queue queue-limit 10 25 10 10 10 10 10
wrr-queue bandwidth 1 25 4 10 10 10 10
priority-queue queue-limit 15
wrr-queue random-detect 1
wrr-queue random-detect 2
wrr-queue random-detect 3
wrr-queue random-detect 4
wrr-queue random-detect 5
wrr-queue random-detect 6
wrr-queue random-detect 7
wrr-queue random-detect max-threshold 1 100 100 100 100 100 100 100 100
wrr-queue random-detect min-threshold 1 80 100 100 100 100 100 100 100
wrr-queue random-detect max-threshold 2 100 100 100 100 100 100 100 100
wrr-queue random-detect min-threshold 2 80 100 100 100 100 100 100 100
wrr-queue random-detect max-threshold 3 100 100 100 100 100 100 100 100
wrr-queue random-detect min-threshold 3 80 100 100 100 100 100 100 100
wrr-queue random-detect max-threshold 4 100 100 100 100 100 100 100 100
wrr-queue random-detect min-threshold 4 80 100 100 100 100 100 100 100
wrr-queue random-detect max-threshold 5 100 100 100 100 100 100 100 100
wrr-queue random-detect min-threshold 5 80 100 100 100 100 100 100 100
wrr-queue random-detect max-threshold 6 100 100 100 100 100 100 100 100
wrr-queue random-detect min-threshold 6 80 100 100 100 100 100 100 100
wrr-queue random-detect max-threshold 7 100 100 100 100 100 100 100 100
wrr-queue random-detect min-threshold 7 100 100 100 100 100 100 100 100
wrr-queue cos-map 1 1 1
wrr-queue cos-map 2 1 0
wrr-queue cos-map 3 1 2
wrr-queue cos-map 4 1 3
wrr-queue cos-map 5 1 6
wrr-queue cos-map 6 1 7
wrr-queue cos-map 7 1 4
priority-queue cos-map 1 5" )

This is one example of the different structures that need to be created, which are also based on linecard model and what the card supports for commands and QoS. If you are new to this, as I was, the name $OnePSevenQEightT seems confusing, but being a great cryptographer, you can quickly decipher it: OneP = One Priority queue, SevenQ = Seven Queues and EightT = Eight Thresholds.
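The naming follows the usual Cisco queue-structure shorthand (1P7Q8T and friends), so a tiny parser makes the convention explicit; a sketch:

```python
# Decode the queue-structure shorthand behind names like $OnePSevenQEightT:
# 1P7Q8T = one priority queue, seven WRR queues, eight thresholds.
import re

def decode_queue_structure(name):
    """Turn a shorthand like '1P7Q8T' into its components."""
    m = re.fullmatch(r"(\d+)P(\d+)Q(\d+)T", name)
    priority, queues, thresholds = (int(g) for g in m.groups())
    return {"priority": priority, "queues": queues, "thresholds": thresholds}

print(decode_queue_structure("1P7Q8T"))
# -> {'priority': 1, 'queues': 7, 'thresholds': 8}
```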

Now that we know which model linecards are installed and have the configs built for the actual QoS commands, we can start the interface configs for each slot.

## !--- INTERFACE CONFIG for slot 1:

#if ( $slot1 == "6704" )
    #set ( $port_range = "Te1/1-4" )
    int range $port_range

#elseif ( $slot1 == "6708" )
    #set ( $port_range = "Te1/1-8" )
    int range $port_range

#elseif ( $slot1 == "6724" || $slot1 == "6824" )
    #set ( $port_range = "Gi1/1-24" )
    int range $port_range

#elseif ( $slot1 == "6748" || $slot1 == "6848" )
    #set ( $port_range = "Gi1/1-48" )
    int range $port_range

#elseif ( $slot1 == "6524" )
    #set ( $port_range = "Gi1/1-24" )
    int range $port_range

#elseif ( $slot1 == "6148" )
    #set ( $port_range = "Gi1/1-48" )
    int range $port_range

#elseif ( $slot1 == "None" )
In this code we are looking at each slot, #if ( $slot1, and we have to build a config for the slot covering each possible linecard that could be installed, because each takes a different command or queueing structure, as we built in the first set of code.

The linecard model is then specified: == "6704" ). You may be asking, 'Nick, why does this even matter? That seems like a lot of extra code I just really don't want to deal with.' It does matter, since each linecard model may have a different number of ports and even a different type of port. We cannot specify commands for a Gig interface when the linecard is a TenGig card. We also have to account for the option that the slot is not actually populated; you can't really put a config on a card that is not installed. It is painful but needed. Copy and paste is your friend, but be careful to make sure the slot number gets updated each time.
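Given that copy/paste warning, this is another spot where generating the Velocity snippet per slot is safer than hand-editing thirteen copies. A sketch using the card-to-port-range pairs from the slot 1 sample; the combined || branches are split into one branch per card, which is functionally equivalent:

```python
# Generate the per-slot #if/#elseif chain so the slot number can never be
# left stale by a copy/paste. Card-to-port-range pairs follow the slot 1
# sample; one branch per card rather than combined || branches.

PORT_RANGES = {
    "6704": "Te{slot}/1-4",
    "6708": "Te{slot}/1-8",
    "6724": "Gi{slot}/1-24",
    "6824": "Gi{slot}/1-24",
    "6748": "Gi{slot}/1-48",
    "6848": "Gi{slot}/1-48",
    "6524": "Gi{slot}/1-24",
    "6148": "Gi{slot}/1-48",
}

def slot_config(slot):
    """Return the Velocity branch chain for one slot."""
    lines = [f"## !--- INTERFACE CONFIG for slot {slot}:"]
    keyword = "#if"
    for card, pattern in PORT_RANGES.items():
        lines.append(f'{keyword} ( $slot{slot} == "{card}" )')
        lines.append(f'    #set ( $port_range = "{pattern.format(slot=slot)}" )')
        lines.append("    int range $port_range")
        keyword = "#elseif"
    lines.append(f'#elseif ( $slot{slot} == "None" )')
    lines.append("#end")
    return "\n".join(lines)

print(slot_config(2))
```

Each slot's QoS commands still need to be inserted inside the matching branch, but the skeleton and slot numbers come out right every time.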

At this point, just make sure you have the correct number of #end statements, and don't forget to close the CLI command set.

We will now move on to Deployment of the configs we have created.




Becoming a Wireless Super Hero – Part 1

In the first part of this multi-part blog, we will explore what it takes to be a Wireless Super Hero.

My family and I went to see Justice League over the holiday weekend, and with all the super hero movies and TV shows over the last few years it got me thinking: what is needed to become a Wireless Super Hero?

Growing up I was always more of a DC fan than Marvel, and specifically I loved Batman and the Flash. They were the ones that had the smarts and, other than the ability to run really fast, no actual powers. Batman, my absolute favorite (until Ben Affleck came along), has his wits, his tools and his Sidekick (see what I did there?). Over the next few blog posts we will explore how to become the World's Greatest Wireless Detective and what someone would need to build a Wireless Bat Utility Belt and BatCave.

Meanwhile, back at the Hall of Justice…

The first step in becoming the World's Greatest Wireless Detective is what all super heroes have to start with: training. It doesn't have to be crazy League of Shadows-level training, but an understanding of the basic concepts of wireless is a must for anyone trying to get their feet under them in an industry that at one point seemed to be all black magic and smoke and mirrors. In our next post we will start looking at the tools you need to hit the street and start getting hands-on in the fight against bad Wi-Fi.

When I first started in wireless about 18 years ago, the only training you could find was manufacturer-specific, as this was prior to there being any wireless standards or organizations. Each manufacturer used proprietary configurations. The designs were more or less the same when doing 900 MHz; then 2.4 GHz came along and things got wild. We had Telxon doing DSSS and Symbol with their Spring radios doing FHSS. To get the needed knowledge you would have to attend courses from each manufacturer to understand the proper design configurations. When standards organizations finally came about and we were finally able to get training around actual wireless concepts, it was still somewhat vendor dependent, and most times you trained on whatever you were selling or supporting at the time.

We now have so many great options for vendor-neutral training that gets into the heart of Wi-Fi and the technology, with the CWNP program. I had heard about it for years and had looked at the CWNA book multiple times, kept saying I would do it, and then would always get sidetracked chasing squirrels. I finally sat down a few months ago and went through it, and wished I had done it years ago. It helped clear up some misunderstandings I had built up in my own head over the years and gave some good insight into why we do the things we do in wireless, which helps me communicate that back to my customers when they ask, instead of the old "That's how we do wireless."

Vendor-specific training seems to be going in the same direction over the last few years. The last couple of courses I have done are of course specific to their technology, but they are also trying to add more of the overall concepts and under-the-sheets knowledge of wireless that engineers should have. Anyone can hang an AP. What makes an engineer good is being able to see why connectivity is lacking or throughput is choking, understand the concepts and reasons behind those issues, properly conduct a predictive survey, and then interpret the results of validation testing and make the appropriate changes.

Becoming a Wireless Super Hero – Part 2 will be coming soon where we will discuss the wonderful toys to add to our utility belts.
