
packetqueue.net

Musings on computer stuff, and things... and other stuff.

March 11, 2011 Cisco

Frame Relay Switch Setup

Reading Time: 5 minutes

Editor's Note: Google Chrome seems to dislike my site theme and is hyphenating absolutely everything. Apologies for that, and I'll look into it just as soon as I get done with a few items on the "honey-do" list.

If you are studying for the CCIE Routing and Switching exam, one of the technologies that is still heavily prevalent is Frame Relay. You are expected to know not only the technology itself and how to configure it, but also how it interacts with and affects other key technologies like OSPF and EIGRP. Having the ability to study Frame Relay, then, and get plenty of hands-on configuration time becomes as important as with anything on the R&S 4.0 Blueprint.

While many network engineers are already familiar with Frame Relay from the consumer side–in other words, from the perspective of an entity which buys Frame Relay services from a provider–not many of us are familiar with the service provider portion of the equation. This makes setting up practice labs difficult if you are trying to study using your own equipment. Fortunately, you can set up your own Frame Relay switch fairly easily, and that is what we're going to walk through today.

A Frame Relay switch is the DCE device that sits inside a service provider's network and moves the frames along from point A to point B. There are many of these devices all working together inside of your provider's network to move your information along, but fortunately for lab candidates studying at home, you can easily get by with just one. Even more fortunate is that you can use a fairly low-powered router to act as a Frame Relay switch, and not miss anything that you'll need for purposes of the lab.

A quick note on the lab is in order here. It used to be a part of the lab blueprint (don't ask me which one, or how far back in time) that you had to know how to set up a Frame Relay switch. Cisco has since taken that requirement away, at least from the R&S lab, and so a lot of that knowledge isn't communicated in teaching texts any longer. What you'll find in the lab itself is an already configured Frame Relay switch that you'll have no direct access to, but all of the information you need to make your equipment talk to it.

It may seem counterintuitive, but for a home lab the best device to use for a Frame switch is actually a router. For instance, I'm using an older Cisco 2621 model for my Frame switch, and it does everything I need it to do. Service providers will typically use more specialized gear, but all we're going for in our studies is a reasonable facsimile. If you want to spend a lot of money, follow the advice of so many others and spend it on your layer-3 switches.

Another thing we want to briefly discuss is interfaces. Generally speaking, you can either follow the "run what'cha brung" philosophy of just using what you have access to, or you can buy the interfaces you want. In my case I had a couple of WIC-T1 cards that I've used, and then I bought a handful of WIC-2T serial interface cards. The key is to have a serial interface for each router you want to connect via the Frame switch. So I have one T1 interface and six serial interfaces, for a total of seven devices I can connect into the Frame "cloud". I find this to be more than adequate, though if you're trying to duplicate a specific topology you may need more or fewer.

The configuration of a Frame switch is actually very simple, as you'll see, though attention to detail does matter. I'm assuming here, by the way, that you already know how to set up your router for basic access, clock, etc., so I won't cover that here. The first step in configuring your router to be a Frame switch is to put it into Frame switching mode using the commands:

ip cef
frame-relay switching

 

These commands turn on Cisco Express Forwarding, put the router into Frame switching mode, and change quite a bit of the default behavior, so don't expect to use this device as a router in any lab topology you're working on. This device will be just a Frame switch and nothing more.

The next step is to configure the individual interfaces you'll connect your other routers to, and you have a lot of choices here. I don't know exactly how the R&S lab devices are set up, so I'm just going to give you the configuration I use. I'll post the configuration below, and then go over the key commands:

interface Serial0/1
no ip address
encapsulation frame-relay
logging event subif-link-status
logging event dlci-status-change
clock rate 8000000
no frame-relay inverse-arp
frame-relay intf-type dce
frame-relay route 220 interface Serial0/2 120
frame-relay route 221 interface Serial0/0 320

 

The first few lines of the configuration should be familiar to you already. We're setting our interface encapsulation to frame-relay, and then logging on a couple of events. The logging is completely up to you, and not necessary one way or another; I just find it helpful. Next we set the clock rate, and we tell the interface that we are the DCE end of the connection. Remember, in a Frame Relay network the clocking (DCE end) comes from the line or provider side, so this is what you'll want. If I am working with a T1 serial interface, I'll also need a line for that:

service-module t1 clock source internal

 

This can change depending on the type of card and how you have it configured.

Now, the other options we have here require a little more explanation. The "no frame-relay inverse-arp" command does just what it says, and you can argue for the Frame switch having this turned on or off. In most cases in the lab, you'll be instructed not to use Inverse ARP on the DTE devices, so I've just turned that functionality off on my Frame switch from the outset. It's really your call.

The next two lines, beginning with frame-relay route, are the ones that always seem to cause confusion. You can read the first line as "if a frame arrives on this interface with DLCI 220, send it out interface Serial0/2 and remap it to DLCI 120." Substitute DLCI 221 and 320 on the next line, but otherwise read it the same way. So if I now plug a router into interface Serial0/1 and assign DLCI 220 and 221 to two different sub-interfaces (just as an example; other arrangements are possible), the Frame switch will know what to do with that traffic.
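
For reference, here is a minimal sketch of what that attached spoke router might look like, with DLCI 220 and 221 mapped to two point-to-point sub-interfaces. The IP addressing is made up purely for illustration; use whatever your practice topology calls for:

interface Serial0/0
no ip address
encapsulation frame-relay
no frame-relay inverse-arp
!
interface Serial0/0.220 point-to-point
ip address 10.1.220.1 255.255.255.0
frame-relay interface-dlci 220
!
interface Serial0/0.221 point-to-point
ip address 10.1.221.1 255.255.255.0
frame-relay interface-dlci 221

Note that the spoke side does not set a clock rate; that comes from the Frame switch, since it is acting as the DCE.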

So, if we have a diagram that looks like the following:

Then we have a configuration for interfaces that looks like so:

interface Serial0/0
no ip address
encapsulation frame-relay
logging event subif-link-status
logging event dlci-status-change
service-module t1 clock source internal
no frame-relay inverse-arp
frame-relay intf-type dce
frame-relay route 320 interface Serial0/1 221
frame-relay route 321 interface Serial0/2 121
!
interface Serial0/1
no ip address
encapsulation frame-relay
logging event subif-link-status
logging event dlci-status-change
clock rate 8000000
no frame-relay inverse-arp
frame-relay intf-type dce
frame-relay route 220 interface Serial0/2 120
frame-relay route 221 interface Serial0/0 320
!
interface Serial0/2
no ip address
encapsulation frame-relay
logging event subif-link-status
logging event dlci-status-change
clock rate 8000000
no frame-relay inverse-arp
frame-relay intf-type dce
frame-relay route 120 interface Serial0/1 220
frame-relay route 121 interface Serial0/0 321
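
Once the attached routers are up, a couple of show commands on the Frame switch will confirm that the PVCs are actually being switched. show frame-relay route lists each inbound-to-outbound DLCI mapping and whether it is active, and show frame-relay pvc gives per-PVC status and traffic counters:

show frame-relay route
show frame-relay pvc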

 

I hope that helps out, and as always, if you have any questions or clarifications please drop me a line here or on Twitter, where I'm known as @SomeClown.


February 18, 2011 Cisco

Nexus Crash

Reading Time: 5 minutes

As is typical in the world of IT, problems have a way of sneaking up on you when you least expect it, then viciously attacking you with a billy club. Often this happens when you are asleep, on vacation, severely inebriated, or have already worked 40 hours straight with no sleep. In my case, Super Bowl Sunday at around 8:30pm was my time to get the stick. And get it I did.

For reasons too sad to warrant comment, and far too irritating to explain in a family forum like this, our ESX host servers all became disconnected from our SAN array. The root problem was something else at layer 2, and got resolved quickly, but the virtual world was not so quick to recover. In retrospect, the problem was not a bad one, but when you've been drinking and can't see the obvious answer you tend to dig the hole you've fallen into deeper rather than climb promptly out.

By way of background, we are currently running vSphere 4.0, with a few servers having 32GB of memory and 8 cores, and a few having 512GB of memory and 24 cores. All ESX hosts are SAN booting using iSCSI initiators on a dedicated layer-2 network. We use Nexus 1000V soft switches and have our ESX hosts trunked using 802.1Q to our core (6506-E switches running VS-S720-10G supervisors). Everything is redundant (duplicate trunks to each core switch) and uses EtherChannel with MAC pinning. So there you have that, for what it's worth. Now back to the crashed servers.

We rebooted all of the ESX host servers, and with the exception of some fsck complaining they all came up quite nicely. The problem was that none of the virtual machines came up. Let me add that we have the domain controllers, DHCP, DNS, etc. on these hosts. Crap.

So the first thing I did in my addled state was to add DHCP scopes to the DHCP servers at another office across the country, and point the VLANs off "that-a-way" by changing the ip helper-address on each VLAN on the core. That got DHCP and DNS back online. As you can probably guess by now, I was MacGyver-ing the situation nicely, but really didn't need to. That's one of the problems when you're in the trenches: you tend to think in terms of right-now instead of root cause.
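
That stop-gap is nothing exotic; it's just a change to the helper address under each affected SVI on the core switch. The VLAN and addresses below are invented purely for illustration:

interface Vlan10
no ip helper-address 10.1.1.5
ip helper-address 10.200.1.5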

The next thing I did was to start bringing up the virtual machines one by one using the command line on the ESX hosts. Why? Because I had no domain authentication and the vSphere Client uses domain authentication. Here is where someone in a live talk would be interrupting me to point out that the vSphere Client can always be logged into using the root user of the hosts, even when domain authentication is set up for all users. Yes, that is true, and it would have been handy to know at the time.

In order to bring up the virtual machines, I had to first find the proper name by issuing:

vmware-cmd -l

from the command line. This command can take a while to run, especially if you have a lot of VMs sitting around, so go get a cup of coffee.

Once I had that list I prioritized the machines I wanted up first, and issued the:

vmware-cmd //server-name.vmx start

command on each one. That should have been the end of the boot-up drama, but it wasn't. As it turns out, a message popped up (and I don't remember the exact phrasing) to the effect of "you need to interact with the virtual machine" before it would finish booting. So, now I issued the:

vmware-cmd //servername.vmx answer

command and got something that looked about like this:

Virtual machine message 0:
msg.uuid.altered:This virtual machine may have been moved
or copied.
In order to configure certain management and networking
features VMware ESX needs to know which.
Did you move this virtual machine, or did you copy it?
If you don't know, answer "I copied it".
0. Cancel (Cancel)
1. I _moved it (I _moved it)
2. I _copied it (I _copied it) [default]

Well, I didn't know, so I selected the default option (I copied it) and went on my way. That is fine in almost every circumstance and got all of my servers booted up. It did not, however, entirely fix the problem. In fact, even though all of my servers were booted, none could talk or be reached on the network.

This is where a little familiarity with the Nexus 1000V soft switches comes in handy. Very briefly, the architecture is made up of two parts: the VSM, or Virtual Supervisor Module, and the VEM, or Virtual Ethernet Module. The VSM corresponds roughly to the supervisor module in a physical chassis switch, and the VEMs are the line cards. The interesting bit to remember for our discussion is that the VSMs (at least two for redundancy) are also virtual machines.
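
If you want to see that layout for yourself from the VSM's command line, show module lists the VSM (and its standby) along with one VEM per ESX host, and show svs connections shows the state of the VSM's link to vCenter. Neither is part of the fix here, but both are handy for orientation:

show module
show svs connections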

Some of you may have guessed already what the problem turned out to be, and are probably chortling self-righteously to yourselves right about now. For the rest of us, here's what happened:

I figured out the log-in-using-root thing and got the vSphere Client back up and running (oh, not before having to restart a few services on the Virtual Center server, which is not a virtual machine, by the way. I'm not totally crazy!). Once I got that far I could log in to the Nexus VSM and look at the DVS to see what was going on. All of my uplink ports (except for ones having to do with control, packet, vmkernel, etc.) were in an "UP Blocked" state.

The short-term fix (again, the MacGyver job) was to create a standard switch on all hosts and migrate all critical VMs to that switch. That didn't, however, fix the problem permanently, and besides, we like the Nexus switches and wanted to use them. With that in mind, and a day or two to normalize the old sleep patterns, I set up a call with VMware support. This actually took longer than I expected since I had to wait for a call-back from a Nexus engineer, and they are apparently as rare as honest salespeople or unicorns. That said, I did get a call back and we proceeded to troubleshoot the problem.

One thing that surprised me was that it took the Nexus engineer a bit longer than I would have thought to find the problem, but even once he did, it took longer to get resolution because we had to get Cisco involved. The problem, as it turns out, was licensing.

When you license the Nexus, you receive a PAK and you use that to install the VSM. Once you do that, you have to request your license using the Host UID of the now-installed VSM. Cisco then sends you a license key that you install from the command line of the VSM. This is all somewhat standard and not surprising. What was surprising was that we would have to do this at all, considering we had been licensed at the highest level (Enterprise, superdy-duperty cool or something) for years.
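
If you ever have to walk through that process yourself, the sequence on the VSM looks roughly like the following. The license filename and server here are invented for illustration; the actual file is whatever Cisco sends back for your host ID:

show license host-id
copy scp://admin@10.1.1.50/n1kv_license.lic bootflash:
install license bootflash:n1kv_license.lic
show license usage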

What happened was that the copy vSphere made in order to get each virtual machine back up after our crash changed the Host UID of the VSM virtual machine(s). Thus, the license keys were no longer valid and all host uplink ports went into a blocked state. (I'll save you the obvious gripe I have with the Nexus not offering any kind of command-line message about our licensing being hosed.) This is where we had to get Cisco Licensing involved, as we had to send them the old license key files and the new Host UID information so that they could generate new keys. Considering I was on the phone with them for only 15 minutes, it was as pleasant an experience as I've ever had dealing with Cisco's Licensing department. At least that's something.

After fixing the licensing, the ports unblocked and I went through the tedium of adding adapters back to the Nexus, moving servers, etc. At the end of the day, however, it is all back to normal and working. There are a lot of lessons learned here, and you'll no doubt pull your own, but the one overriding thing to be on the lookout for is that, under certain circumstances, if your Nexus VSMs are part of a crash and come back up, look to licensing first before troubleshooting anything else. Oh, and try to schedule your major system crashes for a more convenient time… when you're sober. Just saying.


February 12, 2011 Uncategorized

ASA TU Redux

Reading Time: 3 minutes

Editor's Note: If you haven't already, check out the first installment in this–hopefully not ongoing–series at http://blog.packetqueue.net/asa-tu/

At approximately 1:58pm PST last Thursday, the two edge ASA 5510 units at our corporate headquarters dropped off the network. At the time I was in a different office up in Quebec, Canada, so I delegated working the problem with TAC and bringing them back online to one of the other engineers. That process took much longer than expected, and I won't bore you with the details. What I will bore you with, however, are a few observations I have now that we have more time and experience working with Cisco's ASA product line:

  • The ASA has some sort of systemic, though exceedingly rare, problem on 8.3(x) and newer code.
  • Said problem causes the units to reboot and take out the system flash (disk0:) but not user flash (disk1:).
  • The flash appears to be erased, but it is in fact the MBR that is gone, not the data (we used a hardware forensic disk analysis unit to verify this).
  • Cisco doesn't have enough data points yet to even acknowledge this is an issue. I don't believe they're "hiding" a problem; I just don't think enough people have experienced the particular set of circumstances that would cause this and subsequently reported back to Cisco.

My own suspicions about the root cause are below, though I'd welcome any additional thoughts from anyone with experience in this area. I should also point out that I have heard from at least two other people that they have experienced this exact problem.

  • The behavior and crash lead me to believe that the ASA experiences, at the point of failure, the equivalent of a Windows "BSOD". This would point to either the memory or the motherboard itself, as these are the primary hardware-based causes of this type of crash in any system. Most other crashes can be recovered from and produce data.
  • The ASA accesses the flash on initial load, but then runs from memory. The flash cards in these units had trashed MBRs, which leads me to believe that the ASA was touching the MBR at the time of the crash, which is inconsistent with what I know about how the ASA is supposed to operate. It's possible it was just accessing the flash to write a crash dump and crashed partway through. That makes some sense to me.
  • All failures I have experienced and heard of from others have at least a couple of things in common: they are all on 8.3(x) code, and the units were all user-upgraded to support 8.3(x). This code required a memory and flash upgrade, and so you had to buy upgrades from Cisco and field-install them yourself. These units were also all manufactured immediately following the Cisco manufacturing slowdown in 2008/2009, when lead times were running into the several-months range. This makes me a bit suspicious that quality control on either the memory or the units themselves could be to blame. I've tried to verify with revision numbers, etc., but I haven't been able to gather enough data from "out there" to settle on this as a cause.

I hope this helps someone out there, and I truly am interested in getting more information from anyone who has it. Cisco is taking our units back, but pulling them aside before refurbishment so that their engineers can dissect the units. If I find anything out from that, I'll post the findings here.

The configuration and build-out of the ASA 5510 units is as follows:

  • 1 gigabyte of memory, 512MB of system flash, 256MB of user flash. IPS module, Security Plus, Botnet filter, AnyConnect Essentials, Mobile, etc. licenses. Actually, just about every license is on board; these units are at this point maxed on everything. Utilization is still at a reasonable level.
  • Configuration includes use of multiple IPsec site-to-site VPNs, SSL VPN for all Mac, Linux, Windows, iPad and iPhone clients, sub-interfaces, stateful failover, both IPv4 and IPv6, OSPF with static redistribution, and full IPS functionality.

January 27, 2011 Apple

Why Bonjour Hates my Wireless Network

Reading Time: 5 minutes


Many of you know my struggle as of late to integrate all of my recently acquired Apple devices into my existing network. Many of you also know the frustration I've had with this process and have been innocent passers-by to my incessant Twitter updates, rants and spontaneous bursts of misplaced anger. Here, then, is my brief explanation of what the problem is, and why I now—after too many rabbit-hole adventures to list—believe that I will not solve my problem without different equipment or a radical re-design of my network.

I should point out, by the way, that it's not that I'm a masochist—really I'm not—but rather that in my studies I find it useful to have a lot of equipment lying around. That equipment inevitably works its way into my home network, and after some time I have a large and sometimes convoluted structure in place. In this case, however, the wireless is pretty far outside the scope of anything I'll deal with on the CCIE Routing and Switching lab exam, and was brought in specifically to support some future upgrades to my home: wireless security, roaming VoIP phones, etc. The irony, as my wife so perfectly pointed out the other evening, is that if we just had a "regular" little wireless router "like all the normal, non-computer-geek people" our Apple devices would all work.

If you haven't read my previous posting on Bonjour, that might provide some more background, but it isn't, strictly speaking, necessary. Some understanding of Bonjour might be helpful, however, so very quickly, here it is: Bonjour is Apple's implementation of zero-configuration networking (Zeroconf), a service discovery protocol filling roughly the same role as Microsoft's UPnP/SSDP. It uses multicast addressing to make things work, and it is the protocol behind Apple's "everything just works" magic. If you want more than that, Google can offer you much deeper explanations.

Bonjour really does its work on 224.0.0.251, the mDNS multicast address (the neighboring 224.0.0.252 you will often see right next to it in packet captures belongs to LLMNR, the Windows counterpart). The astute among you will notice that this is a link-local multicast address and so won't be forwarded by layer-3 devices (even really, really broken ones) at all. I had already been around the block with this once before, and so figured that because my wireless network was one broadcast domain (thought I, smugly) everything would be all good. I was wrong.
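
If you want to watch this traffic yourself, a quick capture from any Mac or Linux host on the wireless segment makes the point. The interface name en0 is just a stand-in for whatever yours is called, and the -v flag also prints the IP TTL of each packet, which matters later in this story:

sudo tcpdump -v -ni en0 udp port 5353 and dst host 224.0.0.251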

Now would be a good time to toss in a quick network diagram so that you can visualize what we're talking about here. The drawing below is just the wireless portion of my network as it applies to what we're discussing in this article. Rest assured, there is a lot more out there, but none of it is applicable to this situation.

As you can hopefully see, we have a 2811 ISR connected to a 2950 switch via 802.1Q, and two 1142 APs connected at layer 2 to the switch. What might not be as obvious at first is that the Wireless LAN Controller you see at the upper right of the diagram is a module sitting in the 2811 router. This is where the heart of evil apparently lies, but more on that in a minute. The access points are on VLAN 16 and get DHCP assignment from the 2811, along with option 43 and option 60, which are both necessary (despite what you may hear) to get the radios registered to the controller, at least in this configuration. All VLANs are allowed everywhere (for testing) and no ACLs/VACLs or any other security outside of standard wireless is applied.

Before anyone points out the obvious, by the way, I did reconfigure this arrangement to put the APs on the same VLAN as the WLC management interface, make that the native VLAN all the way through, and bridge the switch and router at layer 2 with a BVI, just as a test to eliminate layer-3 boundaries. While interesting to do, that didn't solve the problem we're having here. In fact, I didn't even notice the real problem location until I made this diagram (who would have thought?).

The WLC modules that plug into a router, while running the same software and otherwise operating almost identically, are different in at least one key respect from their stand-alone counterparts: they can't communicate at layer 2 with the router. A standard controller (say, a 4400 series) can communicate at layer 2 with radios plugged into access switches, thereby becoming the first layer-3 hop from the radios—even when VLANs other than management are assigned. The integrated module, however, communicates with the host router across the backplane at layer 3. Looking back at the diagram, you can clearly see that drawn out. So no matter what I do with bridging from the radios, switch, router, etc., inevitably I'll have layer-3 separation between the radios and the controller.
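
You can see that boundary right in the router configuration: the integrated controller shows up on the 2811 as a routed interface, so the controller is always at least one routed hop away from the radios no matter how the switching side is bridged. A rough sketch, with made-up addressing:

interface wlan-controller1/0
ip address 192.168.16.1 255.255.255.0

From there, service-module wlan-controller 1/0 session drops you onto the controller's own command line.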

This is all well and good for most protocols, but not for link-local multicast.

I think I found every rabbit hole possible to get lost down, and proceeded to do just that. When I finally ran out of said holes to explore, kind folks on Twitter that I respect and look up to sent me off in still more directions. I tried, in no particular order:

(1) Using destination NAT to change the 224.0.0.251 and 252 addresses to multicast in the 239.x.x.x range

(2) Using destination NAT to change the 224.0.0.251 and 252 addresses to unicast

(3) Using helper maps

(4) Bridging everything under the sun to everything under the moon. No love, because the backplane can't be bridged.

I was even going to try GRE tunnels, DCI, or any other type of tunnel to move layer 2 over layer 3. At the end of the day, however, besides getting tired of the project, I decided that nothing was likely to work. Why? Because one of the first things a layer-3 device does when it receives a packet is decrement the TTL. So no matter what I do with NAT, or tunnels, or any other damned thing, the router will always decrement the TTL before it decides to pass the packet to some other service (like DNAT, GRE, whatever), thereby discarding the packet before it ever reaches those processes.

As far as I can tell today, this is unsolvable. Apple hates me, and others like me. Using a TTL of 1 as your method of locking down communications is pretty rock solid from a DRM viewpoint, but also very inflexible and heavy-handed. I'm going to put a portable 3560 in my entertainment center to support my DirecTV box, Apple TV and other entertainment devices so that they can share the iTunes library on my main computer, but I'm not happy about it. I lose my shiny N-connected coolness, and my iPad won't be able to control those devices. In addition, I've had to hard-set my wife's printer, since her Mac can't find it any more.

The bottom line is that all of the auto-configuration magic that Apple devices can have has gone away in my current setup. I could fix it by running a parallel wireless network using autonomous access points, or buy a cheap-o wireless router, but then I have the other problem where I lose visibility and control, just to make a quirky system work. The only viable option, really, is to change out my WLC module for a stand-alone controller—which I may do at some point—but at this point I'm tired and may just move on, defeated.


January 24, 2011 Uncategorized

Passing the Written

Reading Time: 3 minutes

Passing the CCIE R&S Written (350-001)

I am proud to say that I have completed the first step on my journey to the CCIE Routing and Switching certification: namely, I passed the written qualification exam. I obviously have a lot more work to do before attempting the lab later this year, but it is a good, solid first step, and considering how long I've contemplated taking said step, it is just good to be moving forward.

I'm not going to go into any details, talk about my score (it wasn't perfect by any means), or really discuss anything that even smells like an NDA violation. If that's why you're here and how you found this short blog posting, you're in the wrong place. I've worked far too hard for this to diminish either the work I've put in to get here, or the work that so many other full CCIEs have put in to attain their certifications. The only way you get the digits is to pay your dues like everybody else.

That said, my brief observation, for what it's worth, is that this test was not entirely what I was expecting. After years of taking different certification tests, including a variety of other offerings from Cisco, this test seemed a bit, well, tame. Not easy, just more straightforward question and answer. That wasn't really a positive or a negative in my mind, since I don't really consider myself a "test" person and would have preferred a few more hands-on scenarios than I got. But I suppose I'll get more than my fill come lab day.

The other interesting thing I noticed was the questions. Some were almost cloyingly easy, while others were a bit harder than I would have thought. Possibly that is just a side effect of my studying habits. In other words, the questions I found easy might be the same ones that trip someone else up. When you've been at the books long enough, you lose a little perspective on these things. None of the questions, however, were surprising in any way. I think that the subject matter described on the blueprint, as well as some base-level networking knowledge that is just assumed, was all covered in a way that you should expect of this level of testing.

The last thing I found different from some of the other tests I've taken is the increased reliance on "stacking" technologies. In other words, you could see a question ostensibly focused on a particular technology, but with one or two other technologies represented in the question as well. In particular, you would be required to understand not only all three technologies in the question, but also the subtle interactions that can happen as they work together. My sense is that this is probably intended to be more "real world" representative, and in general I think it worked well.

All in all, I think it was like a lot of Cisco tests: fair but difficult. If you know what you're doing you should pass, and if you don't, well… take your score breakdown and hit the areas where you were weak. Oh, and Cisco: please make your example diagrams easier to read! I'm not so old that I need reading glasses, but my god, some of those diagrams were bordering on illegible. On at least a couple of occasions I had to squint, look sideways, and try to see… like one of those damned "dot" pictures where, if you stare long enough, you see a dolphin or some other randomly insipid thing you feel cheated for having expended the effort to see.

And now? Off into some hundreds of hours of rack time. Doh!


December 31, 2010 Uncategorized

Random Thoughts

Reading Time: 4 minutes

Random (and not so deep) Thoughts by Some Clown

I haven't been writing a lot lately, mostly due to a combination of my work and study schedule. I thought, however, that it would be useful to just toss down a few random thoughts on the proverbial paper to wrap up 2010. I'll try to keep it somewhat cohesive, but I can't really guarantee anything.

Studying

Having made the decision last year at Cisco Live to finally buckle down and pursue the CCIE Routing and Switching certification, I have been as busy as you might imagine with studying. As I've gone down this road I've noticed a couple of things:

(1) In the office I'm used to studying large white papers, documents, manuals, command references, etc., quickly, to get to the answers I need for either deployment or break-fix. This is not the best way to study for the CCIE qualification exam, however, as I tend to just as quickly forget that information past the point of it being immediately useful. I've had to change my habits now to include taking notes, reviewing portions over and over, and cross-referencing with multiple sources. Nothing earth-shattering, to be sure, but a change for me.

(2) As alluded to above, I do a lot of cross-referencing on my study material. I have material from CCBootCamp that I consider to be my primary source (by virtue of being enrolled in the Cisco 360 program through them). I have also been reading the CCIE Routing and Switching Certification Guide, 4th Edition, as well as the CCIE Routing and Switching Exam Quick Reference Sheets–both by Cisco Press. I think it helps me quite a bit to read different perspectives on the same material; to see it put a different way on the page. I have a Cisco Live Virtual account as well, and so have been pulling some presentations–notably on QoS–from that site.

(3) I have over 16 years of professional experience in this industry, and while I am by no means an expert, I am confident in the things that I know. To that end I would say that at some point in your studies you are almost guaranteed to come across information, answers to practice questions, etc., that you just know are wrong. I've had to learn not to be afraid to challenge my study material. I don't do it blindly, but I do go out and research in other sources to verify what I think I know. I have found many instances of incorrect information in several sources–more often than not in the Cisco IOS example configurations. Sometimes they use commands that won't work on that platform; other times they reference non-existent class-maps or access-control lists. Less often have I found blatantly incorrect explanations of how a thing works, but even there I have found a couple of examples. I take this as a good sign, actually; it's a sign that I am becoming more aware of the details of what I am studying.

Interesting Design Decisions

It always fascinates and bewilders me to see some of the design decisions that other engineers make when putting together a network. Much of what we do is subjective, and even the most experienced experts disagree on a good many things. With that said, certain things just don't strike me as particularly useful, and it's my prerogative to complain about them. My top complaints from recent experience, in no particular order, are:

(1) My predecessor, who built our main datacenter using 4503 switches exclusively: access, distribution, and core (mostly, though we do use a collapsed core model). The 4500 series is great, but my general argument is that they're under-powered, or at least under-featured, for the core (Sup II-Plus) and just a bit overpowered for the access layer. We use PoE 1-gig to every port in the building, but the access layer is still barely running (less than 1 percent utilization, ever, on any metric). I think someone got a deal or something. We're now replacing the core with a pair of 6506s with 720 supervisors, 10-gig, etc.

(2) A main distribution point had a single 3845 with a 100-meg Internet connection and two full DS3 links. Considering the 3845 maxes out at 45 Meg of throughput, this seems a particularly egregious violation in my mind. We've now moved that to a 3945, which under full load is probably still a tad oversubscribed, but much better, and the price was right.

(3) Who was it at Cisco that decided the ASA 5510 would only have two gig links available, and only with the right license? Why only two? Why not three, or all five? This might be a backplane issue, I don't know, but it just bothers me.

(4) My own stupidity in setting up the aforementioned ASA 5510 pair (failover) with the inside and outside interfaces on the gig links, when I should have had the two trunk links that handle much more traffic on those interfaces. This will be changed soon, but I should have done it right the first time.

In Conclusion

2010 has been a good year overall, with a lot of interesting projects, experiences, and solid learning had by all–or at least me. I'm looking forward to 2011 and all of the continued successes and experiences to come. I'd also like to give a special shout-out to all of my Twitter colleagues, friends, followers, and various clingers-on and lurkers. I have found the Twitter community to be an invaluable source of support, wisdom, and occasionally respite from the rigors of the daily grind. If you're not on Twitter, I'd highly encourage you to give it a look.

Happy New Year, everyone!
