
packetqueue.net

Musings on computer stuff, and things... and other stuff.

March 8, 2017 Cloud

Uila: Visions of the Future

Reading Time: 5 minutes

If it's true that a picture is worth a thousand words, then nascent software company Uila has written a novel with their breakout software product for monitoring, and performing root cause analysis on, virtualized and physical network infrastructure, bridging the gap between the two. Rarely does a breakthrough product come out of a startup on the first try—usually there is much refining—but Uila may have done just that. The visualizations are nothing short of stunning, and the backing data arrives fast and is highly accurate. Add cloud management and you really do have the makings of a great product.

Before we get to the product, however, it is worth talking for a moment about the founders of Uila. Chia-Chee Kuan, Dean Au, and Miles Wu have, collectively, some 76 years of experience in the industry they're trying to change; experience that comes in the form of patents, in the founding of companies like AirMagnet and Cinco Networks, and in many years of R&D at companies from Cisco to Fluke. Oh, and let's not overlook their credentials in the form of computer science degrees and master's degrees from some of the best engineering schools in the world. Background and experience do not equal success, but the weight of experience here lends a high level of credibility right out of the gate, and warrants at least a second glance.

What's the Problem?

As data centers and infrastructure in general have become increasingly complicated, the risk of one thing breaking and causing the whole ball of yarn to come undone has increased exponentially, while the tools to analyze such problems with an eye toward root cause analysis and resolution have not matured. Vendors are still selling, and users are still using, point products designed to troubleshoot one particular aspect of a failure. What we should be doing is not isolating failure domains from a trouble-ticket point of view, but rather starting with a larger domain and shrinking from there. It's a different methodology, but in today's data centers the traditional bottom-up techniques we've all learned to love can be challenging to apply.

Don't take that to mean we shouldn't use good, solid methods for isolating failures, just that we should apply them to subsets of the larger whole, with that larger whole firmly visible and watched as we work. Too often we toss flaming bags of shit over the wall to the <insert team here> and hope that they figure out the problem (validating that the problem was theirs to fix) before tossing it back. In this way each team, in succession, goes through the insular troubleshooting steps relevant only to its own domain, with no interest in, or view of, the bigger whole. This is inefficient and slow, and a terrible way to solve any problem.

How Does Uila's Product Work?

Uila's product aims to bridge those inherent divides in the troubleshooting and application-visibility spheres by utilizing virtual smart taps (vST) in concert with more traditional means (SNMP, SMI, SSH). Virtual taps get their information straight from the distributed virtual switch (DVS) in a virtualized environment, and agents grab the physical device data. All of this gets rolled up to the Uila Management & Analytic System, a cloud-based service handling the intelligence and analysis of the data. Think: Meraki for application visibility.

Uila's product can perform deep packet inspection (DPI), auto-discover over 4,000 applications, track application transactions and dependencies, and track network and TCP performance, all while remaining distributed and agent-less. For a lot of systems and network operators, that last point is a big one. Having to install agents on a multitude of devices, many of which can't actually host agents, can become unwieldy. An agent-less product like Uila's allows for a quicker rollout overall, making for a quicker time to value for the business. And because of the various inflection points for data into the application, full-stack visibility is more than just lip service.

What Else Can It Do? 

One of the challenges of modern software-defined networking, systems, cloud, etc., is application discovery and dependency mapping. Given that one of the primary reasons for moving to a new modality in network fabrics and automation is agility, understanding what the hell your applications are doing is of paramount importance. Understanding which ports are used by which processes, which back-end databases talk to which front-end or middleware software services, and how all of this can be orchestrated and automated is not as easy as it might appear at first glance. Finding this information can be frustrating and error-prone, and very often dependencies get missed, leading to downtime, rollbacks, or missed milestones.
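
For a sense of why doing this by hand is so tedious, below is the kind of per-host spot check operators typically fall back on. This is generic Linux tooling shown purely for illustration; it is not part of Uila's product, and the service name is a placeholder.

    # which process is listening on which port on this one host
    sudo ss -tlnp

    # which established TCP connections a given service holds
    # ("postgres" is a hypothetical example service)
    sudo lsof -nP -iTCP -sTCP:ESTABLISHED -a -c postgres

Repeat that across hundreds of hosts and transient VMs, and it is easy to see how dependencies slip through the cracks.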

Uila's software, thanks to its full-stack visibility, is actually a very good tool for analyzing a software stack and performing application dependency mapping at a fast clip, with high accuracy and confidence. Having used a variety of tools for this analysis in the recent past, I can say that this one is among the best on the market. The visualizations are a nice touch; during these exercises I have found that many application teams are surprised to learn the complete scope of what their stack is doing, reinforcing the point that a tool like this is key to gathering all pertinent information possible.

Conclusions, More Information, and Next Steps

In watching the presentation that led to this article, researching the product online, using the product myself, and talking with the founders in person, one thing has become abundantly clear to me: this application has a lot more capabilities and features than I can properly capture in a single sitting. Storage analytics, virtual machine (VM) cross-talk, latency and jitter, and myriad more options for troubleshooting are all covered by this tool, and the uses are limited only by the time you have to delve in and push buttons. I haven't even brought up the automated root cause analysis capabilities, which in and of themselves warrant at least an article, if not a whitepaper.

 

If you have any interest in the product, I would suggest you take a look at Uila's website (http://www.uila.com) and poke around a bit. They have an impressive list of customers already, as well as some whitepapers and other information available. They offer a generous 30-day free trial which is fully featured and includes support and training, which goes a long way toward getting people up and running while the trial is still active—something some more established industry players might want to take note of. You can also see videos of Uila presenting their solutions to an audience of industry folks of varying backgrounds on the Tech Field Day site: http://techfieldday.com/companies/uila/.

At the end of the day, Uila may have written a novel, but the market will determine if it's worth reading.


January 14, 2017 Uncategorized

Living the Command Line Dream with Mutt

Reading Time: 6 minutes

It probably stems from how long I have been using computers, and what my first computer interfaces looked like, but I have been enamored of command line interfaces since, well, forever. There have been the occasional graphical dalliances: the Amiga, NeXTSTEP, the Enlightenment window manager, some aspects of the current OS X GUI. By and large, however, in any environment I can think of, I prefer a command line interface.

The challenge in today's world is that many operating systems, and an increasing number of network, storage, and other enterprise-level devices, all make it difficult to use the command line. I do have valid reasons, in my own mind at least, for preferring a text-based interface: I can be more proficient when I do not have to remember where all the little pictures and menus are, and what manner of clickety-clicking I need to perform in order to accomplish regular, mundane, should-be-easy tasks; my brain is hardwired to remember words more than pictures; and as a general rule the command line offers more power and flexibility than does the GUI. Of course, if you're not accustomed to the command line, or you're no luddite, you may scoff at the idea of using such an archaic interface method, but I find that most power users I know—and this is purely anecdotal—prefer a text interface as well: the Unix shell, Windows PowerShell, Cisco IOS, whatever. That said, I do manage to claw my way back into the command-line world where I can, a piece at a time, and the small victories keep me satiated, for the most part.

One area where I find, almost without exception, the GUI completely horrific and lacking is in today's email clients. The applications seem massively bloated, taking much more memory and processing power than their functions would suggest. The layouts of most, while familiar, are only acceptable because we have all become used to them, not because they are useful. The greatest sin, though, is that the searching capabilities are almost universally horrid, ineffectual, and slow. I wanted better, and I finally did something about it: I went to Mutt. Actually, I went to NeoMutt.

Mutt has been around forever as an email client, like the much-vaunted Pine—still available, by the way—and NeoMutt is nothing more than Mutt pre-built with several of the most common Mutt patches. Mutt is an infinitely flexible email client built in the old vein, following the Unix ethos of doing one thing only, and doing that one thing well. While you can set it up to natively handle many tasks (gathering email, sorting, and so on), it works best when you leave those functions to other applications.

The challenge with Mutt, and the other supporting applications in what you might call its ecosystem, is that they aren't always as intuitive to set up as might be ideal, and less so in today's complicated climate of proprietary formats (EWS), cloud-hosted services, IMAP, POP, SMTP, GPG, etc. It took me a while to get my setup dialed in, or at least close to dialed in (there are always tweaks being made), and in the hopes that someone else can use what I have built, I'm putting it all out for public consumption.

I would be remiss if I did not point out that while I created much of what is in these configuration files, other people created much more. As is the tradition in the open source world, I borrowed the best bits from a multitude of people, sites, manuals, DIY guides, my whiskey collection, and random deities to land at what is a fairly serviceable set of files. I would expect nothing less than for the reader to take all of this mess and make it his own. Just contribute back somehow if you find a better way of doing things.

So, without further ado, the configurations are below.

In order to get mail flowing into my system I have to use two programs: davmail and mbsync. I have three accounts I regularly use, two of which are IMAP, and one of which is my corporate Exchange server using EWS. The IMAP accounts are not too difficult to get working, but the Exchange account was a tad challenging at first.

Mbsync is a program that just runs in the background, collecting email from my accounts at a set interval: five minutes at home, one minute on my corporate machine. Just to get it out of the way now, I use cron to keep that program going (I know, I know, deprecated on OS X, but I'll keep using it until the last possible second it is in the OS). I hope that if you are going down this road you know how to use cron, but if not, just type crontab -e from a terminal and put the following line in:
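
As a minimal sketch, assuming mbsync landed in /usr/local/bin (the usual brew location) and using my five-minute home interval:

    # sync all configured mbsync channels every five minutes
    # (path assumes a brew install; adjust to taste)
    */5 * * * * /usr/local/bin/mbsync -a >/dev/null 2>&1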

The referenced mbsync configuration below is sanitized, but should give you an idea of what's going on. The only real caveat here is that you'll want to pay very close attention to formatting. The mbsync program expects everything to be in a very specific order, and the spacing in the file is a part of that. If in doubt, consult the documentation. Oh, and the "passcmd" I have in there uses PGP to decrypt an encrypted password file I keep for each account. That way I don't have plaintext passwords lying around on my system.
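
Here is a stripped-down sketch of a single IMAP account; every host, user, and path below is a placeholder rather than my real value, and the blank lines separating the sections are required:

    # Remote side: the IMAP account itself (all names are placeholders)
    IMAPAccount personal
    Host imap.example.com
    User user@example.com
    PassCmd "gpg -q --no-tty -d ~/.accounts/personal.pass.gpg"
    SSLType IMAPS
    CertificateFile ~/.accounts/imap.example.com.pem

    IMAPStore personal-remote
    Account personal

    # Local side: a Maildir under ~/.mail
    MaildirStore personal-local
    Path ~/.mail/personal/
    Inbox ~/.mail/personal/inbox

    # The channel ties remote to local
    Channel personal
    Master :personal-remote:
    Slave :personal-local:
    Patterns *
    Create Slave
    SyncState *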

Oh yes, and in this configuration I pulled certificates from my two IMAP accounts using openssl and stored them in a .accounts directory that I created. These are my conventions, by the way; you are free to pick whatever directory structure makes sense to you.
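
If you want to do the same, something along these lines will fetch a server's certificate and store it as PEM (the hostname and output path are placeholders):

    # pull the IMAP server's certificate and save it in PEM form
    openssl s_client -connect imap.example.com:993 -showcerts </dev/null \
      | openssl x509 -outform PEM > ~/.accounts/imap.example.com.pem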

I decided against using the OS X ~/Mail location in my home directory and instead created a ~/.mail folder. There I created one directory per account, with an inbox folder inside each. Everything else will take care of itself once mbsync starts pulling down data; the configuration as I have it creates folders as needed to match the server structure.
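
Recreating that layout is quick; the account names here are stand-ins for your own:

    # one directory per account, each with an inbox (names are placeholders)
    mkdir -p ~/.mail/{personal,work,corporate}/inbox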

Now, because my Exchange administrators do not have IMAP configured for access outside of the corporate network, and I did not want to VPN in every time I needed to check my corporate email, I chose to use a program called davmail. Davmail works by connecting to your Exchange server using native protocols (either EWS or WebDAV, depending on the age of the server). It then exposes a series of local ports that you can connect to using industry-standard protocols like IMAP, POP, and SMTP. It can be configured to start and run as long as your computer is up, and if you install it via brew, you'll get a quick little command to run once it's been installed. The configuration that ended up working for me is below. Note that the ports I'm using are arbitrary; you can use different ones if desired.
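
As a sketch of the relevant davmail.properties settings (the Exchange URL is a placeholder, and the ports, as noted, are arbitrary):

    # connect to Exchange natively; URL here is a placeholder
    davmail.url=https://mail.example.com/ews/exchange.asmx
    # force EWS rather than WebDAV
    davmail.enableEws=true
    # run headless in the background
    davmail.server=true
    # local ports to expose; pick anything free
    davmail.imapPort=1143
    davmail.smtpPort=1025
    davmail.caldavPort=1080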

The meat of the configuration is the muttrc file itself, and there is far too much in this file to cover everything. I'll point out just a couple of details, and leave the rest for the reader to modify and cajole as needed. Again, the Google machine is your friend, as is the very good Mutt documentation you can find with just a cursory search.

I have separated my per-account configuration into separate files for two reasons: I like things clean and easy to change, and this way I can switch between accounts in a nice manner. Both of these goals are met with the initial binding of the F1, F2, and F3 keys to source those files. I also source them at the top of the file so that they are pre-populated when I start Mutt.
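
As an illustration of the pattern (the file paths and account names are my own conventions, not requirements):

    # pre-populate with a default account
    source ~/.mutt/accounts/personal

    # F1/F2/F3 re-source an account file and jump to its inbox
    macro index <F1> '<sync-mailbox><enter-command>source ~/.mutt/accounts/personal<enter><change-folder>!<enter>'
    macro index <F2> '<sync-mailbox><enter-command>source ~/.mutt/accounts/work<enter><change-folder>!<enter>'
    macro index <F3> '<sync-mailbox><enter-command>source ~/.mutt/accounts/corporate<enter><change-folder>!<enter>'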

I have also added some PGP configuration, because that is one of the must-have features for me in an email client. I have automatic signing of messages configured, and automatic decryption of encrypted mail. I still have to tweak the key bindings a bit, but that's a small task.
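
The settings involved look roughly like the following; the key ID is obviously a placeholder:

    set crypt_use_gpgme = yes      # use the GPGME backend for PGP/MIME
    set pgp_sign_as = 0x12345678   # placeholder: your own key ID
    set crypt_autosign = yes       # sign all outgoing messages
    set crypt_replyencrypt = yes   # encrypt replies to encrypted mail
    set crypt_verify_sig = yes     # verify signatures automatically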

I have made some changes in the file to force emails, as much as possible, to come across as text and not HTML, to pull calendar invites out appropriately, and to offer attachments and the HTML version of the email as options. It is not always perfect, considering that almost every email client wants to send large, horribly bloated, HTML-formatted emails willy-nilly, but it does alleviate the problem quite a bit.
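
The usual recipe, assuming a text-mode browser such as w3m is installed to render the HTML parts:

    # prefer the plain-text part when a message carries both
    alternative_order text/plain text/enriched text/html
    # render HTML-only mail inline via the mailcap entry below
    auto_view text/html
    set mailcap_path = ~/.mutt/mailcap

    # and the corresponding line in ~/.mutt/mailcap (assumes w3m):
    # text/html; w3m -I %{charset} -T text/html; copiousoutput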

I'm going to leave this as an exercise for the reader, but a quick note on searching: Mutt uses regex as its search method of choice, and for those of you who know what this means, let the rejoicing commence. It does have other ways to search and limit, however, so play around with all of the search modalities included, and you'll find something that works well for you.
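
For a taste, here are a couple of hypothetical patterns; press l to limit the index or / to search, then type the pattern at the prompt ("alice" and the subject regex are placeholders):

    # limit ('l') to mail from "alice" received in the past week:
    ~f alice ~d <1w
    # search ('/') subjects against a regex:
    ~s (invoice|receipt)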

Much of this makes sense once you get into the program and get used to the paradigm, but I'd probably leave this file largely intact until you get a handle on all of Mutt's functionality. None of this is gospel, and one of the best parts about Mutt is that it really is infinitely flexible; you can make it as much your own as you'd like. It probably makes sense to do a Google Images search for Mutt or NeoMutt, just to see the myriad different configurations people have put together.

Here is an example of an individual Mutt account configuration file. These files should include only the configuration that is unique to the referenced account. Try not to duplicate functionality here; just put in what you need.
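
A sanitized sketch of one such file, with every value a placeholder:

    # ~/.mutt/accounts/personal: only what is unique to this account
    set folder    = "~/.mail/personal"
    set spoolfile = "+inbox"
    set record    = "+sent"
    set postponed = "+drafts"
    set realname  = "Your Name"
    set from      = "user@example.com"
    set smtp_url  = "smtps://user@example.com@smtp.example.com:465"
    set smtp_pass = `gpg -q -d ~/.accounts/personal-smtp.gpg`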

I hope this helps someone out there who may be as in love with the command line as I am. If not, maybe all of this work serves as a warning: there be dragons on the voyage to command line nirvana. In the next chapter of what is likely to be an ongoing saga of command line shenanigans, I'll talk a bit about my homegrown command line Twitter client: part utility, part traditional client writ large across the vaunted terminal.


August 22, 2016 Uncategorized

Glue Networks Changes the Automation Game

Reading Time: 3 minutes

“Simplicity is the ultimate sophistication.”

Clare Boothe Luce

According to a recently published survey by Cisco Systems, Inc., IT organizations spend 67% of their budgets on operational expense, and consume 80% of their time in the process. That is a staggering amount of time and money spent on simply managing the technology infrastructure of an enterprise. It comes as no surprise, then, that so many people are focused on reducing those numbers, and on increasing the time-to-value of new technology in the process.

Glue Networks is one of the companies trying to solve this problem by bringing automation to the network. Jeff Gray, CEO of Glue Networks, claims that they have built the “first model driven, multi-vendor, software defined network, orchestration platform that allows organizations to control their networks in a new way.” That's a lot of words, and a bold claim to be certain, but there's many a slip twixt the cup and the lip, and claims are easy to make. Proving them out is more difficult.

One of the challenges in today's networks, particularly when speaking about automation, is the wide variety of products, platforms, hardware, circuits, etc., that exist and need to be controlled. If I want to make a change to my QoS policy across a specific path in my network, for instance, I may have to touch several brands of gear, as well as several different models within one vendor's product line. Automating any kind of task, let alone a complex change, has historically been extremely difficult. Even with great scripting, it's difficult to make changes everywhere at once.

Glue has taken a unique approach to this problem by creating data-driven models based on intent. In other words, their Glueware Control platform looks at what the operator's intent is in requesting a change. You operate at a higher level of abstraction from the raw gear to be changed, and let the control platform figure out how to execute the changes needed to fulfill your intent. You tell the system what hardware you have, and it uses its knowledge of that hardware to execute changes.

Whenever you abstract another layer above a hardware change, you rely more and more on the accuracy of your models and formulas to tell the hardware what needs to happen. If your automation engine can only talk to three models of switches, it is not going to be spectacularly useful. Currently the Glueware Control platform comes with models for 13 different multi-vendor packages and operating systems, with many more on the way. These are the recipes for how the system talks to your gear, so more is always better. The current goal, according to Gray, is to release a new vendor package every three weeks.

Glue also announced the release of the Glueware Community, their user-driven online ecosystem for collaborating with fellow users. Here a community of users exchanges recipes and formulas that they have written, which may cover gear Glue has not released support for themselves. In other words, maybe you have a somewhat rare device that few people have, and no one has written a model for it yet. You write a model (there are plenty of examples and instructions on the community site) to support your unique device, put it into the community repository, and now other users can benefit from your solution. This is quite a good way both to encourage community participation and to rapidly increase adoption by growing the models and formulas repository in leaps and bounds.

Responding to customers, Glue has also taken their traditionally cloud-based platform and extended it into an on-premises solution, saying that many customers needed a “behind the firewall” option. They have also expanded from a purely WAN-based solution into both LAN and datacenter environments, extending their usefulness across the whole of the enterprise, rather than simply running as a point solution.

Another extremely promising feature is the platform's ability to dynamically create models, based on your gear, in a brownfield environment. You can install the product and, based on what it knows, it will model your network devices and pull them into the system. This shortens time to value by allowing users to immediately derive benefit from the product, something which helps prevent an expensive purchase from becoming shelfware.

All in all, I'd say that Glue has made great strides in this release, and is definitely at the forefront of vendors providing solutions to one of the most pressing issues of the day. While many other products and solutions purport to solve the automation problem, reducing operational expenses and staff utilization, far too many require large investments in ecosystems that are anything but vendor-agnostic. These latter systems tend to move the problem from OPEX to CAPEX, while introducing tremendous amounts of complexity. Glue offers a solution that is both simple and powerful, and it should definitely be something you take a look at implementing.

For more information on the platform, how it works, and what problems it solves, take a look at this presentation by Olivier Huynh Van, CTO and Co-founder of Glue Networks:

[Video: Glue Networks presentation by Olivier Huynh Van]


July 20, 2016 Uncategorized

Cisco Live 2016 Wrap-up

Reading Time: 4 minutes

[Photo: the Cisco Live sign]

The 2016 Cisco Live convention has just wrapped up, and I felt I should write a post-convention article as my own cathartic way of sending it out with a bow on top. It's taken me a few days to gather my thoughts this time, as the convention was held in Las Vegas. While the adage that what happens in Vegas stays in Vegas may or may not be true, the hangovers and weariness certainly follow you home, and certainly slow down the writing.

This year the conference was held in the Mandalay Bay Convention Center, and though the convention center is huge, we managed to fill it. In fact, as is the way every year, attendance keeps climbing, with this year hitting around the 30,000 mark. Believe me, you could feel the crush of people. Walking in and out of keynotes, heading to lunch, and at a few other times, it seemed like 5pm on the freeways in Los Angeles—as in, can't get there from here. The staff at Cisco and the staff at Mandalay Bay are very professional, so while the number of people may have seemed prohibitive at first blush, everything worked well, and we all got where we were going, eventually. I do wonder what the average partygoer in Vegas thought of being constantly surrounded by masses of people wearing badges with varying levels of flair—most likely very bemused.

On another note, I have to say that the check-in process this year was by far the best I have experienced at any convention I have attended. While my case is no doubt an anomaly due to checking in at an odd time, I was very happy that the entire process took less than 5 minutes. Because Cisco moved to a pre-registration setup, where you checked in ahead of time and received a QR code, the process was significantly sped up. I walked up to the front, they scanned the code from my email, verified my identity, and handed me my badge. Everyone I spoke to during the week had a similar experience, and agreed that registration this year was stellar.

I cannot comment on any of the sessions directly, as I did not attend any, though anecdotally I heard that they were as impressive and insightful as always. I came on a social pass this year, primarily because I had so many commitments that I knew I would likely not have time for any sessions. Cisco brings out their best speakers, subject matter experts, and customers every year, and this year was no different. There were many panels on a range of subjects, short two-hour sessions focusing on a given technology or product, and half- or full-day “techtorials” where a very granular, largely hands-on subject was explored.

This year also saw the return of the very popular “hackathon,” which draws teams of programmers into the convention early—it begins on Saturday—for a 24-hour contest of coding chops. The contestants are given a subject (this year, addressing the declining bee population) and told to contribute their best solution to the problem. The winners receive a monetary prize, and their solution is displayed for attendees to view. This event continues to grow, and with the movement of software coding into the realm of network engineering, I am confident it will grow even bigger next year.

There are always a lot of ancillary events at the conference, and as a Tech Field Day delegate, I was privileged to be involved with the Tech Field Day Extra events this year. This is always a great place to see newly announced and emerging technologies, ask a lot of questions, and report back out to the community at large. It is something I am always grateful to be a part of, and it provides a valuable service to both the presenters and the viewers of the live stream or recorded videos. This year we heard from Opengear, Glue Networks, Veeam, and Cisco. I will be writing more on that later, so stay tuned for a series of articles recapping those presentations.

All in all, there are too many things that go on during the week for me to adequately describe in one blog post. I saw great friends I only see once or twice a year, met new ones, and even sat in on a couple of live podcasts—one in which I was made to participate. There were many, many late nights, along with the predictable slow mornings, but it is always worth it nevertheless. Cisco Live is a conference I haven't missed in seven-plus years, and one I will not miss at any point in the future. If you attended this year, you know what I mean, and if you did not, make it a point to do so next year. You will not be disappointed. Just to make it easy, here's the upcoming schedule:

  • Las Vegas, June 26 — 29, 2017
  • Orlan­do, June 10 — 14, 2018
  • San Diego, June 9 — 13, 2019
  • Las Vegas, May 31 — June 4, 2020
  • Las Vegas, June 6 — 10, 2021
  • Orlan­do, June 12 — 16, 2022

Also, check out a friend of mine's blog on this year's event, complete with pictures. She did what I failed to do this year, which is to capture many pictures of the event.

[Photos: scenes from Cisco Live 2016, including the closing picture]


February 10, 2016 Cisco

Cisco Announces VIRL on Cloud

Reading Time: 1 minute

Cisco announced today that their VIRL product, formerly available only in a personal edition and an enterprise edition, will soon be available in a cloud-hosted version. This new version removes a major barrier to adoption: the need for local horsepower. Now you'll be able to spin up workloads separate from your own hardware, but still access them as if they were local. Anyone with a valid VIRL license will be able to access this service.

Details of the new release will be announced formally on the VIRL website and social media platforms on 2/16. There will also be a webinar on 2/23 to share more about the collaborator for this effort, and to walk folks through the setup.

I really wish I could say more, but suffice it to say that this new version has the potential to be a game changer in the world of network simulations, training, proof-of-concept development, and myriad other use cases. Stay tuned.


January 3, 2016 Cloud

ZeroStack — Simplified and Automated OpenStack

Reading Time: 5 minutes

“Clutter and confusion are failures of design, not attributes of information.” ~ Edward Tufte

In the ever-so-neatly packaged and marketed buzzword-bingo world that we know as “cloud,” there are two generally accepted flavors: on-premises (private) cloud, and public cloud. The best-known public clouds are Amazon Web Services (AWS) and Microsoft Azure. On the private side you have VMware, if you buy a few bajillion different software packages, or OpenStack, itself a fairly unwieldy beast of an ecosystem. All achieve roughly the same goals and end state, namely allowing fast and easy creation and consumption of largely transient, virtualized workloads. The benefits and drawbacks of each, however, exist in different spheres.

The public cloud providers offer very easy-to-consume services, already built in their environments and on their hardware, as secure as we can call anything these days, all for an ostensibly nominal fee. Your data lives outside your data center, however, and can suffer from what's known in the industry as “noisy neighbor” syndrome, whereby other users' applications hosted on the same hardware as yours can consume enough resources to starve your applications. Additionally, as they say in the valley, AWS is very cheap if you fail and very expensive if you succeed, meaning that the cost of public cloud looks great at first glance, but once you start consuming a lot of resources, your costs can quickly balloon to eye-watering levels.

The private cloud ecosystems aren't without their own blemishes, though. While they offer a much lower operational cost on paper, since you don't pay ongoing fees to a provider, and ultimately give you more security and control, since your data lives in your own data centers, they tend to be very challenging to stand up, and even more complex to maintain. Many companies have to hire additional specialized staff just to monitor and maintain the system, and often have to pay consultants to get the environment built and tuned in the first place.

A slide from a deck ZeroStack presented during a recent Tech Field Day event shows the traditional model of cloud, and the high-level differences between public and private implementations of the same:

[Slide: Cloud before ZeroStack]

From what I can tell, ZeroStack seems to have been founded on the premise that OpenStack—one of the most talked-about private cloud systems out there, and by far the one with the most buzz—is an incredible product, but needlessly complicated for all but a few folks. Their core mission, to simplify and reduce OpenStack to an easy-to-use platform deployable in minutes rather than weeks, is one that will undoubtedly resonate with a significant segment of the IT population. In doing so, however, they run the risk of colliding head-on with some of the big boys of the industry.

What ZeroStack does is commoditize OpenStack into a hardware and software platform which can be deployed in under 15 minutes, a claim which I can verify firsthand. They ship a box—based on merchant hardware—to your location; you plug it in, answer a few questions, it phones home, and within minutes you have a fully functioning OpenStack environment. As anyone who has deployed OpenStack, even in a basic development environment, can attest, it is very time-consuming and not entirely trivial to stand up in any kind of functioning manner—presumably the end goal.

ZeroStack's hardware is custom-built from off-the-shelf components, and comes in four different flavors depending on your needs. The servers can “stack” in a scale-out model, and use a distributed storage fabric, distributed management, and a distributed SDN fabric across all servers. This allows for large build-outs, but perhaps more importantly, it allows for seamless host-failure recovery through a leader-election mechanism. The more interesting bit is how they handle management, which is where we begin to see the Meraki comparison take shape.

What ZeroStack has done that is most evolutionary—I won't say revolutionary, since this is being done already; more on that in a minute—is move the management components of OpenStack into a cloud (how many abstractions of cloud can we handle before the whole thing blows up in a cloud of vaguely consultant-smelling marketecture?). They host this on their own platform in their own data center. This allows you to manage the system from anywhere, much like what Meraki did for wireless networks.

This can be illustrated again with a slide from the same presentation referenced above:

[Slide: Cloud after ZeroStack]

By separating what we could loosely call the control plane from the data plane, to borrow from the networking world, the entirety of the OpenStack system and deployment model is made manifestly easier for the average entity to deploy and consume. You rack and stack the hardware, point it at the ZeroStack cloud management portal, and it does the rest. You get a cup of coffee, and when you're done you have an on-premises cloud. There are obviously some subtleties to the deployment, and extra knobs you can tweak if you choose, but this is far quicker than a traditional deployment, and should appeal to many people.

The main risk I see from a long-term viability perspective is that this model of OpenStack deployment puts ZeroStack squarely in the path of at least Cisco and HP Enterprise, with their Metapod and Helion CloudSystem Enterprise products, respectively, which perform almost entirely the same function at a higher cost point. Cisco's solution, in particular, is more accurately compared to a VCE vBlock than to the ZeroStack platform, as it comes pre-racked and plug-into-power ready with full-blown UCS compute, Nexus 9K switching, and ASR routing. Cisco's solution has the further benefit of being fully deployed and managed by Cisco, and so is quite literally a plug-and-play solution. HP Enterprise's Helion CloudSystem utilizes VMware for the cloud platform, but functionally accomplishes the same goal of distilling what can be a complex deployment down to a single purchase proposition.

I think where ZeroStack has an advantage, and possibly an unchallenged market niche, is in the lower to mid-tier price points. Many (many, many) companies who wish to deploy OpenStack simply won't be able to afford the Cisco or HPE solutions, but still have a desire and a need for a simplified deployment model. If ZeroStack's products can run as stably as they deploy, I think they have a fighting chance of remaining viable for some time to come. Either that, or they'll establish enough of a market footprint, and become enough of a challenge to the big boys' inevitable desire to sell downstream with a likely scaled-down product at a lower price point, that they'll be acquired for a surprisingly large amount of coinage—not at all out of place for the valley.

