Cisco Domain Ten: Domain 3: Automation and Orchestration

Continuing my tour of the Cisco Domain Ten (SM) framework for simplifying data center transformation, in this blog I’ll build on my previous posts and introduce Domain 3, which is concerned with “Automation and Orchestration”.

I’ve asserted previously that having an automated, virtualized data center is a necessary — but insufficient — basis for cloud, and Cisco Domain Ten portrays this very well.  That said, automation and orchestration is, in my view, one of the two or three most important domains to focus on when transforming a data center and when planning a cloud architecture.  Automation is quite simply fundamental to delivering benefits such as cost reduction, elasticity, rapid service delivery, and agility to your end users, stakeholders, and customers.  So what are the key problems we in Cisco Services can help you with in this domain?

First, though, let me clarify some of the terminology used in this space.  I “grew up” working in service provider network management, where initially (being honest!) we confused the task of “configuring” (individual) devices with the more holistic act of “provisioning” the complete service.  I thought I’d got it.  Then a few years ago, the term “orchestration” became part of the technology vernacular, particularly in the data center. Orchestration, in my mind, reflects the increasingly complex multi-layer, multi-device, multi-system, multi-faceted nature of data center services today.  Let’s think about it: to deliver a new service (application), you may have to provision and orchestrate across multiple application components, apply patch revisions, create one or more new virtual machines via your hypervisor(s), and configure a range of server, virtual switch (Cisco Nexus 1000V), data center switching, firewall, and load balancer devices.  Do you really want to do this manually?!
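To make the point concrete, here is a minimal Python sketch of the thing an orchestration engine adds on top of plain per-device configuration: an ordered, multi-device workflow with rollback of completed steps when one fails. This is purely illustrative; the step names are hypothetical and this is not based on any Cisco product.

```python
# Toy orchestration engine: run provisioning steps in order and roll back
# completed steps (in reverse order) if any step fails.
class Step:
    def __init__(self, name, action, undo):
        self.name, self.action, self.undo = name, action, undo

def orchestrate(steps):
    done = []
    for step in steps:
        try:
            step.action()
            done.append(step)
        except Exception:
            # Undo everything that succeeded so far, most recent first.
            for completed in reversed(done):
                completed.undo()
            raise
    return [s.name for s in done]

# A hypothetical "deploy application" service broken into device-level steps.
log = []
steps = [
    Step("create_vm",       lambda: log.append("vm created"),         lambda: log.append("vm destroyed")),
    Step("config_vswitch",  lambda: log.append("vswitch configured"), lambda: log.append("vswitch reset")),
    Step("config_firewall", lambda: log.append("fw rule added"),      lambda: log.append("fw rule removed")),
    Step("config_lb",       lambda: log.append("lb pool created"),    lambda: log.append("lb pool removed")),
]
print(orchestrate(steps))
```

In a real product the actions would call device and hypervisor APIs rather than append to a list, but the ordering-plus-rollback shape is the essence of a workflow engine.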

Many of you will undoubtedly select a commercially available orchestration product, and invest substantial time making that selection.  In Cisco today, while our Cisco Intelligent Automation for Cloud product fulfils the needs of domains 3, 4, and 5 (Orchestration, User Portal, and Service Catalog respectively), it’s the Cisco Process Orchestrator component of this solution that provides the intelligent, flexible, and most importantly programmable engine to support your orchestration needs.  In our customer deployments of Cisco Intelligent Automation, we have found that customers invariably ask us to help them design and deploy custom features, i.e., features specific to their requirements and business goals.  This means — for me at least — that the highly flexible, programmable, workflow-based nature of the Cisco Process Orchestrator is probably the most important aspect of this solution.  So it’s wise to carefully consider which solution meets your needs best.

However, let’s now consider what in practice turns out to be a more significant set of challenges.  Here is the analogy: it’s one thing to choose the car; it’s another thing to be able to drive it.  Likewise, when I worked in service provider network management, it was one thing to design a user interface that showed a series of SNMP MIB values, but it took a different engineering skill set to develop a user interface that genuinely delivered productivity and insight to the end user.

So when I hear about the challenges of cloud orchestration, selection of the orchestration solution is, I would argue, the easier part.  The more significant challenge is making it deliver for your cloud services.  First, you have to get your service catalog right (more on this later).  Then, for each IT service you plan to deliver, you need to translate the service definition into a series of application, server, and device actions: multiple systems, multiple operating environments, multiple management interfaces, and no doubt each with its own “nuances” or “black art” approach to configuration.  And you need to support all of these in your orchestration engine.  You often require deep intellectual property to make this translation — it’s not sufficient to just know how the orchestration engine works.

This is where Cisco Services brings real value to your team.  Our team working in the orchestration domain knows all about cloud services design and implementation.  They know about applications, software components, hypervisors, and network, storage, and server devices.  Oh — and they also know how best to program your orchestration engine to realise the services you need to transform your approach to IT delivery.  So don’t just choose a vendor that provides management and automation tools.  Make sure you choose a partner that can deliver orchestration and data center intellectual property and expertise.





Demystifying the Catalyst: IOS Device Sensors

In this blog, let us take a look at how Catalyst access switches profile connected devices and make that information available to network services.

Many devices, such as laptops, IP phones, and cameras, are connected to the network and need to be managed by IT for asset management, device onboarding, switch configuration, policy management, and device energy management. Traditionally, IT administrators manually added each device for each service, which adds unnecessary overhead and is an inefficient use of IT’s time.

Cisco IOS Device Sensor is an embedded Cisco IOS feature available on Cisco access switches. It automatically profiles connected user devices such as IP phones, wireless access points, laptops, cameras, and video conferencing terminals. It gathers information such as device type and sends it to the various network services automatically. For example, in a BYOD scenario, Cisco ISE (Identity Services Engine) profiles connected devices using IOS Device Sensor and in turn configures switches with the appropriate access policies. Cisco Auto Smartports is another service that makes use of the device profile information from IOS Device Sensor: it sets appropriate QoS and security settings on the switch based on the device. Here is a use case that shows the impact of Cisco IOS Device Sensor.

Without Cisco IOS Device Sensor: Before an IP phone can be connected to the network, an IT administrator must configure the access switch to assign the traffic to the correct voice VLAN and to set appropriate QoS settings. This manual approach must be repeated on each access switch in the network when deploying IP phones.

With Cisco IOS Device Sensor: Cisco IOS Device Sensor on the access switch detects the IP phone and automatically sends the information to the Cisco Auto Smartports feature in the switch. Cisco Auto Smartports applies the right VLAN and QoS policies for voice traffic based on the configured policy. No manual intervention is required, and it works automatically on every access switch for all connected IP phones.
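The pattern is easy to picture in code. Below is a toy Python sketch of the profile-to-policy idea behind Device Sensor feeding Auto Smartports; the profile names and policy values here are invented for illustration and are not actual IOS macros or settings.

```python
# Map a profiled device type to the port settings a switch would apply.
# Profile strings and policy values are hypothetical examples.
POLICIES = {
    "ip-phone":     {"voice_vlan": 110, "qos": "trust dscp"},
    "access-point": {"vlan": 20, "qos": "trust cos"},
}

def apply_port_policy(device_type):
    """Return the port policy for a profiled device, or a default/guest policy."""
    policy = POLICIES.get(device_type)
    if policy is None:
        return {"vlan": 999, "qos": "best-effort"}  # unknown device: restrictive default
    return policy

print(apply_port_policy("ip-phone"))
```

The real feature derives the profile from protocol data (CDP, LLDP, DHCP) rather than a string, but the automation win is the same: the policy follows the device, not the administrator.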

So what are the benefits to IT?

  • Cisco IOS Device Sensor automates configuration of switches for connected devices, simplifying network administration and accelerating the deployment of new devices.
  • Cisco IOS Device Sensor minimizes the need for each network service to continuously scan the network to collect identity information.
  • Provides a ‘plug and play’ experience for both the end user and the IT administrator.

This functionality is available in the following Catalyst platforms:


From Hudson River to Data Center: When Teamwork, Process and Respect Save the Day

This is my first year as an attendee at the Gartner DC conference.  I’ve been here once before working demos on the tradeshow floor, but this year it’s purely about information gathering.  Tradeshow floors are great: you get to wander around and chat with a captive audience of your industry peers, partners, and “frenemies” while collecting pens and light-up bouncy balls.  Based on where the swag really ends up, I think the pen purchasers need to start considering logo-branded crayon packs. But there is so much to learn in the conference sessions, even the most unexpected ones.

My primary takeaways from the initial keynotes were that Hadoop is a strong early-adoption application candidate for cloud in a non-virtual context (Hadoop in the data center was recently covered in Jason Rapp’s blog), that commodity compute is the leader in cloud computing (I cried a little on the inside at this one), and that personnel development and team building is one of the biggest factors in an IT success story.

The day one celebrity keynote came from Captain Chesley Sullenberger, which seemed out of place before I listened to him.  His talk about how teamwork, process, and respect led to his success in pulling off that harrowing landing on the Hudson tied in well with the people aspect of organizations, and was a very enjoyable listen.

These takeaways seem to me even more critical as IT organizations have to evolve their data centers quickly to meet demanding business requirements, without expecting additional resources.

Gartner does a very nice job of interactive polling within their conference.  The opening keynote’s audience poll (roughly 2,000 attendees?) revealed that budgets are edging up, but for the greatest number of attendees they are mainly flat.

It seems that 34% of the audience has to deal with a flat budget, 20% of attendees benefit from a marginal increase (<5%), and 14% experience a small decrease (<5%).

Talking about data center evolution: as a Cisco guy, I absolutely had to attend (by choice) David Yen’s presentation.  David is our Sr. VP & GM in charge of our DC Technology Group, so he owns the big picture for anything Cisco in the data center. He holds a PhD and has extensive experience in compute, applications, and networking, acquired through executive roles at Sun Microsystems, Juniper, and Cisco. David’s talk was about the evolution of the data center and the relevance of Cisco. You may want to check the blog from Giuliano Di Vitantonio, VP Marketing Data Center and Cloud, with slides and videos: “The Evolving Data Center: Perspectives from the Gartner DC Conferences”.  In his presentation, David Yen covered some of the background for the evolution of the data center model, and the gains to be expected from the fabric model we see through FabricPath in optimizing the new East/West data patterns.



This all has a strong relationship to our Unified Computing System solution. Although a server platform “loaded with features” might be perceived to be at some disadvantage compared to commodity compute, we’re happy to see that in reality our customers have placed us at #3 in data center compute worldwide, and #2 in the US, for a platform that is only three years into the market, thanks to strong management capabilities, system agility, and dynamic integrated network functionality, as well as great TCO. As proof points, you may want to check Bill Shields’ blogs on this topic, and also the Cisco Build & Price website with its promotions of the month.

This conference also gave me the opportunity to discuss other, more technical topics such as security for cloud and virtual services.
So stay tuned, as I will be back in January for additional conversations.




The path to mature Infrastructure and Operations is through culture?

As the Gartner Data Center conference in Las Vegas, NV closes, I can’t help noticing the irony in the contrast between the culture of service this city embodies and the experience many of us have interacting with Information Technology organizations today.

Moving any taboos about Vegas aside, the experience here is about an immersion into a culture of service. From the moment you step into a hotel to the moment you sit down to test your gaming fortunes, your experience is facilitated by professionals whose job it is to ensure you have a good time. Whether greeting you at the door, serving that fine cocktail or dealing your next hand of blackjack, an excellent experience is made possible by people who know how to be of service.

In contrast, many IT organizations today struggle to provide such a positive service experience to those seeking to use IT resources for their own productivity. Having some experience of my own in the world of hospitality, I was delighted but not surprised to observe that the conference lunch staff had a plan to ensure everyone who finished a session around lunch time was fully accommodated. Each attendee was guided to the next available seat and immediately greeted with a fresh salad, iced tea, and a warm roll. Careful attention was paid to whether I wanted more or less of something, and whether I was ready for what was next. Throughout lunch, the attentive wait staff’s pleasant, positive attitude satisfied my expectations.

What would it take to bring this culture of service excellence to users and organizations? Users of IT resources need the assistance and care of IT professionals so that they can be fully enabled for productivity.

Thankfully, while attending presentations on Infrastructure and Operations, I noticed a recurring theme around what it will take to mature the IT services in organizations today: the message pointed directly to a problem of culture.

In an example of how a change in culture really can transform productivity, Jarrod Green describes in his session, “Kill the IT Service Desk: Create a Business Productivity Team to Transform IT From the Grassroots”, the concept of the Business Productivity Team (BPT). Jarrod discusses business productivity teams having a singular focus on enabling business outcomes through:

1. Extending the capabilities of current and new IT resources
2. Proactively identifying solutions to problems
3. Understanding of and alignment with business challenges
4. Enabling user self-sufficiency and digital literacy
5. Establishing a trusted-advisor relationship with the business

This savvy service team sounds really excellent! But what does it look like?

It starts with someone who has knowledge of both technical and business processes. Instead of being an expert up in the ivory tower, they meet users face to face where they are, leading them in solving their technical problems and teaching them about a new feature or a way to do their work faster and smarter. Because a Business Productivity Team is customer oriented, it earns the ability to influence by building partnerships and driving the consumption of features in current and new technologies that add value.

Wow, I must have stepped into an imaginary organization whose culture expects nothing less and rewards its professionals well! A pretty serious culture change is necessary in order to facilitate this unique capability.

In working with customers during services engagements, I am often asked by CIOs and IT management how they can mature their organization into a strategic differentiator for the business they support.  When focused on the evolution of customer service, support, and the improvement of end-user experience, I often refer to the “Fanatical” customer support that differentiates Rackspace as a market leader in data center and cloud services.  Rackspace’s support model embodies the spirit of enabling productivity and success as the outcome for its customers.

We can speak endlessly about novel technologies that create all kinds of efficiencies and time saved for users. But to get the most out of the investment in technology, an evolved IT service desk that drives productivity and end-user satisfaction is needed for that next step toward an extraordinary IT organization.  Within the Operate Practice in Cisco’s Advanced Services, we strive to help customers achieve operational excellence in the planning, building, and management of their IT investments.

In my coming posts I will share more about what I think the IT organization of the future, enabled by new cloud tools and processes, will look like. More importantly, I want to bring forward what I think a proactive, inspiring and value-creating culture looks like for both IT teams and the organizations who depend on them.


Journey to “Self-Healing” Enterprise Networks

Within the IT and network process operations community, automation started with big hype and the promise of “self-healing” solutions for systems, networks, and process automation.  Remember the promise of “robotics”?  Wouldn’t it be great to have our servers, systems, and networks solve their own problems? That would lead to more stable systems and networks, in which system and network administrators would be free to work on higher-priority activities and be more productive, improving the quality of enterprise solutions. Though it is a noble goal, IT and network process automation did not deliver on its full promise; instead, it started us on the journey toward that goal.

There are many reasons why IT process automation solutions for the network domain have not fulfilled their promise. Many people have done in-depth analyses, which can be summarized as two big inhibitors to the widespread adoption of automation in network operations:

  1. The need for out-of-the-box workflow templates that enable rapid development of network operation process automation for quick wins
  2. The need for in-depth understanding of complex network implementations, with domain knowledge of enterprise processes and industry best practices for support

Regular network operations include many tasks such as routine setup, repetitive maintenance, support processes, and, very importantly, time-consuming troubleshooting. The following figure shows key drivers for network operations automation:

Drivers for Network Operations Automation


With an understanding of the drivers for network operations automation, one should also examine the myth and long-held belief that such automations are costly endeavors. If implemented without the appropriate architecture and support, they can be. But it is very important to understand that they don’t have to be: with the right solution and support in place, you can automate and streamline your network operations and actually save your business money in the long run.

Lastly, when it comes to network operations automation, you need to build on your success step by step. In my next blog, I will discuss steps to overcome the inhibitors to network operations automation, using an appropriate solution architecture with in-depth intellectual property in the network domain and support services from industry experts.




The Evolving Data Center: Perspectives from the Gartner DC Conferences

I have just come back from the Gartner Data Center conferences in London and Las Vegas, where I witnessed the increasing relevance of Cisco in the data center. The critical role of the network in enabling the world of many clouds has become evident, and Cisco continues to establish itself as an innovator in the server market.  Our vision and solutions grabbed the attention of analysts and customers at a level that I certainly didn’t see last year.
Data center consolidation, server virtualization, and converged infrastructure continue to be chief concerns among decision makers.  Emerging topics such as fabric-based infrastructure, hybrid cloud, and network programmability were definitely the focus of numerous presentations and endless conversations.

Cisco continues to innovate on all these fronts, and we had a lot of progress to present to the audiences in London and Vegas.

Three Insightful Conversations 

I’d like to share with you three conversations I had at the Gartner DC Conference in Las Vegas. Two are with the sales and engineering leaders for Cisco Data Center, Frank Palumbo (@fpalumbo) and David Yen, and the third is with one of our partners, Siki Giunta from CSC, who participated on a panel on Cloud that I moderated.

Frank Palumbo on convergence, virtualization, network programmability, and SDN

In the first conversation, Frank Palumbo, VP Global Sales, reports some of the major concerns of the IT organization.  Our conversation covers:

  • The new role of the “cylinders of excellence” — the server, network, storage, and security teams — when the goal is to implement a converged infrastructure;
  • The benefits of deploying unified computing in environments where virtualization coexists with “bare-metal” workloads; and
  • Network programmability and SDN.

David Yen on the evolving data center

My second conversation was with David Yen, Cisco SVP & GM, Data Center Group, who gave a great presentation to more than 600 attendees called “The Evolving Data Center:  Past, Present, and Future.”

Cisco has a unique position in this evolving data center as a key player in network, security, storage access, virtualization, and now servers with Unified Computing.  In fact, the combination of Unified Computing, Unified Fabric, and Unified Management with the technologies of recently acquired companies such as Cloupia represents a terrific value proposition for companies looking to deploy cloud infrastructure.

On the topic of network programmability, David insisted on the importance of going beyond SDN (Software Defined Networking) to really embrace Cisco ONE (Open Network Environment), which offers the flexibility to choose between different deployment models: platform APIs, controller software, or a virtual overlay network. This realistic vision gives our customers the opportunity to use hybrid implementations and build upon existing infrastructure with investment protection.

Watch this dialog with David Yen in Las Vegas to discover the role of the new data center and what drives Cloud computing infrastructure deployment evolution:

Cloud Supply and Demand: The Customer Perspective 

To illustrate the range of sourcing options that cloud brings to IT shops, I moderated a panel at Gartner DC Las Vegas on the topic of “Cloud Supply and Demand: The Customer Perspective” with Siki Giunta, CSC Global VP Cloud Computing, and PayPal’s Ryan Carrico of the Advanced Technology and Research Team.

I invite you to watch this dialog with Siki Giunta. CSC is positioned by Gartner in the top right corner of their IaaS “magic quadrant” so Siki’s perspective is one you won’t want to miss:

For Siki, the three major success factors for an enterprise-class cloud offering are security, scalability, and global reach.  This differentiation is made possible through the CSC-Cisco partnership and the Unified Data Center solution.

Growing momentum of data center evolution

As these three conversations reveal, data center evolution is gaining momentum.

And the network is increasingly important as a critical element in the evolving data center, making possible converged and fabric-based infrastructure, hybrid clouds, and network programmability.

Stay tuned for more info and insights from Cisco on data center evolution…we’re working hard with our partner ecosystem to give you the solutions and tools you need!


Where’s My IPv6 Prefix? Part Deux

My previous post examined how a regional Provider Independent (PI) prefix is propagated across the Internet.  This post discusses the second aspect of the issue: how does Provider Assigned/Aggregateable (PA) space propagate across the Internet?

The second aspect of prefix propagation has to do with organizations that receive a direct assignment that does not fall within the PI block.  These assignments are typically made from the PA space that the registries hold.  The intent for PA space is for its owner, typically a service provider (SP), to be the only party announcing that prefix to other organizations.

For this case study, I’ll use the 2001:420::/32 prefix that has been assigned to Cisco by the American Registry for Internet Numbers (ARIN).  I’ll use the same method to track down how this prefix and other prefixes in this range are propagating.

For the ARIN region, I’m using the Global Crossing route server:

    route-server.phx1>sh bgp ipv6 uni | incl 2001:420

    * i2001:420::/32    2001:450:2001:8018::1
    * i2001:420:1::/48  2001:450:2001:8018::1
    * i2001:420:4::/48  2001:450:2001:8018::1
    * i2001:420:5::/48  2001:450:2001:8018::1
    * i2001:420:80::/48 2001:450:2001:8018::1
    * i2001:420:81::/48 2001:450:2001:8018::1
    * i2001:420:1000::/40
    * i2001:420:1100::/41
    * i2001:420:2000::/37
    * i2001:420:2000::/35
    * i2001:420:207F::/48
    * i2001:420:4000::/36
    * i2001:420:4420::/48
    * i2001:420:54BF::/48
    * i2001:420:54FE::/48
    * i2001:420:C0C0::/46

Note here that the 2001:420::/32 prefix has been broken up into 15 component prefixes.  Doing some further digging into the 2001:420::/32 block, you can see that Cisco is multi-homed to a couple of different SPs.  Note that I have cut some output to shorten it for this blog.

    route-server.phx1>sh bgp ipv6 uni 2001:420:4000::/36
    BGP routing table entry for 2001:420:4000::/36, version 268453350

    Paths: (17 available, best #11, table default)
      Not advertised to any peer
      1239 109, (received & used)   <-- AS 1239 is Sprint
        2001:450:2001:8018::1 from (
          Origin IGP, metric 50, localpref 200, valid, internal
          Community: 3549:2351 3549:30840
          Originator:, Cluster list:
    route-server.phx1>sh bgp ipv6 uni 2001:420:1000::/40
    BGP routing table entry for 2001:420:1000::/40, version 269696093
    Paths: (24 available, best #9, table default)
      Not advertised to any peer
      3356 109, (received & used)   <-- AS 3356 is Level 3
        2001:450:2001:8018::1 from (
          Origin IGP, metric 100, localpref 201, valid, internal
          Community: 3549:2355 3549:30840
          Originator:, Cluster list:
    route-server.phx1>sh bgp ipv6 uni 2001:420:54fe::/48
    BGP routing table entry for 2001:420:54FE::/48, version 263425868
    Paths: (6 available, best #4, table default)
      Not advertised to any peer
      6939 109, (received & used)
        2001:450:2001:8018::1 from (
          Origin IGP, metric 100, localpref 200, valid, internal
          Community: 3549:2722 3549:31276
          Originator:, Cluster list:

Breaking up a large block like a /32 makes some sense if your organization is trying to do load balancing, provide high availability, or has a global presence.  As mentioned in my previous blog, keep in mind the guidance in RIPE-399 and RIPE-532, and use de-aggregation judiciously and in accordance with what your SP supports.
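Incidentally, you can verify for yourself that every one of those more-specific announcements falls inside Cisco’s /32 allocation; Python’s standard ipaddress module makes this a one-liner per prefix:

```python
# Check that the more-specific routes seen at the route server are all
# covered by the 2001:420::/32 aggregate.
import ipaddress

aggregate = ipaddress.ip_network("2001:420::/32")
more_specifics = [
    "2001:420:1::/48", "2001:420:1000::/40", "2001:420:2000::/35",
    "2001:420:4000::/36", "2001:420:54fe::/48", "2001:420:c0c0::/46",
]
for prefix in more_specifics:
    assert ipaddress.ip_network(prefix).subnet_of(aggregate)
print("all more-specifics are inside", aggregate)
```

This kind of quick containment check is handy when auditing what your own organization is leaking beyond its assigned aggregate.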
If we look into a route server in the RIPE region, we see that all the prefixes have propagated into that region.  Please note that I clipped some of the command output for brevity and to highlight that the prefixes are there.

    OpenTransit/France Telecom Route Server> show route protocol bgp 2001:420::/32 all terse
    inet6.0: 10444 destinations, 54461 routes (10444 active, 0 holddown, 0 hidden)
    Restart Complete
    + = Active Route, - = Last Active, * = Both
    A Destination        P Prf   Metric 1   Metric 2  Next hop        AS path
    * 2001:420::/32      B 170        85       100 >2001:688:0:3:4::4 7018 109 I
    * 2001:420:1::/48    B 170        85       100 >2001:688:0:3:4::4 6939 109 I
    * 2001:420:4::/48    B 170        85       100 >2001:688:0:3:4::4 6939 109 I
    * 2001:420:5::/48    B 170        85       100 >2001:688:0:3:4::4 6939 109 I
    * 2001:420:80::/48   B 170        85       100  2001:688:0:3:4::4 6939 109 I
    * 2001:420:81::/48   B 170        85       100 >2001:688:0:3:4::4 6939 109 I
    * 2001:420:1000::/40 B 170        85       100 >2001:688:0:3:4::4 7018 109 I
    * 2001:420:1100::/41 B 170        85       100 >2001:688:0:3:4::4 7018 109 I
    * 2001:420:2000::/35 B 170        85       100  2001:688:0:3:4::4 3356 109 I
    * 2001:420:2000::/37 B 170        85       100  2001:688:0:3:4::4 3356 109 I
    * 2001:420:207f::/48 B 170        85       100  2001:688:0:3:4::4 6939 109 I
    * 2001:420:4000::/36 B 170        85       100  2001:688:0:3:4::4 1239 109 I
    * 2001:420:4420::/48 B 170        85       100 >2001:688:0:3:4::4 6939 109 I
    * 2001:420:54bf::/48 B 170        85       100  2001:688:0:3:4::4 6939 109 I
    * 2001:420:54fe::/48 B 170        85       100  2001:688:0:3:4::4 6939 109 I
    * 2001:420:c0c0::/46 B 170        85       100  2001:688:0:3:4::4 1239 109 I

Similarly, I can look into a looking glass in the APNIC region to see what is happening.  From the Hurricane Electric looking glass in Singapore:

    > show ipv6 bgp routes detail 2001:420:54fe::/48
    Number of BGP Routes matching display condition : 1
    1       Prefix: 2001:420:54fe::/48,  Status: BI,  Age: 166d22h44m54s
             NEXT_HOP: 2001:470:0:1ee::2, Metric: 1903,  Learned from Peer: 2001:470:0:1b::1 (6939)
              LOCAL_PREF: 140,  MED: 1,  ORIGIN: igp,  Weight: 0
             AS_PATH: 109
                COMMUNITIES: 6939:1000 6939:6000
           Last update to IP routing table: 1d22h19m31s


From the Sprint looking glass in Medellin, Colombia:

    Sprint Source Region: Medellin, Colombia (sl-gw10-med)
    Performing: Show Route
    BGP routing table entry for 2001:420:54FE::/48, version 2421465
    Bestpath Modifiers: deterministic-med
    Paths: (2 available, best #2, table Global-IPv6-Table)
      Not advertised to any peer
      6939 109
        2600:0:1:1239:144:228:241:41 (metric 851) from (
          Origin IGP, metric 4294967294, localpref 90, valid, internal
          Community: 1239:666 1239:667 1239:1000 1239:1007
          Originator:, Cluster list:,
      6939 109
        2600:0:1:1239:144:228:241:41 (metric 851) from (
          Origin IGP, metric 4294967294, localpref 90, valid, internal, best
          Community: 1239:666 1239:667 1239:1000 1239:1007
          Originator:, Cluster list:,

From the South African IX route server:

    tpr-route-server>sh bgp | incl 2001:420:
    * i2001:420::/32    ::FFFF:
    *>i2001:420:1::/48  2001:4208:100::12
    *>i2001:420:4::/48  2001:4208:100::12
    *>i2001:420:5::/48  2001:4208:100::12
    *>i2001:420:80::/48 2001:4208:100::12
    *>i2001:420:81::/48 2001:4208:100::12
    * i2001:420:1000::/40
    * i2001:420:1100::/41
    * i2001:420:2000::/37
    * i2001:420:2000::/35

In this blog, I’ve tried to show that in today’s IPv6 Internet, PA prefixes are propagating across all regions without regard to where the prefix originates.  The major policy point that defines prefix propagation today is prefix length: the operational community has settled on the /48 prefix length as the current boundary where filtering policy is applied.  The origin of the prefix does not figure into current filtering policy.  This does not mean that you should go forward and de-aggregate your /40 prefix into 256 /48 prefixes.  But it does mean that you should analyze your requirements and see where aggregation or de-aggregation is an appropriate solution.
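To put a number on that warning, a quick check with Python’s ipaddress module (using a documentation prefix as a stand-in) shows how fast de-aggregation multiplies routes:

```python
# Each additional bit of prefix length doubles the route count:
# a /40 de-aggregated to /48s becomes 2**(48-40) = 256 announcements.
import ipaddress

block = ipaddress.ip_network("2001:db8:100::/40")  # documentation prefix as a stand-in
subnets = list(block.subnets(new_prefix=48))
print(len(subnets))  # 256 routes where one would do
```

Multiply that across every organization holding PA space and the motivation for the RIPE aggregation guidance becomes obvious.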

A last note here, which should sound familiar: I again want to promote working closely with your SP to ensure that they are capable of supporting your needs.  A quick search on the Web yields two providers who have posted their prefix policies: NTT and Sprint. These policies can certainly change over time and should be monitored closely, both by talking with your SP and by using the tools available to look into how SPs across the globe are implementing their policies.


College Migrates from SPARC to Cisco UCS

When it comes to their IT infrastructures, academic institution IT teams have a lot in common with IT departments in the business world. Both need to offer their customers the flexibility to access applications and resources at any time, from any place, and on any device. They also need to provide these services with limited budgets and administrative resources while maximizing the efficiency of the data center.

Sheridan College, renowned for its leadership in the field of digital media studies, serves approximately 18,000 full-time students and 35,000 part-time students a year. Its IT department supports up to 18,000 active network connections, and each student may use multiple devices. It also runs applications that serve faculty and staff; key applications such as Oracle PeopleSoft and Oracle Database run on Cisco UCS. During open enrollment there are as many as 5,000 concurrent connections per second.

By migrating off its legacy SPARC architecture and consolidating its data center on Cisco UCS, the Sheridan College IT organization was able to realize tremendous benefits:

  • Improved infrastructure virtualization from 40 to 85 percent.
  • Increased capacity while reducing the number of physical servers needed from 200 to 70.
  • Reduced power consumption by 78 percent.
  • Improved service levels to nearly 100 percent uptime.
  • Achieved a highly efficient 100-to-1 server-to-administrator ratio.

Most importantly, the Cisco UCS deployment has allowed the Sheridan IT team to shift focus from maintaining the infrastructure to leveraging the extra data center space for new, innovative projects. The goal is the continued success of their customers; in this instance, the customers are students.

Sound familiar? These are exactly the same type of goals and results that we hear from our corporate case studies.

The full Sheridan College case study is available here.  Please check out our RISC/UNIX migration program page for additional case studies, performance briefs, solution briefs, white papers, and migration guides.

Cisco Validated Designs for Cloud, Part 4: Virtualized Multiservice Data Center 3.0

Over the past weeks, Tom Nallen introduced the concept and benefits of the Cisco Validated Design; then Laszlo Bojtos illustrated this concept with the Cloud Service Assurance for Virtualized Multi-Services Data Center 2.2 Cisco Validated Design, with a specific emphasis on the integration with Cisco Intelligent Automation for Cloud. Finally, John Kennedy shared with us the latest news on FlexPod.

This week, I met Johnny Tung, Systems Marketing Manager for Data Center Solutions, to talk about a very interesting announcement: the Virtualized Multiservice Data Center.

“Johnny, can you tell us what happened to Cisco’s Unified Data Center on Dec 3rd?

Well…it just got more interesting! You may have heard of Virtualized Multiservice Data Center. Let me remind you. It is Cisco’s reference architecture for the Unified Data Center. The big news here is that we have just released the 3.0 design. We are introducing Cisco FabricPath into the Unified Data Center network in order to simplify and scale Cloud Ready Infrastructure designs for Private and Virtual Private Cloud deployments.



FabricPath simplifies and expands existing data center network design by removing the complexities of Spanning Tree Protocol (STP), and thus enabling more extensive, flexible, and scalable Layer 2 designs. This release marks the introduction of FabricPath-based designs into VMDC;  further FabricPath-related VMDC releases will follow as Cisco develops and evolves its FabricPath offerings.

What does it mean for our customers?

Our customers can now leverage our Design Guide (public access) and Implementation Guide (partner access) for their cloud. And this info is available to them for free (as usual). If they are not already hardcore VMDC gurus, they may want to know that 3.0 does not obsolete the previous releases. Each VMDC release addresses a different tenancy model, so they can all co-exist.

What’s the best place to start to review the new contents?

You absolutely want to visit our Design Zone VMDC site to find detailed information on the new contents, but here is a quick summary that I put together:

VMDC Release 3.0 includes three design options to suit a variety of deployment scenarios, of increasing scale and complexity:

• Typical Data Center — represents a starting point for FabricPath migration, where FabricPath is used as a straight replacement for older Layer 2 resilience and loop-avoidance technologies, such as vPC and Spanning Tree.  This design assumes the existing hierarchical topology – featuring pairs of core, aggregation, and/or access switching nodes – remains in place, and that FabricPath provides Layer 2 multipathing.

• Switched Fabric Data Center — represents further horizontal expansion of the infrastructure to leverage the improved resilience and bandwidth characteristics of a Clos-based architectural model.

• Extended Switched Fabric Data Center — assumes further expansion of the data center infrastructure fabric for inter-pod or inter-building communication.

Use cases that have been specifically addressed in this release include:

• DC and PoD design

• Inter-PoD communication (multi-PoD or DC wide)

• Inter-PoD clustering

• Inter-PoD VM mobility

• Inter-PoD/Inter-building (intra-campus) service resilience

• Split N-tiered applications


The specific HW & SW components that were used for validation of VMDC Release 3.0 are:


Where can I find more?

For more information on VMDC please refer to:

For all other questions please contact:


Where’s My IPv6 Prefix? Part One

In a previous blog series about interfacing with your ISP, I mentioned tools that Internet Service Providers (ISPs) have, such as looking glasses and route servers, that can be used to verify their policies.  In this blog post, I want to examine some of those tools, but primarily I want to show how prefixes are propagating across the Internet. 

The question of prefix propagation comes up often when discussing how to develop an IPv6 address plan.  What happens if an organization gets Provider Independent (PI) space from a registry and then tries to advertise that prefix, or a smaller portion of that prefix, in a different region?  Will ISPs in that region filter the non-regional prefix?  Will they let the aggregate pass, but not the more specific prefixes?

There has been some guidance on prefix aggregation and de-aggregation published by the Réseaux IP Européens Network Coordination Centre (RIPE).  The RIPE-399 and RIPE-532 documents give guidance on how organizations should approach the aggregation and de-aggregation issue.  The guidance in the documents advises using aggregation when and where possible, and using de-aggregation judiciously to solve a specific problem.

The concern over the de-aggregation of prefixes stems from the potential for a huge increase in the number of prefixes that have to be carried.  IPv6 moves to a 128-bit address space and opens up the very real possibility of having to carry millions of prefixes.  The current IPv6 Internet routing table is ~10.5K prefixes; as a comparison, the current IPv4 Internet routing table is ~400K prefixes.  So we have a way to go before the IPv6 table approaches the size of the IPv4 table, but that does not mean we can ignore the problem.  Typically, it is better to work on issues when they are smaller and more easily handled.

One proposed solution is to get address space from each regional registry where your organization has presence.  I find this solution to be more about moving the issue around versus actually solving it.  In this case you are not necessarily reducing the number of prefixes, you are merely making people get more prefixes that also have to be advertised.  For example, if your organization has sites in Europe, North America, and Asia, then you need to get three prefixes.  You could equally get a single prefix from any one of those registries and break that prefix into three more specific prefixes.
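To make that route-count comparison concrete, here is a small sketch (using hypothetical documentation prefixes, not real registry assignments) showing that both approaches put the same number of routes into the global table:

```python
import ipaddress

# Option A: one PI prefix per region (three hypothetical documentation blocks)
per_region = [ipaddress.ip_network(p) for p in
              ("2001:db8:a00::/40", "2001:db8:b00::/40", "2001:db8:c00::/40")]

# Option B: one registry prefix, de-aggregated into three regional more-specifics
single = ipaddress.ip_network("2001:db8:d00::/40")
de_aggregated = list(single.subnets(new_prefix=42))[:3]

# Either way, three routes end up being advertised.
print(len(per_region), len(de_aggregated))  # 3 3
```

The table carries three entries in both cases; the only difference is how many registry relationships you maintain and who holds the covering aggregate.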

The above solution also relies on the development of a regional prefix aggregation policy.  For a regional prefix aggregation policy to work, you would have to advertise an aggregate for which you may not have more specific prefix information.  You can get into some very tricky routing situations when you advertise an aggregate but do not have all the specifics available.  Routing “black holes” can easily develop in that situation because the aggregate would cover a lot of the space that the organization was not responsible for routing.

There are two aspects to the prefix propagation issue.  One issue deals with PI address space and the other deals with Provider Assigned/Aggregateable (PA) space.  I will talk about the PI space issue in this post and the PA space issue in my next post. 

Let’s look at what happens with Provider Independent (PI) prefixes.   Each regional registry has defined a policy for PI space, and identified an address block from which to make address assignments to organizations within that region.  With each registry having its own defined PI policy, the concern is that operational policy will also develop around how PI-assigned prefixes propagate across the Internet.  The other concern is that an organization with a global presence will end up getting a PI prefix from each registry.

To try and address this concern, I can use several tools to track what is happening across the Internet for an organization’s prefixes.  To find an organization that is using PI space, I first use Dan Wing’s website that verifies whether sites are using AAAA records for their website.  One of the sites that is using PI space, and breaking that space up into more specific announcements, is the web site for Louisiana State University (LSU).  Using the American Registry for Internet Numbers (ARIN) whois search tool, I can see that LSU has been assigned 2620:105:B000::/40.  I can then check which prefixes LSU is advertising by using a route server.  Route servers are routers that Service Providers (SPs) have set up to give a view into how routing is working in their domain.  I have been using the Border Gateway Protocol Advanced Internet Routing Resources site to help me find route servers.

Using the AT&T IP Services route server, located in the US, I can see that LSU is advertising the block that it was assigned, plus three more specific prefixes from that block:

    route-server>sh bgp ipv6 uni | incl 2620:105:B
    *  2620:105:B000::/42
    *  2620:105:B000::/40
    *  2620:105:B040::/44
    *  2620:105:B050::/48

To check what is happening in other regions, I can go to Global Crossing’s European Route Server and have a similar peek into what prefixes have made it to Europe:

    route-server.ams2>sh bgp ipv6 uni | incl 2620:105:B
    *>i2620:105:B000::/42
    * i2620:105:B000::/40
    *>i2620:105:B040::/44
    *>i2620:105:B050::/48

Similarly I can check the South African Internet Exchange Route server:

    tpr-route-server>sh bgp | incl 2620:105:B
    * i2620:105:B000::/42
    * i2620:105:B000::/40
    * i2620:105:B040::/44
    * i2620:105:B050::/48

You can also use a “looking glass” to get a view into what’s happening in an SP domain.  When you use a looking glass, you are interfacing with a Graphical User Interface (GUI) that runs the command on a router for you.  I use the same site as above to help me find available looking glass sites.   In this case, I want to check out how LSU’s PI prefix is propagating in the APNIC region.  To check the propagation, I am using the BroadBand Tower looking glass in Japan.  Because the query is done on a per-prefix basis, I am just showing the check for the most specific prefix:

    BGP routing table entry for 2620:105:b050::/48
    Paths: (3 available, best #3, table Default-IP-Routing-Table)
      Not advertised to any peer
      4725 3356 32440 2055
        2001:278:0:2235::1 from 2001:370:100::12 (
          Origin IGP, metric 300, localpref 90, valid, internal
          Community: 9607:3252
          Last update: Tue Sep 11 16:49:35 2012
      2516 209 32440 2055
        2001:370:100::13 from 2001:370:100::9 (
          Origin IGP, metric 100, localpref 100, valid, internal
          Community: 9607:2011
          Originator:, Cluster list:
          Last update: Thu Oct  4 12:05:25 2012
      2516 209 32440 2055
        2001:370:100::13 from 2001:370:100::7 (
          Origin IGP, metric 100, localpref 100, valid, internal, best
          Community: 9607:2011
          Originator:, Cluster list:
          Last update: Thu Oct  4 12:05:25 2012

To check what is happening in the LACNIC region, I am using the RNP looking glass in Rio de Janeiro, Brazil.  Similar to the other looking glass output, I am specifically checking whether the most specific prefix has been advertised.  I’ll leave it to interested readers as homework to verify that I also get the same output for the other prefixes.

    Router: Rio de Janeiro, RJ
    Command: show route protocol bgp 2620:105:b050::/48 terse exact

    inet6.0: 11043 destinations, 11203 routes (11041 active, 2 holddown, 0 hidden)
    + = Active Route, - = Last Active, * = Both
    A Destination        P Prf   Metric 1   Metric 2  Next hop        AS path
    * 2620:105:b050::/48 B 170        125          0 >fe80::8271:1f0b:b8f4:9ae6 27750 11537 32440 2055 I

In this post, I have tried to show how providers across the globe are treating regional PI space.  I used some of the publicly available tools to track an ARIN PI prefix and how it propagates across regional registry boundaries.  This example is not meant to be a definitive example or show consistent policy.  It is merely a peek into how a particular prefix has propagated across the globe.  Please use this information as a starting point in your planning and talks with your SP about how IPv6 prefixes are handled.

Stay tuned for my next post where I will be continuing the prefix propagation discussion.
