Cisco UCS in a world with Windows

Earlier in the week I blogged about Cisco UCS in the world of open source computing. And now, with Microsoft TechEd right around the corner, I get to blog about Cisco UCS in a Windows world. Both are relevant now that Cisco is the #2 x86 blade server vendor worldwide with 17.6% of the market and in a statistical tie for #4 in the overall server category, according to research firm IDC. So if you plan to be in New Orleans next week for Microsoft TechEd, or if you use Microsoft technologies in your data center, you will find the following very interesting.

Last year Cisco UCS Manager, the single point of management for UCS domains, was the Best of TechEd winner in the Breakthrough Product category. If you use Microsoft PowerShell or the Microsoft System Center 2012 suite of products for management, you will definitely want to check out the demos we will have at TechEd. Cisco UCS PowerTool for PowerShell gives you a comprehensive set of commands, called "cmdlets," to manage all the components of a UCS domain. With Cisco UCS PowerTool, your operations team can tie together the management of storage components, computing components, and software applications into custom, end-to-end management solutions that are easy to use and easy to script.

If you use Microsoft System Center Operations Manager, you can download the Cisco UCS Management Pack for System Center Operations Manager and monitor the health of Cisco UCS. With the Management Pack, you can:

  • Monitor Cisco UCS devices such as Cisco UCS blades, chassis, and rack servers.
  • Correlate faults and events across bare metal and virtualized Cisco UCS infrastructure.


If you use System Center Orchestrator, you can get the Cisco UCS Integration Pack for System Center Orchestrator and automate UCS management. The Cisco UCS Integration Pack exposes prebuilt runbook actions that cover all management aspects of Cisco UCS. The Integration Pack enables IT administrators to:

  • Automate Cisco UCS management, improve predictability, expedite delivery, and reduce errors.
  • Deliver scalable and reliable Cisco UCS infrastructure through orchestrated workflows.
  • Provide consistent service across multiple systems and departments.
  • Optimize and extend Cisco UCS capabilities through integration with third-party management tools.

If you use System Center Virtual Machine Manager (SCVMM), you can see the entire UCS inventory with the Cisco UCS GUI extensions. With the extensions, you can:

  • Manage physical UCS servers and virtual Hyper-V instances in one place
  • Correlate hypervisors to Cisco UCS service profiles and physical servers
  • View and manage UCS servers directly from SCVMM by launching the UCSM GUI

As shown in the diagram above, all this is made possible by the .NET library, which provides a programming interface for all the Microsoft System Center integrations. The .NET library itself uses the UCS XML API to communicate with the Cisco UCS Manager. It is designed to support full manageability of UCS across multiple releases.
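Because the XML API is just HTTP plus XML, it can also be exercised directly from any language, not only through the .NET library. Below is a minimal Python sketch; the /nuova endpoint and the aaaLogin and configResolveClass methods follow the published UCS XML API, but the host, credentials, and query values are placeholders, not a supported client.

```python
# Minimal sketch of talking to the Cisco UCS Manager XML API directly.
# aaaLogin returns a cookie (outCookie) that authenticates later calls;
# configResolveClass queries all managed objects of a given class.
import urllib.request
import xml.etree.ElementTree as ET

def build_login(username: str, password: str) -> bytes:
    """Build an aaaLogin request document."""
    elem = ET.Element("aaaLogin", inName=username, inPassword=password)
    return ET.tostring(elem)

def build_blade_query(cookie: str) -> bytes:
    """Query all blade objects (class computeBlade) in the UCS domain."""
    elem = ET.Element("configResolveClass", cookie=cookie,
                      classId="computeBlade", inHierarchical="false")
    return ET.tostring(elem)

def post_xml(host: str, payload: bytes) -> bytes:
    """POST a request to UCS Manager's XML endpoint (network call, not run here)."""
    req = urllib.request.Request(f"https://{host}/nuova", data=payload,
                                 headers={"Content-Type": "text/xml"})
    with urllib.request.urlopen(req) as resp:
        return resp.read()

if __name__ == "__main__":
    print(build_login("admin", "secret").decode())
```

The PowerTool cmdlets, the .NET library, and the System Center integrations all ultimately reduce to request/response exchanges of this shape.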

So if you are at Microsoft TechEd 2013 in New Orleans next week, be sure to visit our booth to check out these capabilities in Windows environments.


Lippis Podcast on All Things Great and New with VXLAN

This week Nick Lippis from the Lippis Report sat down with Cisco Nexus 1000V Product Manager Han Yang to talk about the latest enhancements and trends with VXLAN, the primary virtual overlay tunneling technology in Cisco Nexus 1000V virtual networks.

In this 15-minute podcast (registration required), Han touches on three key innovations in the VXLAN area: 1) Cisco's approaches to eliminating VXLAN's requirement for IP Multicast (which I blogged about earlier here), 2) support for virtual services, like virtual firewalls, with vPath (which I blogged about here), and 3) the use and availability of VXLAN gateways to connect virtual workloads to physical workloads in mixed application environments (which I haven't blogged about yet, but probably should have :-) ).

Nick’s podcasts always provide a good perspective on emerging technology trends and he takes complex topics and really helps his listeners get up to speed on the important bits. We always enjoy the chance to work with him. Give it a listen and let us know what you think.


IT Transformation & Innovation Result in Operational Efficiencies & Better Customer Service

It is no small task to drive change within a well-established industry and organization, but it's not impossible. At Louisville Gas and Electric and Kentucky Utilities (LG&E and KU), where operations were inefficient and tools obsolete, change was needed. With no easy way to obtain data, employees were doing a lot of time-consuming manual paperwork. In addition, the disparate systems in which the data lived presented further complications. That is no way to stay competitive in the digital age.

On a quest for change

LG&E and KU deployed the Cisco® Unified Computing System™ (UCS), based on Intel® Xeon® processors. Using Cisco UCS in conjunction with business intelligence software, they found that information from all systems could be brought together, easily accessed, and efficiently reported. This, in turn, led to faster and more informed decision making.

But it didn't end there. The company is also now utilizing (pun intended) mobility, which allows for access to the system from anywhere, at any time. Users don't have to be on-site to obtain information that helps them complete their tasks in the field. And collaboration capabilities are proving extremely helpful in time-sensitive situations, such as power outages.

No surprise here: the benefits of LG&E and KU's IT transformation are also reflected in the customer service end of the business. Their customers are now experiencing lower costs and higher satisfaction.

Regardless of industry, the lessons learned from LG&E and KU's implementation can apply to your business as well. Engage with us and find out how.

Read more about the transformation of LG&E and KU and their journey to adopting Cisco technologies.

Cisco Innovations You Should Really Look at for Faster Adoption of Clouds

Among all the megatrends that have significant implications from an infrastructure perspective (as discussed here), the one for which customers in EMEAR currently expect the most from Cisco is Clouds, both in terms of technical guidance and architectural innovation.

In the Cisco Cloud approach, intelligence in the network can help ensure delivery of cloud services, provide access and services to the right users, and offer the flexibility to connect with public, hybrid, and community clouds.

As shown in the illustration to the right, three main elements must be considered to build an efficient cloud. In addition, the network needs to provide dynamic access to these resources, and the Cloud applications and services must deliver anywhere, anytime access.

With the emergence of Cloud architectures, innovation is required within the network to give IP two critical features it does not natively provide today: IP mobility and the Virtual Private Network (VPN).

With Clouds, applications can sit anywhere, and they can also be moved conveniently at any time. This means the network must be able to deal with resource mobility to deliver on the promise of clouds. And that is what LISP addresses.

For a LISP primer, you can start with the video below:

LISP is an overlay routing protocol in the sense that it decouples the core transport, which can be any IP transport, from the edge, which can be any IP application site. LISP is intended to allow dynamic resource provisioning independent of the network infrastructure. In short, any IP address can be positioned anywhere it is needed.
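The map-and-encap idea behind LISP can be illustrated with a toy model: endpoint identifiers (EIDs) stay fixed while routing locators (RLOCs) change as workloads move. The class names, RLOC strings, and addresses below are purely illustrative; this is a conceptual sketch, not the LISP wire protocol or control plane.

```python
# Conceptual sketch of LISP map-and-encap: an authoritative mapping system
# resolves EIDs to RLOCs, and an ingress tunnel router (ITR) caches lookups
# on demand (the "pull model" mentioned in the post).

class MappingSystem:
    """Authoritative EID-to-RLOC registry (stands in for LISP map servers)."""
    def __init__(self):
        self._db = {}

    def register(self, eid: str, rloc: str):
        # A VM move re-registers the same EID at a new RLOC; the EID never changes.
        self._db[eid] = rloc

    def resolve(self, eid: str) -> str:
        return self._db[eid]  # conceptually, a Map-Request / Map-Reply exchange

class IngressTunnelRouter:
    """Encapsulates packets toward the current RLOC, caching lookups on demand."""
    def __init__(self, mapping: MappingSystem):
        self.mapping = mapping
        self.cache = {}  # pull model: populated only for EIDs actually in use

    def encapsulate(self, dst_eid: str, payload: str) -> tuple:
        if dst_eid not in self.cache:
            self.cache[dst_eid] = self.mapping.resolve(dst_eid)
        return (self.cache[dst_eid], dst_eid, payload)  # outer RLOC, inner EID

ms = MappingSystem()
ms.register("10.1.1.5", "dc-west")   # workload's EID currently homed in dc-west
itr = IngressTunnelRouter(ms)
print(itr.encapsulate("10.1.1.5", "hello"))
```

Real LISP also handles cache invalidation when a mapping changes; that is omitted here, but the pull model is why the mapping state scales with active conversations rather than with the total number of endpoints.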

LISP capabilities are key to successful Cloud usage. Its virtualization capability allows multi-tenancy, while its mobility capability is mandatory for flexible provisioning.

Emarin Cloud Network Architecture Options Overview


By definition, Cloud providers offer services to a very large number of customers, and all these services must be provided in the same Data Center and over the same network. It is therefore as important to virtualize the compute resource as it is to virtualize the network.

Before LISP, MPLS was the only technology able to offer multi-tenancy. Now, LISP brings IP multi-instance transport, allowing IP for multi-tenants in the Cloud.

Virtual Private Cloud is the model for providing Cloud services to the Enterprise, and the access network between provider and customer may not be virtualized. LISP is an appealing technology for providing multi-instance transport over any type of IP access network. If security is required because the transport is public, LISP supports encryption.


There are two ways to consider mobility over the network in the Cloud. One way, inherited from existing networking models, is to have the provisioning system allocate the compute to one home Data Center site, from which the resource can be moved to any other site as operations require. The other way, which is more innovative, is to totally decouple the compute address space allocation from the physical site where it will run. Compute-to-network decoupling is one challenge that LISP can help solve.

In addition, mobility is a key enabler for hybrid clouds. A Hybrid Cloud service consists of establishing a relationship between the customer data center and the provider data center. This relationship allows cloud bursting, migration, backup services, and more. LISP allows the customer subnet to be extended to any place in the provider Cloud, creating the required relationship.

Cloud is about the mobility of the compute, but at large scale this will clearly have an impact on the network. LISP, with its pull model, is there to handle such a huge scaling factor.

Building a Hybrid Cloud Using LISP

Traditionally, outsourcing services meant fully dedicating a Data Center to a provider. With Software-as-a-Service (SaaS), providers can offer application outsourcing. With Infrastructure-as-a-Service (IaaS), providers can offer partial DC hosting. It is important to note that the success of such approaches is linked to how easily the provider Cloud can be integrated with the customer's existing resources.

Hybrid Cloud is about interconnecting the Provider Cloud to the Enterprise Private Cloud.

Two types of traffic have to be considered with the Hybrid Cloud. One is inter-subnet routed traffic, which is mostly LISP VPN transported traffic; the other is intra-subnet traffic. Intra-subnet traffic is an interesting new paradigm that ensures subnet continuity for cloud insertion, allowing providers to insert their application right into the heart of the customer data center. IP routing is not really able to provide such a subnet extension.

In the last few years, a new design approach has arisen on the market, called Data Center Interconnect (DCI), where VLANs are extended over the long-distance network, allowing extended subnets. The big question is then: is this architecture realistic in a Cloud approach where both ends of the DCI do not belong to the same owner? Can the broadcast domain of an enterprise really be extended to the service provider's domain? With its ability to extend subnet connections without extending VLANs, LISP is certainly an appealing solution.

To review, LISP is a protocol built for the Cloud, especially the Public Cloud, the Virtual Private Cloud, and the Hybrid Cloud as it offers two new fundamental capabilities to IP: Virtualization and Mobility. LISP offers both of these in a scalable manner.

If you have not yet, I would encourage you to have a look at Omar’s LISP posts. They’re a good resource. LISP is one of the many innovations Cisco is bringing to the industry to ease the journey to Clouds from an infrastructure perspective.

Are you already familiar with LISP? If so, what is your view? What are some of the use cases where LISP brings the most benefit to you?

Many thanks!


Will you make the Summit?

To all our Cisco partners in MEAR who have registered for our annual Partner Summit on 3–6 June 2013 in Boston, Massachusetts: I'd like to welcome you ahead of the event next week.

This year we're expecting an incredibly high turnout – nearly 3,000 partners from 150 countries who will be networking and discussing our Go To Market, the latest technologies, new markets, business transformation and leadership, and lots more.

Alongside the Summit in Boston, an estimated 10,000 people from our partner organisations will be taking part in the Virtual Partner Summit. This offers real-time access to the same content, speakers and resources – it’s the next best thing to being there in person and I’d encourage as many partners as possible to get involved.

The fast pace of business means it's easy to forget how much ground we've covered in the last 12 months. Among all the networking, meeting new people and sharing news, the Summit is our chance to look back and celebrate our mutual successes, recognising our partners' achievements and rewarding excellence.

Hot topics for 2013

The Internet of Everything will no doubt be a hot topic. During the three-day session, I'm expecting lots of debate about the impact of connecting people, process, data and things – and the new revenue streams it's creating.

Our partners in MEAR always have news, views and insights to share about expanding into new markets. For me, that’s what makes the Summit so crucial to building relationships across the globe. It’s the arena for extending your professional network with many of our top executives at Cisco including the MEAR leadership team and your fellow channel partners. It is our opportunity to listen to you and everyone’s chance to have some fun together.

If we don't get to meet in Boston, watch the Virtual Partner Summit as it happens, and follow the event on Twitter with #CiscoPS13 and #MEARps13. You can also catch up on any sessions you missed via our on-demand service, which will be available from the end of the Summit until 8 July.

See what’s on at the Cisco Partner Summit 2013.


Big Returns on Big Data through Operational Intelligence

Guest Blog by Jack Norris

Jack is responsible for worldwide marketing at MapR Technologies, the leading provider of an enterprise-grade Hadoop platform. He has over 20 years of enterprise software marketing experience and a demonstrated record of success, from defining new markets for small companies to increasing sales of new products for large public companies. Jack's broad experience includes launching and establishing analytic, virtualization, and storage companies and leading marketing and business development for an early-stage cloud storage software provider.

Big Data use cases are changing the competitive dynamics for organizations with a range of operational use cases. Operational intelligence refers to applications that combine real-time, dynamic analytics with business operations to deliver insights. Operational intelligence requires high performance. "Performance" is a word that is used quite liberally and means different things to different people. Everyone wants something faster. When was the last time you said, "No, give me the slow one"?

When it comes to operations, performance is about the ability to take advantage of market opportunities as they arise. Doing so requires the ability to quickly monitor what is happening. It requires both real-time data feeds and the ability to react quickly. The beauty of Apache Hadoop, and specifically MapR's platform, is that data can be ingested as a real-time stream, analysis can be performed directly on the data, and automated responses can be executed. This is true for a range of applications across organizations, from advertising platforms, to online retail recommendation engines, to fraud and security detection.
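The ingest-analyze-respond loop can be sketched with plain Python generators standing in for a streaming pipeline. The event fields, the threshold, and the "hold order" action are invented for illustration; a real deployment would use a streaming framework on the Hadoop cluster rather than in-process generators.

```python
# Toy sketch of the ingest -> analyze -> respond loop: events flow through
# the pipeline one at a time, as in a real-time stream.

def ingest(events):
    """Real-time feed: here, just an iterable of transaction events."""
    for event in events:
        yield event

def analyze(stream, threshold=1000):
    """Flag anomalously large transactions as they arrive (fraud-style check)."""
    for event in stream:
        if event["amount"] > threshold:
            yield {**event, "flag": "review"}

def respond(alerts):
    """Automated response: collect actions (in practice, block or notify)."""
    return [f"hold order {a['order_id']}" for a in alerts]

feed = [{"order_id": 1, "amount": 250},
        {"order_id": 2, "amount": 4800},
        {"order_id": 3, "amount": 90}]
actions = respond(analyze(ingest(feed)))
print(actions)  # only order 2 exceeds the threshold
```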

When looking at harnessing Big Data, organizations need to realize that multiple applications will need to be supported. Regardless of which application you introduce first, more will quickly follow. Not all Hadoop distributions are created equal. Or more precisely, most Hadoop distributions are very similar with only minor value-added services separating them. The exception is MapR. With the best of the Hadoop community updates coupled with MapR’s innovations, the broadest set of applications can be supported including mission-critical applications that require a depth and breadth of enterprise-grade Hadoop features.

With multiple applications comes the need to run complex workloads and coordinate all data flows across these applications. Event-driven, enterprise-ready workload automation is an important part of the entire solution, and MapR is working closely with Cisco on a number of fronts, including the Cisco Tidal Enterprise Scheduler (TES). TES ensures peak workload performance, efficiency, and scalability across enterprise environments through a single pane of glass that controls automated Hadoop workloads. The powerful and cost-effective Cisco Unified Computing System C-Series Rack Servers are increasingly being deployed in our customers' data centers. As you look at your organization, identify where to start, but realize that it's a journey.

Do you plan on attending Informatica World next week, June 4-7? If so, you don't want to miss Andrew Blaisdell's presentation on Integrating Informatica and Hadoop for Seamless BI Data Extraction, Thursday, June 6, from 9:00 to 10:00 am.

The Cisco Tidal Enterprise Scheduler (TES) Informatica Adapter is a critical part of Cisco Customer Value Chain IT’s integrated workflow. Connecting ERP data sources with Informatica’s data transformation capabilities for Hadoop consumption and extraction to Teradata, with output to Tableau BI, Cisco has created a high performance data-mining engine that has delivered immediate ROI through closed sales opportunities.

In this session, you will learn:

  • How Cisco IT is easily solving the Big Data dilemma by using a single pane of glass to develop and run automated Hadoop workloads
  • How to use the Cisco TES API integrations and standards-based controls to create event-based data mining techniques
  • How Cisco's best practices for Hadoop architecture achieve immediate ROI and lower the cost of managing Big Data mining environments

You can also visit the Cisco booth at Informatica World to see a demo of the Cisco Tidal Enterprise Scheduler and MapR.

See you at the show!


Cisco Datacenter Solutions at Microsoft TechEd North America 2013

Please visit Cisco's TechEd web site to learn more about what we have going on at the show and about Cisco's Microsoft capabilities.


I first attended Microsoft TechEd in 1996 in Los Angeles. What a learning experience it was.

This year, I’ll be there staffing the Cisco booth (#1701) speaking about the Cisco Unified Computing System.

If you answered “yes” to any of these questions, stop by and speak with the UCS server team.

Cisco will also be demonstrating:

Both the Nexus 1000V and FlexPod are finalists for the Best of TechEd 2013 awards. We are very hopeful this will be our third year in a row to win a Best of TechEd award.

Best of TechEd North America 2013 Finalist

If you won't be able to join us and would like to learn more about how Cisco is changing the economics of the datacenter, I would encourage you to review this presentation on SlideShare or my previous series of blog posts, Yes, Cisco UCS servers are that good.

  1. IDC Worldwide Quarterly Server Tracker, Q4 2012, February 2013, Revenue Share


Steps to Optimizing Your Network: Software Strategy

Although I'm calling it the software strategy, it really targets the development of two specific standards: a software image and a configuration template. Choosing the correct software will be a critical factor in the success of a deployment. All the design and best practice efforts will amount to nothing if a critical defect is encountered in software. Software defects have created their fair share of network outages, but what's most frustrating is that many of these outages are caused by well-known defects that could have easily been avoided with a little research!

Software Risk Analysis

It has been my experience that customers will often deploy a hardware platform with the software that came installed from the factory. As you can imagine, this leads to a very diverse deployment of software. In this instance, diversity is not good: it creates a scenario where you cannot manage the risk, and it is left to chance. Standardizing your software releases will put you in control of the risk and allow you to properly research software prior to deployment.

The decision tree for selecting software is pretty straightforward. The design will determine the hardware, and the hardware and feature requirements will define the software releases that can support the deployment. Well, that certainly sounds easy, but you're not done yet. There will undoubtedly be multiple releases of code available for download, so which is the correct release to choose? Shouldn't we just choose the latest release? The answer has to be adjusted to fit the specific circumstances. Let's say there are numerous releases available and the latest release was posted just a week ago. With all software, you have to understand that there is a chance the changes made in that release introduced what the industry calls a "regression" bug. Often these go undetected by the testing of the release and tend to be found in the field. This makes the "age" of the software rather important from a risk perspective. The first thing you would want to understand is which bugs the most recent release resolves and whether those fixes would impact your deployment. If it resolves no bugs relevant to your deployment, then you can lower your risk by deploying an earlier release with much wider distribution. Of course, if it does resolve a bug that would impact your deployment, then you will need to attempt to mitigate the risk in your own testing and piloting process (I will discuss change management strategy in my next post).
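The selection logic above can be sketched in a few lines. The version strings, fields, and the 90-day "wide distribution" threshold below are illustrative assumptions, not a Cisco tool: prefer the newest release only when it fixes a defect relevant to your deployment; otherwise favor an older release with more field exposure.

```python
# Sketch of the release-selection decision tree: a young release is worth
# the regression risk only if it fixes something that affects this deployment.
from datetime import date

def choose_release(releases, my_features, min_age_days=90, today=None):
    """releases: dicts with 'version', 'posted' (date), 'fixes' (feature names)."""
    today = today or date.today()
    for rel in sorted(releases, key=lambda r: r["posted"], reverse=True):
        age = (today - rel["posted"]).days
        fixes_relevant = bool(set(rel["fixes"]) & set(my_features))
        # Accept a release if it fixes a relevant defect, or if it has been
        # in the field long enough that regressions would likely be known.
        if fixes_relevant or age >= min_age_days:
            return rel
    return None

releases = [
    {"version": "15.1(2)", "posted": date(2013, 5, 20), "fixes": ["ipsec"]},
    {"version": "15.1(1)", "posted": date(2013, 1, 10), "fixes": ["ospf"]},
]
pick = choose_release(releases, my_features=["ospf"], today=date(2013, 5, 27))
print(pick["version"])  # the week-old 15.1(2) fixes nothing we use, so 15.1(1)
```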

So what can be used for research? The Bug Toolkit provides the ability to launch a targeted query to identify known defects for the platform and software in question. The tool will return the details of each bug along with workarounds (where available) and its status (New, Open, Resolved, and so on). If a defect is resolved, then you can refer to the releases it has been resolved in and move to a new release that does not have the defect. If it is open and could impact the deployment, then you would need to seek technical assistance from TAC or your Network Optimization Service engineer. This research should also be supplemented by the individual release notes for a given platform or software release.

Security Vulnerabilities

If the software was researched as outlined above, you will already be aware of the software-based advisories, responses, and notices that have been publicly released. From there, you just need to monitor new releases of these to ensure that nothing new could impact your deployment. See the publicly released Security Advisories, Responses and Notices.

Configuration Best Practices

Now you develop the final configuration that will be submitted for testing or validation. There are many published best practice guides and documents available. The Design Zone provides some great content, and searching for "Catalyst 6500 Best Practices" will return a link to Best Practice Recommendations for the Catalyst 6500 Series Switch. Other sources of best practice configuration would be our Cisco Press books. Leveraging Baseline Templates allows for the creation of custom templates to capture customer best practices and monitor compliance.

Researching software requires knowledge of, and access to, the latest information possible. The Network Optimization Service (NOS) provides a full software strategy for your deployed devices, researching software and tracking compliance as a core part of the service. The NOS engineer is able to look at the complete pool of defects, including those that have not yet been fully documented for external consumption. Most importantly, the NOS engineer is able to arm your engineers with the knowledge, specific to your business, needed to be successful.

Check out my next post, where I'll cover the final strategies: change management and network management.


Cisco UCS in the world of open source computing

A few weeks ago, I was at OSCON (Open Source Conference) 2013 — a conference hosted by Cisco where we had speakers from IBM, Canonical, Red Hat and Rackspace, among others. I learned a lot, specifically about the evolution of Hadoop and the OpenStack project. As a follow-on, I collated different activities around Cisco UCS and OpenStack, which I will share in this blog.

Dr. Dan Frye, Vice President, Open Systems Development, IBM, and head of the IBM Linux Technology Center (LTC), gave the keynote address at the conference. It was nostalgic considering that I sat in the same aisle as some of the LTC team members in the IBM facility in Austin a few years ago. His talk included some fascinating historical anecdotes and three lessons IBM learned about open source software development:

  1. “Develop in the open” (Don’t try to contribute finished software products; heed the feedback)
  2. “Don’t reinvent the penguin” (“Scratch your own itch” – interesting phrase to explain the behavior of communities which want to solve the problems at hand and not those perceived to be problems by external entities)
  3. “Work with the process” (The community process is usually an agile methodology with no assumptions on roadmaps and delivery dates)

These lessons are invaluable in light of the open source projects such as OpenDaylight (no pun intended) and OpenStack that Cisco is now an integral part of.  According to Dr. Frye, these newer open source consortiums have the following characteristics:

  1. Larger number of initial members
  2. Quick starts
  3. Relatively large initial budgets
  4. Often require the commitment of a specified level of FTEs

Chris Wright from Red Hat expanded upon the principles and ethos of open source projects, including release early, release often, iterative development, and the culture of giving back. He contrasted the Linux kernel development project with the OpenStack project, showing the relative speed of the projects along with the number of developers and commits by release. He gave a fantastic overview of the various OpenStack component projects. He also identified two newly graduated projects in the Grizzly release, namely Ceilometer and Heat. I gave a talk on the requirements for the Ceilometer project, and you can find the slide deck on SlideShare.

After attending the conference, I looked for projects within Cisco that used OpenStack or contributed to it. Cisco is a major contributor to the soon-to-be-renamed Quantum networking project. The Cisco WebEx group is a poster child of the OpenStack community. In the true spirit of open source development, we now have a project underway that addresses the challenge of setting up OpenStack on Cisco UCS servers using UCS Manager, Cobbler, and Puppet. You can find a scripted configuration of Cisco UCS servers using the Python SDK on Cisco CDN. The scripts add the configuration of newly prepared servers to a build node, which then automates OpenStack deployment on the server system. Cisco UCS is an ideal platform for MaaS (Metal as a Service), a term used by Canonical. Since Cisco UCS exposes an open XML API, users can harness "Metal as a Service" with minimal software investment.
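The shape of that flow can be sketched roughly as follows. Every function and field name here is a hypothetical stand-in, not the actual Cisco CDN scripts, the UCS Python SDK, or the Cobbler API: discover server identities from UCS, then hand each one to the build node so the provisioner can install OpenStack on it.

```python
# Hypothetical sketch of the bare-metal-to-OpenStack flow described above.
# discover_servers() would really query UCS Manager (via the Python SDK or
# XML API); register_with_build_node() would really call the build node's
# provisioning system (Cobbler in the blog's setup).

def discover_servers():
    """Stand-in for querying UCS Manager for newly prepared blades."""
    return [{"name": "blade-1", "mac": "00:25:b5:00:00:01"},
            {"name": "blade-2", "mac": "00:25:b5:00:00:02"}]

def register_with_build_node(server, profile="openstack-compute"):
    """Stand-in for registering one system with the build node's provisioner.

    The MAC ties the physical blade to the PXE/install profile it will boot.
    """
    return {"system": server["name"], "mac": server["mac"], "profile": profile}

jobs = [register_with_build_node(s) for s in discover_servers()]
for job in jobs:
    print(job["system"], "->", job["profile"])
```

Because UCS service profiles define identity (MACs, WWNs, boot order) in software, the discovery step can return stable identities before an OS has ever touched the blade, which is what makes the "Metal as a Service" pattern practical.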

Cisco is exploring a partnership with Red Hat within the OpenStack context and is a platinum sponsor of the upcoming Red Hat Summit in Boston. Ram Appalaraju, Vice President of Marketing in Cisco's Data Center Group, will deliver a keynote address on June 12 at 9:30 AM. If you plan on attending, you may also want to check out the demonstration showing the automated installation of the Red Hat OpenStack distribution on Cisco UCS.


Setting Up an IPv6 Testing Plan

In my previous blog, I talked about building out a lab to help with IPv6 integration testing. It cannot be overstated how important it is to test any new feature that is going to be deployed on the network. This statement is true independent of the feature involved. In this case, we are talking about IPv6, but we could just as easily be talking about virtualization or BYOD.

So now that we have the lab build-up in progress, what's next?

First, you should communicate to your teams that you have a lab and that it is there to test new applications. Too often, IT people are told of an integration plan at the last minute or not at all.

Second, it is always important to have a plan and objective when going into any testing situation.  A test plan and objective gives you something that you can fall back on to either recreate the observed results, or ground you in the middle of a complicated configuration.

So what’s in an IPv6 integration test plan?

Just as the lab setup tries to mimic the “real world”, the test plan should be developed to create test cases and scenarios that will mimic situations that are common in the daily operations.  To help define what the “real world” looks like, refer to your audit of the current features that are in use, or conduct a new one to get that information.  This audit will help define what needs to be tested and how it should be tested.  The test cases should also be inclusive of the end system operating systems and applications that are part of the operational network.  The features and services (e.g. DHCPv6, DNS) that these devices use should also be part of the test plan.

A primary divergence from the testing that has been done for IPv4 networks is that the network now has two transports available for use: IPv4 and IPv6. The test plan has to take into account that at any given time, one or both transport protocols will be used. The test plan needs to cover situations where both transports are available as well as situations where only one transport is accessible. In the operational network, situations might develop where the only transport available is IPv6. It is important to understand how the network, end systems, services, and applications work in that scenario.
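A small helper in the spirit of these dual-transport test cases: classify each target address by family so a test run can be forced into IPv4-only, IPv6-only, or dual-stack mode and the behavior compared across the three. The addresses below are documentation prefixes; the helper is a sketch, not a full test harness.

```python
# Classify literal addresses by family so test cases can be filtered into
# IPv4-only, IPv6-only, or dual-stack scenarios before connectivity checks run.
import socket

def family_of(addr: str) -> str:
    """Return 'ipv6', 'ipv4', or 'invalid' for a literal address."""
    for fam, name in ((socket.AF_INET6, "ipv6"), (socket.AF_INET, "ipv4")):
        try:
            socket.inet_pton(fam, addr)  # raises OSError if addr isn't this family
            return name
        except OSError:
            pass
    return "invalid"

def filter_targets(targets, mode):
    """Keep only addresses usable for this test mode: 'ipv4', 'ipv6', or 'dual'."""
    allowed = {"ipv4", "ipv6"} if mode == "dual" else {mode}
    return [t for t in targets if family_of(t) in allowed]

targets = ["192.0.2.10", "2001:db8::10", "198.51.100.7"]
print(filter_targets(targets, "ipv6"))
```

The same filtering extends naturally to per-mode connectivity checks with `socket.create_connection` against each surviving target, which is where the "IPv6-only" failure modes described above tend to surface.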

It is also very important to build up test cases and scenarios that fall outside of normal operations.  These scenarios give insight into how the system will behave when operating outside of the envelope.

Test cases should be developed to find the performance peaks (e.g., packets per second, connections per second) and resource utilization characteristics (e.g., CPU and memory utilization) of the system. Lab testing can also be used to evaluate the performance of the individual components that are part of the design, to ensure that they perform the functions they are configured to do. Results found in the lab can be compared against equipment vendors' benchmarks.
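A minimal sketch of the rate-measurement idea: time a batch of operations and report a rate. The "connection" here is a trivial in-process stand-in; a real test would drive connection setup or packet generation against the device under test and record the rate at which behavior degrades.

```python
# Time a batch of operations and derive an operations-per-second figure,
# the shape of measurement behind "connections per second" style benchmarks.
import time

def measure_rate(operation, count=1000):
    """Run `operation` count times and return operations per second."""
    start = time.perf_counter()
    for _ in range(count):
        operation()
    elapsed = time.perf_counter() - start
    return count / elapsed if elapsed > 0 else float("inf")

def fake_connection():
    """Placeholder for one connection attempt to the system under test."""
    sum(range(10))

rate = measure_rate(fake_connection, count=1000)
print(f"{rate:.0f} ops/sec")
```

Recording these rates alongside CPU and memory readings from the device gives the baseline numbers the later paragraphs turn into operational thresholds.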

It is important to establish the performance envelopes during the testing phase.  These results will be used to establish some key performance indicators that can be used as a basis of comparison when the design is moved into the operational phases.  The performance benchmarks in the lab can be used to establish operational thresholds that can be monitored by the network management system when the design is deployed.

There are also resources available to help with the overall testing efforts.  The following sites can be used to help define some test cases and what features should be used to evaluate IPv6 feature functionality:

The IPv6 Ready and USGv6 sites will also help indicate which vendors’ products have been evaluated by these testing programs.  Keep in mind that these programs are a starting point for testing, and should not be used as a substitute for doing your own analysis and testing.
