Flash and Virtualization Give Customers More Reasons to Appreciate the Cisco/HDS Partnership

April 27, 2017

A few years back, enterprise storage depended primarily on spinning disks built into stand-alone arrays. Then the advent of flash storage changed the storage industry landscape in terms of key performance factors: flash storage arrays can transmit data at line rate even for random reads, a behavior many enterprises are observing in production for the first time.

Server virtualization is another game-changer that is rapidly approaching maximum adoption thresholds within most enterprises. It has also increased application density on the same physical HBA port, creating new high-performance requirements for server-to-storage connectivity.

To meet these needs and accelerate ongoing digital transformation, Cisco and HDS recently introduced new hardware systems and software that enable customers to achieve end-to-end 32G Fibre Channel connectivity from server to network to storage.

Specifically, HDS innovations include 32G support on the VSP F series and VSP G series along with other software enhancements, helping customers respond quickly to new business needs and scale without sacrificing performance. Cisco's contributions include upgrades to its SAN portfolio in the form of the highest-density 32G directors in the industry today, offering advanced flash storage support, embedded analytics, and improved server-to-storage connectivity via 32G HBAs for the UCS C-Series.

Why HDS + Cisco 32G FC Offering

Jointly, the Cisco and HDS 32G SAN solution sets new standards for storage networking, providing customers with choice as well as breakthrough levels of user experience and operational efficiency. This next-generation joint solution is the latest success from the decades-long HDS and Cisco partnership, which has continually delivered state-of-the-art innovations designed to accelerate the pace of innovation and bring real-time, high-performance storage solutions to customers.

Ideal for a wide variety of storage networking use cases, the Cisco MDS offers a combination of performance, non-stop operations, and multiprotocol flexibility that is complemented by the versatility and performance of the HDS storage portfolio. Together, the two companies help customers simplify IT and eliminate guesswork by providing best-in-class IT process automation and real ROI with complete ecosystem insights.

As demonstrated in the flash storage use case, the migration to 32G technology matches storage networks to the ultra-high demands of flash arrays. The ability to transmit data at 32G Fibre Channel line rate is essential for higher application performance and for supporting increased VM density on the same hardware, which leads to cost savings and efficiency. As explained in this HDS blog, having the right network in place for flash array deployments is essential to delivering the user experience today's enterprise customers have come to expect.

The joint solution will be available through HDS, which resells Cisco MDS under its storage brand. For more information about the complete Cisco storage networking portfolio, go to www.cisco.com/go/mds.


Cisco ACI and Fortinet Joint Solution Enables Business Agility at Axians AB

April 26, 2017

Introduction: In this blog, I am covering a customer success story featuring the Cisco ACI and Fortinet joint solution. Axians AB is a leading Swedish IT sourcing company, with multiple offices in the Nordics and employees across the globe. Axians offers hybrid IT services from datacenters and the public cloud to a diverse set of customers spanning private enterprise, government, and service providers.

Axians’ Requirements: With a rapidly growing customer base, Axians had a pressing need for a scalable and flexible infrastructure, one that would provide high utilization rates, simplified management and reduced costs. In particular for networking, Axians required an SDN architecture with programmability, accelerated application security delivery and time-to-market advantages. Further, Axians’ Service Provider business needed support for multi-tenancy and hybrid-cloud integration.

Axians Infrastructure

Axians uses a single ACI fabric that stretches across 40 kilometers, spanning two datacenters in Skondal and Haga. Customers are virtually separated and placed as tenants in the solution. UCS B-Series is the server platform, running vSphere 6.0 with Cisco AVS integration to VMware and support for local switching mode. Integration with the Axians internal platform, as well as with documentation systems and the legacy network, are other characteristics of the environment.

Current ACI Deployment: The Axians deployment environment is a scalable, flexible one with two spines and three leafs in each datacenter (Skondal and Haga), forming a single ACI fabric, with the APIC management cluster at the Haga location. Fortinet’s FortiGate enterprise firewall is deployed in the Axians production environment for L3 (Routed) traffic flows among EPGs.

Each customer in this design is set up as an independent tenant with distinct security functions using Fortinet's Virtual Domains (VDOMs). Depending on a customer's maturity, the setup can be implemented using an application-centric approach or a traditional EPG-to-VLAN mapping. FortiGate adds stateful inspection with advanced functions and better visibility beyond stateless ACLs.
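As a concrete flavor of the one-customer-one-tenant model, here is a minimal sketch of creating tenants through the APIC REST API (aaaLogin and fvTenant are the standard ACI REST objects; the controller address, credentials, and tenant names are hypothetical):

```python
import requests

APIC = "https://apic.example.com"  # hypothetical APIC address
LOGIN = {"aaaUser": {"attributes": {"name": "admin", "pwd": "secret"}}}

session = requests.Session()
# Authenticate; the APIC returns a token that is kept as a session cookie.
session.post(f"{APIC}/api/aaaLogin.json", json=LOGIN, verify=False)

def create_tenant(name: str) -> None:
    """Create one isolated ACI tenant per customer (1 customer : 1 tenant)."""
    payload = {"fvTenant": {"attributes": {"name": name}}}
    resp = session.post(f"{APIC}/api/mo/uni.json", json=payload, verify=False)
    resp.raise_for_status()

for customer in ("customer-a", "customer-b"):
    create_tenant(customer)
```

In practice the same pattern extends to per-tenant VRFs, bridge domains and EPGs, which is what makes the multi-tenant setup repeatable for a growing customer base.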

To provide effective security, the security and data elements across the deployment must be well-integrated and able to share intelligence. FortiGate is part of the Fortinet Security Fabric which provides broad, powerful and automated security capabilities that span the entire attack surface. Looking ahead, the possible integration of Cisco ACI with FortiGate and the Fortinet Security Fabric can deliver the following benefits to Axians:

  • Consistency and transparency across physical and virtual application workloads
  • Single-pane-of-glass management with full visibility into security policy enforcement
  • Predefined security policies deployed rapidly throughout the application deployment lifecycle
  • On-demand scale with automation
  • Broad, powerful and automated security via the Fortinet Security Fabric and ACI

“The integration of Cisco ACI and Fortinet can deliver accelerated software-defined security, enabling transparent security services insertion anywhere in the network through single-pane-of-glass management. The Fortinet Security Fabric’s integrated, collaborative and adaptive architecture can deliver security without compromise to address our security needs. The joint solution provides enhanced visibility and security, lower TCO and increased efficiency in service provisioning and network security segmentation,” said Erik Sohlman, Sr. Manager Infrastructure at Axians.

Buoyed by two years of production success with ACI, Axians is moving forward with advanced integration plans in their ACI environment. East-West traffic has become predominant, and security has become critical; perimeter security alone no longer suffices. Axians is looking to enhance their networking and security services to further automate and secure their datacenter infrastructure. With an ever-increasing customer base and demands for application agility, Axians is embarking on orchestration as a key initiative, and they are looking to Cisco ACI and Fortinet as a key foundation to meet these enterprise-wide objectives.


Related Links:

www.cisco.com/go/aci

www.cisco.com/go/dcecosystem

www.fortinet.com

https://fortinet.com/products/virtualized-security-products/fortigate-connectors.html

 


Cloud, Security and Analytics at Open Networking User Group

April 21, 2017

ONUG is here again, and next week I will be heading to San Francisco to participate in the Spring conference. This will be the fourth ONUG conference I have attended, and it has been fascinating to see the event evolve from defining the SD-WAN business requirements to expanding into Cloud, Security and Analytics – all new topics at the Spring event.

The common driver behind all of these technologies is digital transformation, and I believe it will continue to be the main force behind network innovation. Cloud applications, more mobile devices, and IoT are challenging existing network designs and forcing organizations to think of the network more holistically while ensuring their users and data are secure.

 

So what does a digital-ready network look like? While SDN has been about a software-driven approach to networking, it has also delivered automation capabilities that can free up the time IT spends managing and operating networks. Greater programmability allows organizations to tap into the intelligence within the network and create more business value. Lastly, for networks to become more proactive and resolve issues before they occur, analytics will be key to making networks smarter.

I hope to see these topics and more discussed at the conference. If you are attending the ONUG Spring conference, here is a quick overview of the activities Cisco will be participating in:

Day 1: Tuesday 25th April

12:20-1:20pm. The Lunch and Technology Showcase will give you an opportunity to see a demo of Cisco’s SD-WAN and Branch Virtualization solutions.

2:05-2:50pm. Open SD-WAN OSE Exchange Update. Steve Wood, Cisco Principal Architect, will give an update on the work he is doing to drive open standards into the SD-WAN working group.

Day 2: Wednesday 26th April

POC Theater. Kishan Ramaswamy, Senior Product Manager, will give an overview of Cisco’s Enterprise NFV solution with Intelligent WAN and how it enables the digital-ready branch.

12:30-1:30pm. The Lunch and Technology Showcase will give you an opportunity to see a demo of Cisco’s SD-WAN and Branch Virtualization solutions.

3:45-4:15pm. Jaeson Schultz, Talos Technical Architect, will participate in the panel on Security Threats and Vulnerabilities in a software-defined world.

4:35-5:30pm. Dave Ward, Cisco’s CTO of Engineering and Chief Architect, will participate in the ONUG Town Hall Meeting – The New Vendor/Buyer Role.

 

I look forward to seeing you there. For those of you who can’t attend in person, follow me on Twitter @ghodgaonkar for updates.


3rd Annual Interoperability Testing: Converging on a Standard

April 20, 2017

For the third year in a row, Cisco participated in the VXLAN BGP EVPN interoperability testing at the European Advanced Networking Test Center (EANTC).

The interoperability showcase and test results are featured in the EANTC white paper as a part of the MPLS + SDN + NFV World Congress. This year, a total of seven vendors participated in the VXLAN BGP EVPN interoperability testing, showing significant industry adoption year over year.

VXLAN BGP EVPN – 2 RFCs, 3 Drafts, 7 “Options”

As with many early-stage technologies, VXLAN BGP EVPN has seen many proposed implementation options and variations. The complete solution is assembled from several standards documents. For example, the VXLAN data-plane itself is covered in RFC 7348, while the overarching definition of the BGP EVPN control-plane is covered in RFC 7432.

In order to define the operational models for VXLAN BGP EVPN in more detail, additional drafts have been issued. While the EVPN-overlay draft (draft-ietf-bess-evpn-overlay) specifies the control-plane for Layer-2 operation, the routing functions are separated into the first-hop routing (draft-ietf-bess-evpn-inter-subnet-forwarding) and the IP subnet routing (draft-ietf-bess-evpn-prefix-advertisement) drafts. This amounts to 2 RFCs and 3 IETF drafts.

IETF drafts/RFCs

The various RFCs and drafts provide different implementation options. There are two to three options per draft, and paired with the number of guiding documents, the permutations become quite large. To provide additional context for the testing results, the sections below explain the different implementation options and what they entail.

Layer-2 Service Interface

In “draft-ietf-bess-evpn-overlay”, there are two modes of operation describing how EVPN Instances (EVIs) are configured and how the associated information is carried over the BGP control-plane. An EVI is a Virtual Private Network (VPN) in EVPN terminology.

The first option, called “VLAN-based”, is described as “single broadcast domain per EVI”. In this option, the tenant VLAN is mapped to a single EVI where the entire routing policy (the Route-Target, in BGP terminology) is applied. In this 1:1 mapping approach, the single broadcast domain is represented by a VLAN or a VNI respectively. The VLAN/VNI is associated with an EVI, which provides the most granular control for importing routes, specifically the MAC addresses.

The second option, termed “VLAN-aware”, is described as “multiple broadcast domains per EVI”. Here, multiple VLAN/VNI combinations are bundled into a single EVI. This bundling requires slightly less configuration, as multiple VLANs/VNIs need only a single Route-Target. However, once automated configuration is considered, this is no longer an advantage; in fact, the loss of granular, per-VNI control over importing and injecting routes becomes a disadvantage. In both the VLAN-based and VLAN-aware options, the data-plane is populated similarly: a single VNI identifies the local MAC-VRF. Similar to an IP-VRF for Layer-3 domains, the MAC-VRF defines the logical boundary, but for Layer-2 domains.

Cisco has followed the VLAN-based approach. This approach, coupled with auto-derivation of BGP EVPN Route-Targets and Route-Distinguishers, provides the most granular option for populating hardware tables appropriately.
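As a rough illustration of that granularity, here is a toy sketch of the 1:1 VLAN/VNI/EVI bookkeeping, assuming Route-Targets auto-derived in the ASN:VNI form; the ASN and VLAN/VNI values are illustrative:

```python
BGP_ASN = 65001  # illustrative 2-byte ASN

def vlan_based_evi(vlan_id: int, l2vni: int) -> dict:
    """VLAN-based service: one broadcast domain (VLAN/VNI) per EVI,
    with the Route-Target auto-derived as ASN:VNI."""
    return {
        "vlan": vlan_id,
        "l2vni": l2vni,
        "route_target": f"{BGP_ASN}:{l2vni}",  # per-VNI import/export policy
    }

# Each tenant VLAN maps 1:1 to its own EVI -- no bundling, no shared RT,
# so MAC routes can be imported with per-VNI granularity.
for vlan in (10, 20, 30):
    print(vlan_based_evi(vlan, 30000 + vlan))
```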

Today, given the obvious advantages, all vendors participating in the EANTC interoperability testing have converged to the VLAN-based approach, allowing broad interoperability testing between vendors.

First-Hop Gateway (Integrated Route and Bridge or IRB)

To combine the benefits of Layer-2 forwarding with inter-subnet forwarding in EVPN, a first-hop gateway option had to be defined, based on the approach of Integrated Routing and Bridging (IRB). Just as with Layer-2, there are different Layer-3 use-cases, and as a result two modes of operation were defined in “draft-ietf-bess-evpn-inter-subnet-forwarding”: Symmetric IRB and Asymmetric IRB.

Symmetric IRB / Asymmetric IRB

Asymmetric IRB follows a more traditional approach, where the first-hop gateway performs the routing operation only at the ingress. This results in a bridge-route-bridge operation, similar to the approach employed for Inter-VLAN routing. With routing at the ingress followed by bridging to the destination through the Layer-2 VNI (L2VNI), the device hosting the first-hop gateway function must hold all possible destination MAC/IP bindings. In other words, the MAC-IP bindings of all local as well as remote end-points must be known at the first-hop gateway device, which is an inherent scaling limitation.

Symmetric IRB uses a bridge-route-route-bridge approach: whenever a routing operation is performed at the ingress, a symmetric routing operation is performed at the egress. Routed traffic from ingress to egress is forwarded via a transit segment, defined on a per-VRF basis and termed the Layer-3 VNI or L3VNI. All routed traffic, in either direction, carries the L3VNI in the VXLAN header. This differs from the Asymmetric IRB scenario, where routed traffic carries the L2VNI associated with the destination subnet. The symmetry ensures that only the MAC/IP bindings of locally attached end-points are required at the gateway, reducing both the required software and hardware state.
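A toy model may make the contrast concrete. The sketch below, with made-up VNI values, shows which VNI an ingress VTEP stamps into the VXLAN header under each mode, and why only the asymmetric mode forces the ingress to know the destination subnet’s L2VNI (and hence remote MAC/IP bindings):

```python
L3VNI = {"tenant-red": 50000}  # one transit VNI per VRF (Symmetric IRB)
L2VNI = {"10.1.1.0/24": 30001, "10.1.2.0/24": 30002}  # per-subnet VNIs

def vni_for_routed_traffic(mode: str, vrf: str, dst_subnet: str) -> int:
    """Return the VNI the ingress VTEP stamps for a routed flow."""
    if mode == "symmetric":
        # bridge-route-route-bridge: every routed flow rides the VRF's L3VNI,
        # so the ingress only needs MAC/IP bindings for its local end-points.
        return L3VNI[vrf]
    if mode == "asymmetric":
        # bridge-route-bridge: the ingress routes straight into the destination
        # subnet's L2VNI, so it must hold remote MAC/IP bindings as well.
        return L2VNI[dst_subnet]
    raise ValueError(f"unknown IRB mode: {mode}")

print(vni_for_routed_traffic("symmetric", "tenant-red", "10.1.2.0/24"))   # 50000
print(vni_for_routed_traffic("asymmetric", "tenant-red", "10.1.2.0/24"))  # 30002
```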

With the aim for a distributed first-hop gateway approach paired with optimal scale and no hair-pinning, Cisco implemented the Symmetric IRB approach. Symmetric IRB provides the most scalable approach by not creating a pollution of MAC/IP adjacency information across all the devices performing first-hop gateway function.

While there was initially a wide adoption of Asymmetric IRB among the original authoring vendors other than Cisco, most of the newer entrants have implemented the more scalable Symmetric IRB approach.

IP Subnet Routing (IP-VRF)

Most vendors agreed on how IP subnet routing can be done using two primary options, plus a third that combines the two. In “draft-ietf-bess-evpn-prefix-advertisement”, all three options are defined with very creative names. The “interface-less” approach embeds the next-hop’s MAC address (RMAC) in the same BGP NLRI that carries the IP subnet prefix. With this approach, all information necessary for routing is embedded in a single BGP update.

This differs from the two “interface-full” approaches, where an additional BGP advertisement is created for every next-hop to provide that next-hop’s MAC address. In the numbered flavor of “interface-full”, the respective IP-to-MAC mapping must be sent for each next-hop IP address. Even with the unnumbered flavor, the additional BGP prefix is still required to serve the recursive look-up on the next-hop.

Cisco implemented the more natural “interface-less” mode, with no additional MAC route advertisement required on top of the IP subnet route. Cisco also adopted the unnumbered variation of the “interface-full” option. Cisco is the only vendor to implement both unnumbered approaches for IP subnet routing in BGP EVPN, with the ability to host both options at the same time, and can also route between “interface-less” and unnumbered “interface-full” implementations.
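To visualize the difference in control-plane chattiness, here is a toy sketch of the two advertisement shapes as plain Python dictionaries; the field names are descriptive rather than wire-format, and the addresses are made up:

```python
def interface_less(prefix: str, next_hop: str, rmac: str) -> list:
    """A single EVPN type-5 route: the next-hop's MAC rides along as the
    Router-MAC extended community, so one BGP update carries everything."""
    return [{"type": 5, "prefix": prefix, "next_hop": next_hop,
             "ext_community": {"router_mac": rmac}}]

def interface_full(prefix: str, next_hop: str, rmac: str) -> list:
    """A type-5 route plus a separate per-next-hop advertisement that
    supplies the IP-to-MAC mapping for the recursive lookup."""
    return [{"type": 5, "prefix": prefix, "next_hop": next_hop},
            {"type": 2, "ip": next_hop, "mac": rmac}]  # the extra update

args = ("192.0.2.0/24", "10.0.0.1", "aa:bb:cc:00:00:01")
print(len(interface_less(*args)), "update(s) for interface-less")   # 1
print(len(interface_full(*args)), "update(s) for interface-full")   # 2
```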

Convergence to a Common Implementation

As you can see, the various VXLAN BGP EVPN options presented in the IETF drafts can result in quite a few permutations. This, together with each vendor’s implementation choices, made interoperability quite difficult in the early days.

Over the last year or so, there has been a convergence to a specific set of options. This convergence has reduced the complexity for users. The original co-authoring vendors along with the late-comers are fairly aligned with the following common set of options for EVPN:

  • For Layer-2 Service Interface: VLAN-based
  • For First-Hop Routing: Symmetric-IRB
  • For IP Subnet Routing:  Interface-Less

The test results from the EANTC Interoperability Showcase of 2017 highlight the convergence to a common implementation set, mirroring the functionality Cisco currently supports and has been shipping for the last 2 years for VXLAN BGP EVPN.

Cisco continues to drive enhancements on top of its VXLAN BGP EVPN solution, always with industry standards and openness in mind. The BGP EVPN control-plane’s data-plane agnosticism has helped drive new approaches and data-planes like Segment Routing. The single control-plane approach across multiple domains makes intra-Data-Center use-cases very attractive, while also enabling seamless elasticity to other domains.

MPLS + SDN + NFV World Congress Public Multi-Vendor Interoperability Test 2017

Interoperability Showcase 2017 – Whitepaper

 

 


Cisco CloudCenter (formerly CliQr) Celebrates One-Year Anniversary of Acquisition by Cisco

April 20, 2017

As the saying goes, time flies when you’re having fun. And there probably isn’t a better phrase to sum up our experience since Cisco acquired CliQr and we became the Cisco CloudCenter team just one short year ago.

A Quick Look Back

Time has moved quickly—and so have we. While many acquisitions kick off with a slow-paced transition period, we moved aggressively and were added to the Cisco Price List in just four short weeks—what we’re told was record time. We also launched two important new product releases: CloudCenter 4.7, with deeper Cisco ACI integration and simplified networking; and, earlier this month, CloudCenter 4.8, delivering support for ingesting brownfield workloads into CloudCenter management – a really important addition. Together, these releases deliver on our promise of putting the right workload in the right place at the right time.

It’s Not Just About Product

The former CliQr team and Cisco worked hard to enable the field force and global partners to sell, deploy, and support CloudCenter. Cisco CloudCenter has seen customer wins from around the globe and across every industry, from government and manufacturing to healthcare and insurance, demonstrating impressive business growth along with the satisfaction of seeing customers capitalize on the real power of cloud computing.

While it hasn’t always been easy—especially in the early days of the cloud—we always held to our convictions of how the cloud delivers value and what is needed to manage what we knew would ultimately be hybrid cloud environments.

We couldn’t have done this without an amazing team, and we’re proud that the industry has taken notice as well. Over the years, CliQr and its CloudCenter solution have been recognized by Gartner as Cool Vendor of the Year for Cloud Management, received Best of Show in the Cloud Platform category at Interop Tokyo, and won the Software and Information Industry Association CODiE Award in the Best Cloud Management category.

Recognition like this means more to us than a trophy in a display case. Rather, it is an indicator of our passion for making the cloud, in all of its forms, the new way for businesses to exploit and optimize information technology to run their operations and compete.

It’s Only Going to Get Better

As you can see, it’s been an amazing 365 days, and we can’t thank our customers, partners, and employees enough for making this a tremendous first year at Cisco. We’re proud of where we’ve been—and the fun is just getting started. The cloud is at an exciting inflection point: it is both relatively new and being broadly adopted by businesses, all at the same time. This combination means the pace will only increase, and a lot of change will come with it. It’s in this environment that we can help businesses the most.

Learn more about Cisco CloudCenter.

 


Cisco-Docker Alliance Update

– April 19, 2017 – 0 Comments

Cisco and Docker continue to expand their partnership to make it easier for enterprise customers to realize the benefits of containers. Today we announced a new application modernization program with Docker and new Cisco advanced services.

This week Cisco is at DockerCon in Austin, as we continue to build on the strategic alliance with Docker we announced last month. Our engineering teams have been working together for many months, and in the announcement blog we explained the first solutions Cisco, Docker and our partners are delivering together. Now the marketing, sales and services dimensions of our relationship are kicking into high gear. Customers, partners and the marketplace are responding very positively to our alliance.

Three weeks ago Docker joined us for Cisco’s Partner Connection Week and Data Center PSS events in Miami. We had the opportunity to explain our strategy and products to Cisco partners and specialists and to hear firsthand how enterprises are adopting containers. Many IT organizations, particularly in larger enterprises, are adopting containers to modernize their traditional applications, so they can realize the benefits and efficiencies of containerization.

Turnkey Acceleration for Customers to Containerize

One of the primary goals of the Cisco-Docker alliance is to make it easier, faster and safer for organizations to adopt containers. That’s why Cisco announced today that we are joining the Docker Modernize Traditional Apps (MTA) program. This turnkey program consists of consulting services and products that enable enterprise IT organizations to start reaping the benefits of app modernization in less than 30 days.

Today Cisco also announced the expansion of our Cisco Cloud Professional Services to include Container Networking. These services will help our customers bridge the skillset gap, so they can adopt containers easily in their environment and implement the best-practices that we have developed working with many other customers. They will include software and hardware as well as professional services.

You can read this blog for more details.

Providing the Right Ingredients

Cisco UCS is infrastructure as code (IaC). It is a fully programmable system with a unified API, so you can easily define the desired state of the infrastructure and what you want to do with it. That’s why the Cisco Validated Design for Cisco UCS with Docker Enterprise Edition (EE) is ideally suited for DevOps. As DevOps.com noted in their article describing the alliance:

This strategic partnership brings even more opportunity for these enterprises by running Dockerized applications on a validated Cisco UCS infrastructure that is optimized for security, availability, performance and scale.1

Analyst firm IDC also noted the advantages of Cisco UCS with Docker EE in their recent report analyzing the alliance:

The Cisco UCS system is differentiated by its I/O capabilities, with an API-enabled system around I/O and virtual I/O devices. For the Docker collaboration, Cisco brings these I/O capabilities to containers through its UCS framework of virtual I/O devices, network isolation, policies, templates, and service profiles.2
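To make the “fully programmable with a unified API” point concrete, here is a minimal sketch against the UCS Manager XML API’s /nuova endpoint, using its aaaLogin and configResolveDn methods; the hostname and credentials are hypothetical and error handling is omitted:

```python
import requests

UCSM = "https://ucsm.example.com/nuova"  # hypothetical UCS Manager endpoint

# Log in and pull the session cookie out of the XML response attribute.
login = requests.post(
    UCSM, data='<aaaLogin inName="admin" inPassword="secret" />', verify=False
)
cookie = login.text.split('outCookie="')[1].split('"')[0]

# Resolve an object by distinguished name -- here the top-level "sys" object.
query = requests.post(
    UCSM,
    data=f'<configResolveDn cookie="{cookie}" dn="sys" inHierarchical="false" />',
    verify=False,
)
print(query.text[:200])
```

The same desired-state pattern is what lets service profiles and templates describe the infrastructure a Docker EE cluster runs on.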

We are also working with NetApp to include Docker Datacenter support on converged infrastructure. Last month we released a Cisco Validated Design for FlexPod with Docker Enterprise Edition, and we will continue to enhance this solution.

You can learn more about both CVDs in this Docker blog.

Making Container Adoption Easier

Containers are a very important emerging technology. Cisco, Docker and our ecosystem partners are committed to making it easier, safer and faster for you to adopt them in your environment. We are just getting started, so stay tuned for more to come in the near future.

For additional information on Cisco-Docker solutions:

 

Sources:

  1. “Cisco and Docker team to modernize cloud and data center application environments”, DevOps.com, March 2, 2017
  2. “Cisco and Docker Announce Strategic Alliance”, IDC #lcUS41374617, March 15, 2017


Why the Cultural Side of DevOps Matters to Networks Facing Containerization

April 19, 2017

On Feb 3, Lori guest-blogged about how “programmability is the future of NetOps”. Lori is a frequent guest blogger, and you can check out her other blogs here. The Cisco Insieme Business Unit and F5 have been innovating on the DevOps front for several years now, and in this blog Lori takes us through recent automation and orchestration trends.

Containers. Even though the technology itself has existed for more years than many IT professionals have been out of college, you can hardly venture out onto the Internet today without seeing an ad, article, or tweet about the latest technology darling.

They are one answer to the increasingly frustrating challenge of portability faced by organizations adopting a multi-cloud strategy, which, according to every survey in existence, means most organizations. Containers also provide an almost perfect complement to development organizations’ adoption of Agile and DevOps, as well as their growing affinity for microservices-based architectures.

But containers alone are little more than a deployment packaging strategy. The secret sauce that makes containers so highly valued requires automation and orchestration of both the containers themselves and their supporting infrastructure. Load balancers, application routers, registries, and container orchestration are all requirements for achieving simplicity at scale, at the speeds required by today’s developers and businesses.

Growth is inevitable, and the speed at which container-based systems grow once they’ve become ensconced in production is astonishing. Per last year’s Datadog HQ survey on the technology, “Docker adopters approximately quintuple the average number of running containers they have in production between their first and tenth month of usage. This phenomenal internal-usage growth rate is quite linear, and shows no signs of tapering off after the tenth month.”

Imagine managing that kind of growth in production manually, without a matching increase in headcount. It boggles the mind.

Even if you can imagine it, consider that the same survey found that “at companies that adopt Docker, containers have an average lifespan of 2.5 days, while across all companies, traditional and cloud-based VMs have an average lifespan of almost 15 days.” So managing “more” means not only more in number, but more frequent change as well. Such volatility is mind-boggling in a manual world, where configurations and networks must be updated by people every time the application or its composition changes.

NetOps never envisioned such a dynamic environment in production, where the bulk of business “gets done” and reliability remains king. Balancing reliability with change has always been a difficult task, made exponentially more troublesome with the advent of micro-environments with micro-lifecycles.

Therefore, it’s imperative for NetOps not just to adopt but to embrace with open arms not only the technical aspects of DevOps (automation and monitoring) but its cultural aspects as well. Communication becomes critical across the traditionally siloed domains of network and application operations to effectively manage the transition from manual to automated methods of adjusting to the higher frequency of change inherent in container-based applications. You can’t automate what isn’t known, and the only way to know the intimate details of the apps NetOps is tasked with delivering is to talk to the people who designed and developed them.

No matter how heavy the drum beats on those siloes, it’s unrealistic to expect those deeply entrenched walls to break down amid digital transformation. Developers aren’t going to be programmatically manipulating production network devices, nor are network engineers going to be digging around in developers’ code. But communication must be open between them, as the rate of change increases and puts pressure on both groups to successfully deploy new technologies in support of the faster pace of business and the digital economy. The walls must become at least transparent, so the two groups can at least see the grimaces of pain when something breaks. Through that communication and shared experience comes empathy; empathy that’s necessary to bring both groups together in a meaningful way to design and architect full-stack solutions that incorporate both network and app services as well as the entire application architecture.

Scripts are easy. Shared responsibility and effort are not. But it is the latter that is becoming an imperative for designing networks that can support emerging architectures at the speed (of change) required. The technical side of DevOps is not unfamiliar territory for NetOps, though its skills will no doubt require sharpening in the coming sea storm of change. The cultural side of DevOps is less familiar (and more uncomfortable – trust me, I get that, really) for all involved. But only by collaborating and communicating can these highly automated, dynamic, and rapidly cycling technologies succeed and allow businesses to reap the rewards of adoption.

Related Link: http://www.cisco.com/c/dam/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/f5-devops-innovation-wp.pdf

 

 


Introducing PnP Connect

April 18, 2017

What is Cisco Network Plug & Play?

Cisco Network Plug & Play (PnP) is a secure and scalable solution for simple day-zero provisioning across all Cisco Enterprise platforms (routers, switches and wireless access points). The PnP application runs on top of Cisco Enterprise SDN Controller – APIC-EM.

What is PnP Connect?

PnP Connect automates the entire day-zero experience, from device procurement to provisioning. It is a new service that acts as a discovery mechanism for a network device to find its controller. PnP Connect redirects a network device to its controller (APIC-EM), eliminating the need for DHCP- or DNS-based discovery. PnP Connect also allows configuration provisioning directly, without APIC-EM (currently in beta). By simply assigning a Smart Account while ordering PnP-eligible devices in Cisco Commerce, the devices automatically populate in the PnP Connect portal.

PnP Connect allows flexibility of implementation: customers can choose between on-premises and cloud-based day-zero provisioning.

Why is PnP Connect important?

Cisco PnP Connect addresses a problem that is top of mind for most CEOs, as 18% of a typical product’s cost goes into day-zero activities. In fact, 57% of CEOs worry that their IT strategy does not support business growth, but PnP Connect’s cost savings can help put them at ease. The complete PnP solution can potentially reduce operational costs by up to 70% by (1) eliminating pre-staging, (2) minimizing manual configuration errors, and (3) removing the need for a specialized technical installer at the end site.

For reference, the typical deployment cost for an access switch averages $938. Extend that across a large network, and the cost savings promised by the PnP solution become material to the bottom line!
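As a back-of-the-envelope sketch using the post’s own numbers (the fleet size below is hypothetical):

```python
COST_PER_SWITCH = 938  # typical day-zero deployment cost per access switch ($)
MAX_REDUCTION = 0.70   # "up to 70%" operational-cost reduction cited above

def potential_pnp_savings(num_switches: int) -> float:
    """Upper-bound day-zero savings across a fleet of access switches."""
    return num_switches * COST_PER_SWITCH * MAX_REDUCTION

# For a hypothetical 1,000-switch deployment:
print(f"${potential_pnp_savings(1000):,.0f}")  # $656,600
```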

Important Links:

Learn about Cisco Plug and Play Connect

Manage your Cisco Plug and Play Connect

Release Notes for Cisco PnP Connect

Solution Guide

Release Notes for Cisco Network Plug and Play

Configuration Guide for Cisco Network Plug and Play on Cisco APIC-EM


Is Facebook’s new storage platform performance hungry?

April 17, 2017

I promised a follow-up in my recent blog about Facebook’s new Bryce Canyon storage platform. While the title of this blog may sound like it questions the new platform’s capabilities, it, like the previous blog, aims to help you understand why the platform is perfectly suited to Facebook’s unique needs.

The first blog focused on storage attributes, which are near and dear to my heart. The other thing I noticed about Facebook’s new storage platform was the use of two Mono Lake micro-servers. This little micro-server is pretty cool because Facebook uses it everywhere to maximize component re-usability. When managing infrastructure at massive scale, component commonality can lower equipment costs and increase operational efficiency.

But while the micro-server is cool, it also has tradeoffs. It’s called “micro” for a reason: it’s a small board with limited real estate, offering only a single-socket configuration for a low-power system-on-chip (SoC) processor, up to four DIMM slots, and two M.2 SSD boot drives.

The Mono Lake micro-server is designed to accept a purpose-built Intel Xeon D-1500 SoC processor with options up to 16 cores and support for up to 128GB of DDR4 memory. This line of processors was jointly designed with Intel to reduce power consumption across the massive-scale distributed infrastructure serving the Facebook web application.

As with most web applications, data at Facebook is tiered across different types of racks depending on how often its users need to access that data, i.e., hot, warm, or cold. Timothy Prickett Morgan details this quite well in a blog on The Next Platform. These racks are workload-optimized for each specific tier, utilizing uniquely configured platforms like Bryce Canyon for storage services or the new Tioga Pass server for compute services. The Bryce Canyon hardware specification does a good job of showing the various updated rack configurations and their intended deployment scenarios.

Facebook’s data tiering is far more sophisticated than what you see in general-purpose data centers, even if you host your own web front end. But the workload right-sizing concept is still important, especially if you have data-intensive workloads that need higher core-to-spindle ratios, as in hot-to-warm data store environments, or, at the other end of the spectrum, when data is cooling down and you want to retain everything cost-effectively and scale incrementally. Regardless of data temperature, picking the right infrastructure for the right job can deliver immediate ROI as well as tremendous long-term value.

Now compare Bryce Canyon to the Cisco UCS S-Series. The S3260 is similarly a modular system architecture offering hybrid storage support in single or dual server-node configurations. With dual server nodes, a wide range of processors combined with a large footprint of DDR4 memory enables workload right-sizing for the most demanding data-intensive workloads. In a single server-node configuration, you can transform the S3260 with expander card options, changing its functionality for increased storage, I/O connectivity, or application acceleration with flash memory. These capabilities, combined with the advanced compute and storage automation provided by UCS Manager, make the S3260 the most versatile storage server in the industry, hands down.

Last November we released new M4 server nodes that support Intel’s Xeon E5-2600 v4 processors. These processors perform well across a wide range of workloads, striking the right balance between performance and power consumption. The S3260 M4 server nodes support processor options ranging from 8 to 18 cores and up to 512GB of DDR4 memory.

A major difference between our S3260 server nodes and Mono Lake is form factor. Our dual-socket architecture combined with the current Intel Xeon E5-2600 v4 processors delivers a total of 36 cores per server node, satisfying the most performance-hungry use cases. This creates a roughly 1.28:1 core-to-spindle ratio per server node that is attractive for performance-hungry workloads like our Big Data and Analytics solutions. As a proof point, check out this great blog from my colleague Rex Backman on recent S3260 big data performance benchmarks validated independently by certified TPC auditors.
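Here is the quick arithmetic behind that ratio; the even split of the S3260’s 56 top-load drive bays across the two server nodes is my assumption:

```python
CORES_PER_SOCKET = 18    # top-end Xeon E5-2600 v4 option on the M4 node
SOCKETS_PER_NODE = 2
DRIVES_PER_CHASSIS = 56  # assumption: top-load bays shared by two nodes
SERVER_NODES = 2

cores_per_node = CORES_PER_SOCKET * SOCKETS_PER_NODE    # 36
spindles_per_node = DRIVES_PER_CHASSIS // SERVER_NODES  # 28
ratio = cores_per_node / spindles_per_node
print(f"{cores_per_node}/{spindles_per_node} = {ratio:.4f} cores per spindle")
# 36/28 = 1.2857 cores per spindle, i.e. the ~1.28:1 ratio quoted above
```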

You can also maximize capacity by scaling down to a single server-node configuration with 16 cores. This use case adds a disk expander card to maximize storage capacity, further lowering your $/GB. This flexibility is great for the warm storage and active archive use-cases we regularly see from our software-defined storage and data protection solution partners.

A great example is a recent customer engagement with our data protection partner Veeam. Together we helped the Texas Medical Association reduce its backup storage costs by $600K while shortening backup windows from 12–24 hours to 3–5 hours by eliminating costly NAS appliances. This disaggregation of legacy storage systems is becoming more commonplace as best-of-breed storage software comes together with industry-leading data center infrastructure like the Cisco Unified Computing System, delivering more value at much lower cost.

While Facebook, founded as a digital business, may be ahead of the game, the rest of us are going through a major digital transformation and seeking insights from our own data. To get there, we need to start the journey with products that can be adopted easily, without throwing the baby out with the bathwater.

If you are interested in learning how the Cisco Unified Computing System can help with your data storage needs, here’s a great brochure to get you started. Also be sure to download this free white paper from Moor Insights & Strategy to learn how active data is creating new business insights.

If you liked this blog, please stay tuned for more on data center storage solutions at Cisco, and be sure to follow me on Twitter.


Cisco HyperFlex vs. Public Cloud

April 17, 2017

Late last year, I compared Cisco’s S3260 Storage Server to Amazon’s S3 service. The results shocked a lot of people and spurred a lot of comments on the blog.

It’s worth repeating from the last conversation on this topic that there is a long list of pros and cons between the on-prem and public cloud approach that will influence your decision. Depending on your individual situation there are factors that will favor one option over the other. Here we’re just trying to get a clear-eyed view of basic cost elements.

If you are considering replacing, upgrading, or buying a new computing solution, you’ve probably come across hyperconverged infrastructure as well as public cloud offerings and are wondering which to choose. HyperFlex Systems (HX-Series) is Cisco’s hyperconverged infrastructure platform.

I wanted to find out what a large, highly virtualized server environment would cost on either HyperFlex or Amazon’s EC2 service over three years. The answer: HyperFlex can save you 50% or more. Now that I have your attention, keep reading!

Configuring the Public Cloud

Amazon offers 57 different instance types. After a discussion with some system architects and consulting systems engineers, the m4.large instance type was chosen as a good general-purpose VM on which to base this comparison. Amazon describes it as offering “a high level of consistent processing performance on a low-cost platform.” It has 2 vCPUs and 8GiB of RAM. Networking performance also differs by instance type; that will not factor into the comparison here, but you should factor it into your choice.

For each EC2 instance, we need to add EBS storage, which can be SSD (gp2 or io1) or HDD (st1). We’ll keep our storage requirement modest at 100GB of gp2 (general purpose) storage per VM.

Configuring HyperFlex

Cisco HX240 All Flash x8

When sizing the number of VMs that can be placed on an HX node, there are three items to consider: CPU, memory, and disk space. You also have to take into account the overhead of the virtualization (VMware vSphere) and HyperFlex Data Platform software, and you should consider storage performance as well. The cluster I describe below is capable of 100 IOPS per VM, but I don’t include the cost of provisioned IOPS on the AWS side.

Using the Intel® Xeon® E5-2650 v4 CPU and 24x 32GB DIMMs, I have 171 vCPUs and 696GiB of memory available per node after accounting for that overhead. After adding in the memory overhead for each VM, a node supports 85 VMs before maxing out its memory and CPU resources. This is without any overprovisioning of memory or CPU.

For HX storage, I’m using HX240c M4 All Flash Nodes, each with 10x 3.8TB SSD drives. For resiliency, I set the replication factor to three, which allows for two simultaneous node failures in the cluster of eight nodes. Eight nodes provide ~85.68TiB of usable space. For the purposes of this blog, I’m assuming no capacity benefits from compression or deduplication, even though customers are seeing an average 48% increase in effective vs. usable capacity.

2x B200 M4 Servers

HyperFlex, unlike other hyperconverged platforms, has the unique ability to flexibly scale compute and capacity independently. This lets you better match your resource needs with the right server type. My eight HX240c M4 All Flash Nodes support 680 VMs before exhausting their memory and CPU. Since I’m memory/CPU constrained in this example, I added two B200 M4 Blade Servers to balance out the compute and storage resources. On my 10-node mixed HyperFlex cluster, I can support 850 VMs with 2 vCPUs, 8GiB RAM, and 100GB of storage each.
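For readers who want to check the sizing, here is the cluster math from the preceding paragraphs as a quick sanity sketch (the per-node VM count and drive configuration are the figures derived above):

```python
VMS_PER_NODE = 85    # memory/CPU bound, derived above, no overprovisioning
CONVERGED_NODES = 8  # HX240c M4 All Flash nodes (compute + storage)
BLADE_NODES = 2      # B200 M4 blades add compute-only capacity

total_vms = (CONVERGED_NODES + BLADE_NODES) * VMS_PER_NODE
print(total_vms, "VMs")  # 850

# Storage check: 8 nodes x 10 x 3.8 TB SSD at replication factor 3 leaves
# roughly 101 TB before platform overhead, near the ~85.68 TiB usable quoted.
raw_tb = CONVERGED_NODES * 10 * 3.8
print(f"{raw_tb / 3:.1f} TB after RF3")         # 101.3
print(f"{total_vms * 0.1:.1f} TB provisioned")  # 85.0 at 100 GB per VM
```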

The Results

HX vs AWS VM Cost Chart

So what is a VM with two vCPUs, 8GiB RAM, and 100GB of storage going to cost me per month? $40 for HyperFlex, $83 for Amazon paid all upfront, or $133 on-demand. Amazon has several pricing models: on-demand, and one- and three-year commitments with partial or all-upfront payment.1 The most expensive is on-demand, but it offers the most flexibility; the least expensive is a three-year commitment paid all upfront. Let’s not forget data-transfer-out charges to the internet. I have omitted them in the spirit of a fair comparison, so this comparison probably understates the public cloud costs, perhaps dramatically for some application scenarios. For example, if you were to transfer as little as 1TB per month per VM, you could add $2,817,342 to your total bill.

When in doubt, I factored things conservatively in favor of the public cloud. For example, I used VMware vSphere 6 Enterprise Plus instead of the less expensive Standard edition, which also raises the support cost. I applied IT labor costs to the on-prem side of the equation for deployment and ongoing management of the physical servers. I included the cost of Fabric Interconnects, although if you are already a UCS customer, you can add HyperFlex to your existing Fabric Interconnects. Lastly, if you already have a data center, would adding a HyperFlex cluster really increase your overhead significantly beyond the incremental power and cooling?

So, turning the crank with all of those assumptions, what is the result? Over three years, HyperFlex will save you 51%–70% over AWS when running 850 VMs with two vCPUs, 8GiB RAM, and 100GB of storage each. The math is sketched below.
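A minimal sketch of that arithmetic, using the per-VM monthly prices quoted above:

```python
VMS, MONTHS = 850, 36
monthly_per_vm = {"HyperFlex": 40, "AWS 3yr all upfront": 83, "AWS on-demand": 133}

totals = {name: price * VMS * MONTHS for name, price in monthly_per_vm.items()}
hx = totals["HyperFlex"]  # $1,224,000 over three years

for name, total in totals.items():
    if name != "HyperFlex":
        print(f"vs {name}: ${total:,} total, HyperFlex saves "
              f"{(1 - hx / total) * 100:.1f}%")
# vs AWS 3yr all upfront: $2,539,800 total, HyperFlex saves 51.8%
# vs AWS on-demand: $4,069,800 total, HyperFlex saves 69.9%
```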

I’ve broken down the cost assumptions below so you can check that I’m not cherry-picking the cost elements. While the percentage of savings (and the number of VMs per HX cluster) will differ, using three-year all-upfront pricing, the t2.small, t2.medium, t2.large, m3.medium, m3.large, m3.xlarge, m3.2xlarge, m4.xlarge, m4.2xlarge, m4.4xlarge, m4.10xlarge, and m4.16xlarge instance types will all cost more than our example HyperFlex cluster.

HX Solution Cost

AWS EC2 & EBS Cost Table

Final Thoughts

It was correctly pointed out in the responses to my S3260 blog that public cloud provides levels of immediacy and multi-site resiliency (if you pay for it) that are difficult to replicate on-prem. On the flip side, some customers can’t put certain data types in the cloud, or might have application latency requirements that the cloud can’t meet. How these factors translate into cost will clearly vary from customer to customer, depending on the applications and business models in play. In terms of resiliency, the on-prem costs modeled here cover everything short of a complete site failure (think hurricane): this cluster can tolerate the failure of two storage nodes and the loss of a Fabric Interconnect. Public cloud can buy you multi-site peace of mind with no additional hassles, and that might well be worth the cost premium to you. But it’s pretty clear that you can save a bundle keeping things on-prem. And if you have a ballpark understanding of your growth needs, you can use a “cloud in a can” solution like HyperFlex to get very close to the public cloud in terms of ease of scaling and speed to deploy.

What if you don’t have a large VM environment, but a small one? HyperFlex may still be right for you. A three-node cluster supporting 255 VMs costs 37% less than the equivalent AWS solution paid all upfront.

I would encourage you to reach out to your Cisco account team or partner to see if a HyperFlex solution might be a good fit for your environment.

 

Additional Resources

It’s not as if this blog wasn’t long enough already, but I thought I would point to a few other items for your consideration.

Chalon Duncan has a great blog that addresses the strategy of public cloud, private cloud, or both.

Kaustubh Das’ blog on the performance benefits of the HyperFlex All Flash solution, as documented by ESG, is worth a review.

Lastly, there is a great TechWiseTV episode on the HyperFlex All Flash announcement.

 

1 There are also spot instances and dedicated hosts; neither was germane to this analysis. Spot pricing can fluctuate every five minutes, so it can’t be readily modeled. Dedicated hosts are even more expensive than on-demand and reduce your flexibility.
