Cisco UCS Delivers Industry-Leading SPECjbb2013 Results

On April 17, 2013, Cisco announced SPECjbb2013 results, with the Cisco UCS C220 M3 Rack Server delivering top SPECjbb2013 MultiJVM 2-socket x86 performance.

Cisco’s results on the SPECjbb®2013 benchmark—41,954 maximum Java operations (max-jOPS) and 16,545 critical Java operations (critical-jOPS)—demonstrate that the Cisco UCS® C220 M3 Rack Server and Oracle Java Standard Edition (SE) 7u11 can provide an optimized platform for Java Virtual Machines (JVMs) and deliver accelerated response to throughput-intensive Java applications.

Exercising new Java SE 7 features, the SPECjbb2013 benchmark stresses the CPU processing, memory speed, and chip set performance capabilities of the underlying platform. The result consists of two metrics: the full capacity throughput (max-jOPS) and the critical throughput (critical-jOPS) under service-level agreements (SLAs), ranging from 10 to 500 milliseconds (ms) from request issuance to receipt of a response indicating operation completion.
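For intuition only, here is a minimal Python sketch of how a throughput level that still meets a given response-time SLA can be read off a load curve. This is not the official SPEC calculation, and the sample data points are invented purely for illustration.

    # Illustrative only: find the highest throughput whose measured response
    # time still meets an SLA, given (throughput_jops, response_ms) samples.
    # The samples below are invented; published SPECjbb2013 results derive
    # max-jOPS and critical-jOPS from the benchmark's own controller.

    def max_jops_under_sla(samples, sla_ms):
        """Largest throughput whose response time is within sla_ms."""
        ok = [jops for jops, resp_ms in samples if resp_ms <= sla_ms]
        return max(ok) if ok else 0

    load_curve = [          # hypothetical load curve
        (10_000, 8),
        (20_000, 40),
        (30_000, 120),
        (40_000, 450),
        (42_000, 900),      # past this point no SLA window is met
    ]

    for sla in (10, 100, 500):
        print(f"SLA {sla:>3} ms -> {max_jops_under_sla(load_curve, sla):,} jOPS")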

To compete in the SPECjbb2013 MultiJVM category, the tested configuration consisted of a controller and two groups, each comprising a transaction injector and a back-end, all running across multiple JVM instances within a single operating system image. The JVM instances ran on a Cisco UCS C220 M3 Rack Server. Two 2.90-GHz, 8-core Intel® Xeon® processor E5-2690 CPUs powered the server, which ran the Red Hat Enterprise Linux 6.2 operating system and Java HotSpot™ 64-Bit Server Virtual Machine Version 1.7.0_11. The Cisco UCS C220 M3 Rack Server and Oracle Java SE 7u11 delivered fast response times and high transaction throughput on the SPECjbb2013 benchmark: the system supported 41,954 max-jOPS and 16,545 critical-jOPS, representing the best critical-jOPS 2-socket x86 result in the MultiJVM category.

The “Cisco UCS C220 M3 Rack Server Delivers Industry-Leading SPECjbb2013 Results” Performance Brief provides additional benchmark configuration details. Official benchmark certification is available on the SPECjbb2013 official website at http://www.spec.org/jbb2013/results/res2013q2/jbb2013-20130403-00028.html.

 
Cisco UCS SPECjbb2013 benchmark results show that the Cisco UCS C220 M3 Rack Server delivers excellent scalability to JVMs and applications, with more throughput within specified time frames than solutions from other vendors.

Cisco UCS delivers the scalability needed for large-scale Java application deployments. The dramatic reduction in the number of physical components results in a system that makes effective use of limited space, power, and cooling by deploying less infrastructure to perform more work. Cisco UCS C220 M3 Rack Servers can operate in standalone deployments or be managed as part of the Cisco Unified Computing System for increased IT operational efficiency. For additional information on Cisco UCS and Cisco UCS solutions, please visit www.cisco.com/go/ucs.

 Girish Kulkarni

Sr. Product Marketing Manager

Unified Computing System 

gikulkar@cisco.com

SPEC and SPECjbb are registered trademarks of Standard Performance Evaluation Corporation. The performance results described are derived from detailed benchmark results available at http://www.spec.org/ as of April 22, 2013.



Going to Interop? Keynote Preview With This Fun Video

If you’re headed to Interop Las Vegas, there are a few things you won’t want to miss. Much like the keynote (video above), we have a lot of fun and useful things planned.

When you walk into the exhibition hall, you’ll see Cisco’s booth (1327)*, front and center.  This is our home-base and where a lot of activity will happen.  Stop by to check out live demos, to get your questions answered, to hear our technology experts give quick theater presentations, and to check out our NOC.  If you use Cisco gear (and I’ll assume that you do if you’re reading this blog) you’ll want to stop by to get the latest updates.

We’re trying something new – we’ll have a working NOC in our booth.  This NOC is based on user research we’ve done over the last few months to find out what makes various users tick.  For example, the Network Managers interviewed indicated their biggest focus is on user experience and reducing deployment costs. Others said it was maintaining security and compliance, while others said it was getting projects done on time and under budget.  While some of these may seem amorphous, we’ve put together solutions that will be demonstrated in the NOC.  The goal is to help network managers and other networkers be more successful in their jobs.

Download Cisco’s Enterprise App

We’ll also have architecture talks and tours of the NOC given by Jimmy Ray Purser and Rob Boyd.  These are our TechWise TV stars there for you to chat with.  They’ll also be filming a few short interviews that we’ll be posting here later in the week for those not at Interop to check out.

If you’re planning your schedule for Interop, you’ll want to check out a few more of the details on the Cisco Interop footprint – our speaking sessions, sponsored sessions, and customer events.  You can also get daily updates using our smartphone app, QR code to the right.

More about that Opening Keynote:

Your Enterprise Network: Getting You Where You Want to Go [Wed morning, 8:30am]

Opening Keynote delivered by Rob Soderbery.

We are entering an era of the Internet of Everything (IoE), where networks will gain context awareness like never before, with increasing processing power to connect people, process, data, and things. This will create dramatically new opportunities for businesses – by connecting the unconnected and better connecting the connected.

Is your enterprise network ready to take you where you want to go? Will it support the business applications that you are being asked to support? In this keynote, Rob Soderbery, SVP/GM of Cisco Enterprise Networking, discusses the potential and the challenges of IoE. Learn how the next-generation network architecture – programmable, open, and application-aware – can help organizations unleash the power of IoE and get you where you want to go.

 


* Yes, I was super sad it’s not booth 1337.



Napkins, Toolboxes, and IT...

What do these three things have in common? For Lone Star College System (LSCS), the fastest growing community college in the U.S., these items helped build a whole new technology foundation.

While at a higher-education conference, CIO of LSCS, Link Alander, and former VP of data center virtualization at Presidio, Steve Kaplan, began hashing out what it would take to deliver the best computing experience—on a napkin. They jotted down all the ways technology could deliver a customizable, optimal, and educational platform to students and faculty.

The vision was a toolbox, not just any one tool: an entire resource pool for professors to contribute to — and students to pull from — anytime, on any device, from anywhere.

Click here to read the full story.



And now, the Conclusion! VDI – The Questions You Didn’t Ask (But Really Should)

Back in January we launched a blogging series (with the above title) exploring the various server design parameters that impact VDI performance and scalability.  Led by Shawn Kaiser, Doron Chosnek and Jason Marchesano, we’ve been exploring the impact of things like CPU core count, core speed, vCPU, SPECInt, memory density, IOPS and more.  If you’re new to VDI and trying to avoid the pitfalls that exist between proof of concept and large scale production, this has hopefully been an insightful journey which has yielded some practical design guidance that will make your implementation that much more successful.

Here’s a snapshot of the ground we covered along the way:

  1. Introduction
  2. Core Count vs. Core Speed
  3. Core Speed Scaling (Burst)
  4. Realistic Virtual Desktop limits
  5. How much SPECint is enough?
  6. How does 1vCPU scale compared to 2vCPU’s?
  7. What do you really gain from a 2vCPU virtual desktop?
  8. How memory bus speed affects scale
  9. How does memory density affect VDI scalability?
  10. How many storage IOPs?

What?  There’s a Whitepaper? (who doesn’t like free stuff?)

If you’re just catching up with us, and want a nice, complete, whitepaper-ized version of the series, this is your lucky day.  You can download the paper here.

VDI No-Holds-Barred Webinar!

Finally, last month, as part of the series we also offered a webinar on BrightTalk, where our panel of experts walked us through the design considerations explored in the series and fielded audience questions.  It was one of those high-quality interactions that hopefully remains useful to those who catch the replay.

If you missed the event, you can watch it here.  The guys fielded a lot of great Q&A from our community, and in fact there were a few lingering questions we didn’t have time to address during the event.  They’ve captured these questions, along with their answers, below.

What’s Next?  Got a Question?

I hope the journey was as impactful for you as it was for me – I should point out that the guys are considering what to attack as part of the next phase of their lab testing.  I would highly encourage you to provide your input (or questions) by emailing us at 9questions@cisco.com.  Let us know what’s on your mind, where we should take the test effort to better align with the implementation scenarios you’re facing, etc.  Thanks!

 

Q&A From Our Web Event:

1)      I have used Liquidware labs VDI assessment tool to help me understand how to accurately size my customer’s virtual desktops.  Should I not be using tools like these?

Answer:  These tools do a great job of looking at utilization in existing environments.  The potential issue is that most of these tools only aggregate MHz utilization; there is no concept of SPEC conversion to properly map to newer processors.  The other thing that we have seen with using this raw data and trying to fit it all into a particular blade solution is that there is usually no VM “overhead” taken into consideration.  So sometimes it looks like you can have 20 desktops on a single physical core of a server, and that’s just too aggressive when you look at typical vCPU oversubscription, etc.  The bottom line is that these types of tools are great initial sanity checkers to validate the possibility of VDI consolidation.  If you are involved in these types of assessments and are working on a Cisco UCS solution, we have tools that can assist in importing this type of data and helping you make more pointed recommendations as well.  Just email 9questions@cisco.com and we can discuss!
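To make the point about SPEC conversion and oversubscription concrete, here is a rough back-of-envelope sketch in Python. Every figure in it (the per-desktop MHz, the per-core performance ratio, the overhead allowance, and the oversubscription cap) is an assumption chosen for illustration, not Cisco sizing guidance.

    # Hypothetical sizing sanity check: convert measured desktop CPU demand on
    # an older processor to an equivalent demand on a newer one using a
    # per-core SPEC-style performance ratio, add a VM overhead allowance, and
    # compare the implied desktops-per-core against a vCPU oversubscription
    # ceiling. All numbers below are illustrative assumptions.

    measured_mhz_per_desktop  = 350      # avg MHz reported by the assessment tool
    old_core_mhz              = 2_930    # clock of the assessed host's cores
    perf_ratio_new_vs_old     = 1.6      # assumed per-core SPEC-style ratio
    vm_overhead_fraction      = 0.10     # assumed hypervisor/VM overhead
    max_vcpu_oversubscription = 8        # assumed vCPUs per physical core cap

    # Demand per desktop expressed as a fraction of one *new* physical core
    core_fraction = (measured_mhz_per_desktop / old_core_mhz) / perf_ratio_new_vs_old
    core_fraction *= (1 + vm_overhead_fraction)

    desktops_per_core_by_cpu = 1 / core_fraction
    desktops_per_core = min(desktops_per_core_by_cpu, max_vcpu_oversubscription)

    print(f"CPU-demand limit : {desktops_per_core_by_cpu:.1f} desktops/core")
    print(f"Oversubscription : {max_vcpu_oversubscription} desktops/core")
    print(f"Planning figure  : {desktops_per_core:.1f} desktops/core")

With these example numbers the oversubscription cap, not raw MHz, ends up being the binding limit, which is the kind of correction the raw assessment data alone would miss.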

 

2)      Do you find a performance increase or higher host density by scheduling VMs with similar vCPU counts on the same hosts?

Answer:  We did not test mixing 1vCPU and 2vCPU workloads, so we cannot technically qualify an answer about the impact this would have – but this is a great idea and we will definitely consider it in our phase 2 testing.

 

3)      Did you find that giving more RAM to a VM caused the performance figures to decrease? E.g., 100 VMs at 4GB/VM compared to 100 VMs at 1.5GB/VM.

Answer:  Since our testing used a static memory allocation of 1.5GB, we do not have the data to answer this particular question – again, another great idea to possibly include in our phase 2 testing.

 

4)      Hi. A bit unclear on the last slide.  150 simultaneous desktops produce 39000 IOPS.  Is this assuming physical desktops, with figures based on IOPS for each physical desktop?  If so, I don’t see how the IOPS figure is relevant, as it is only on local disk, not SAN.  I think I misunderstood the last slide!

Answer:  The 39,000 IOPS figure was measured by both vCenter and the storage array controller as the total number of IOPS required to boot 150 virtual desktops. No testing was done with physical desktops.
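For context, a quick illustrative calculation of what that boot-storm total implies per desktop (simple arithmetic, not additional test data):

    # Average per-desktop IOPS implied by the measured boot-storm total above.
    total_boot_iops = 39_000
    desktops = 150

    print(f"~{total_boot_iops / desktops:.0f} IOPS per desktop during boot")
    # -> ~260 IOPS per desktop, far above typical steady-state figures,
    #    which is why boot storms tend to dominate storage sizing.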

 

5)      Loved the Cisco blogs regarding vCPU, SPEC, memory speed, CPU performance.  Is there a similar piece of research that has been done regarding server VM performance rather than VDI?

Answer:  Not *yet….  Hint hint.  :)

 

6)      Are there unique considerations for plant floor VDI deployments?  The loads on those systems are typically higher on a continuous basis.

Answer:  Specific use cases for VDI with different workloads definitely exist, and you should size based on those requirements.  If you feel your individual application requirements are not close to one of the pre-configured LoginVSI tests, the LoginVSI tool does allow for custom workload configurations where you can have it simulate working against your own apps.

 


MegaTrends: The Need for Securing Data Center Traffic

Data Centres are evolving rapidly, in response to the many industry IT Megatrends we have previously discussed. Services and applications are increasingly being delivered from very large data centres and, increasingly, from hybrid and public clouds too.

Specifically, a good example of services being delivered from data centres is Hosted Desktops. I discussed in my last post how technologies such as TrustSec can help secure VXI/VDI deployments. VXI is a good example of a service originally delivered only from private data centres, now being delivered As A Service as well.

Video is (and will be) increasingly delivered from data centers as a service. Infrastructure services (servers/VM, storage…) are also delivered internally more and more through Private Clouds.

Consequently, securing those environments is now perceived by our customers’ CTOs and architects as the biggest barrier to adopting clouds on a much larger scale.

We will therefore look at how TrustSec can pervasively help secure all data centre traffic.

I have asked Dave Berry, Cisco EMEAR Security Architect, to answer a few questions for me. Dave has held several positions at Cisco where he assisted many very large Enterprise customers in transforming their IT securely.

Q. Dave, how do you see Data Centre security evolving?

The modern Data Centre is a dynamic environment, with rapid changes to where services are instantiated and then moved.  In addition, the on-going consolidation of servers using virtualisation increases the density of Virtual Machines; this challenges traditional approaches to maintaining separation between services at different security levels, segmentation in general, and access policy control.

Q. How has this been done traditionally?

Traditionally, the separation of services has been implemented by using VLANs, PVLANs and other similar technologies, combined with the deployment of Access Control Lists on switches and/or by using Stateful Firewalls to control the flow of traffic.  Supporting this on a very large scale can quickly become too complex and expensive, both because of the requirement to insert control points between the different security levels/layers and because of the complexity of managing this dynamic environment.  The task has become so huge that it can lead to misconfiguration and security vulnerabilities due to these configuration errors. Perhaps even worse, recent studies have suggested that, because it has become so challenging, some companies are failing to secure their virtual server environments to the same level as their physical servers.

Q. I know you have been involved recently on really big Data Centre security architectures, using TrustSec?

Using TrustSec Security Group Tagging, the problems above can be dramatically simplified by using the network to identify which “group” the data packets belong to and then tagging all of the packets from those sources with the appropriate Group Tag.  In a similar manner, when the packet is delivered to the destination (either a physical or virtual server port), the destination group is also identified and access is allowed or denied by the network according to security policy.

Q. Ok, got it Dave. Could you describe that further for us?

You can see this more clearly by referencing the diagram below.  For example, a data packet coming from the Developer Zone can only reach other devices in the same zone or selected servers in the Storage Zone according to the Security Access Policy.

SG-ACL Server Segmentation

Using the same principles for all Zones, it becomes possible to control the flow of traffic inside the Data Centre by assigning Security Group Tags to devices within each Zone.  The actual control is done using a simple control matrix, reducing or minimizing the need for expensive separation technologies.  The control function can be done on the ASA Firewall (v9.0.1) or on TrustSec-enabled switches like the Nexus 7000, 5500 and 2000. The SGT assignment functions are provided in many Cisco products, but I would highlight one as an example of DC security automation: the Nexus 1000V. With the Nexus 1000V, the SGT assigned to a virtual server stays with that VM as and when it is moved in the virtualised infrastructure, so live migration of virtual servers need not require security policy adds, moves and changes.
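To make the “simple control matrix” idea concrete, here is a small Python sketch of the lookup logic. The group names and permit/deny entries are invented examples, and in a real deployment enforcement happens in the switch or firewall hardware, not in software like this.

    # Illustrative model of an SGT control matrix: traffic is classified by
    # (source group, destination group) and permitted or denied by one matrix,
    # independent of IP addresses or VLANs. Groups and entries are examples.

    DENY, PERMIT = 0, 1

    policy = {
        ("Developer", "Developer"):  PERMIT,
        ("Developer", "Storage"):    PERMIT,   # selected storage access
        ("Developer", "Production"): DENY,
        ("Guest",     "Production"): DENY,
        ("Guest",     "Internet"):   PERMIT,
    }

    def allowed(src_group, dst_group):
        """Default-deny lookup in the group-to-group matrix."""
        return policy.get((src_group, dst_group), DENY) == PERMIT

    print(allowed("Developer", "Storage"))   # True
    print(allowed("Guest", "Production"))    # False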

Q. Feedback I have personally received is that TrustSec might be perceived as complex. Seems like it is quite the opposite, doesn’t it?

Deploying TrustSec enabled devices in the Data Centre can dramatically simplify and reduce the cost of separating resources.  By using TrustSec Group Tagging, the job of controlling and separating the data traffic can be moved to the edges of the network, reducing the need for inserting control mechanisms into the path of the traffic.

In our efforts to ensure that the technology is easy to deploy and works as intended, we have just completed solution-level testing of our TrustSec v3.0 solution. This work is undertaken to validate operation of the advanced DC TrustSec features referred to in this blog entry on the Nexus 7000, 5500, 2000 and 1000v, working with the TrustSec features across the BN portfolio, including Catalyst switches, Wireless LAN Controllers, ISR and ASR 1000 routers. Please look for forthcoming docs on the TrustSec 3.0 solution at the following page.

Sharing intelligent ‘context’ information from one part of the Enterprise with another using TrustSec can significantly reduce the operational effort and time involved in implementing, managing and auditing security rules; it can also help address new challenges, such as controlled access to critical DC applications for BYOD users.

Thanks Dave.

Please tell us if this has been useful.

As I already said in my previous blog, Megatrends bring their own set of security challenges. Solving them architecturally using technologies that can be pervasively deployed throughout the network is in my opinion the only sustainable and cost effective way.



#EngineersUnplugged S2|Ep12: The Evolution of Virtualization

In this week’s episode of Engineers Unplugged, join Gabriel Chapman (@Bacon_Is_King) and Dave Henry (@davemhenry) as they chart the evolution of virtualization, from mainframes up to software defined data centers. This is a technical deep-dive you don’t want to miss:

One thing that hasn’t evolved as much, the unicorn, shown here, fully virtualized:

Introducing the fully virtualized unicorn, courtesy of Gabriel Chapman and Dave Henry.

Welcome to Engineers Unplugged, where technologists talk to each other the way they know best, with a whiteboard. The rules are simple:

  1. Episodes will publish weekly (or as close to it as we can manage)
  2. Subscribe to the podcast here: engineersunplugged.com
  3. Follow the #engineersunplugged conversation on Twitter
  4. Submit ideas for episodes or volunteer to appear by Tweeting to @CommsNinja
  5. Practice drawing unicorns

We’ll see you here next week! As always, if you have show ideas or will be at EMC World and want to be internet-famous, follow the steps above and become part of Engineers Unplugged.



On The Edge: How can Network Managers win at Interop Las Vegas?

“Before anything else, preparation is the key to success.” – Alexander Graham Bell

Sometimes it seems like our senses are being assaulted by product pitches from virtually everywhere. Most of these pitches have one thing in common – they assume you’re interested. What if someone actually took the time to find out what’s important to you before proposing anything? What if they figured out how you can be more successful, and started from there?  Well, that’s exactly what we did.

Over the last few months, we’ve interviewed hundreds of IT experts in many different roles at dozens of organizations worldwide. Our mission was to find out what their most important business and technology initiatives are, and how people in various IT roles – Network Manager, Server Manager, Storage Manager, Applications Manager, CIO, etc. – define success for their specific roles.

As you might expect, each person within these IT organizations has a slightly different definition of success, despite having common technology and business goals.  For example, while everyone in IT is impacted in some way by cloud and mobility technology transitions, Network Managers are more focused on improving user experiences and reducing deployment costs than their Security brethren who concentrate on compliance and data protection.

According to the Network Managers interviewed so far, their most important success factors are:

  1. User satisfaction and reducing the overall number of help desk calls
  2. On time, on budget deployment of new projects
  3. Security and compliance
  4. Delivering new business services

With these customer priorities in mind, at Interop Las Vegas we’ll showcase several solutions – both Cisco and Partner ones — that directly help Network Managers and other IT professionals achieve their most valued success factors. So regardless of how well you do off the show floor, we’ll help you find a way to win with your customers when you get back home.

All these solutions will be displayed in a working NOC on the show floor, so you can see exactly how they apply in the real world in real time.  There are, of course, many solutions that can directly address these priorities, so how will you find out which ones are most applicable to your situation? Here are a couple of examples:

  • Right after the holidays, IT help desk calls increase dramatically as people with new mobile devices bring them to work and try to access the network. What if you could track all the devices – wired and wireless – that are registered on your network from one centralized view? Then, with policies in place to restrict access to certain applications and services, what if you could have users self-register their devices, so help desk calls only occur when there’s an isolated on-boarding issue, instead of every time someone brings a new device into the office? Apply this concept to guest access, and you can see how the help desk calls could either increase or decrease, depending on whether or not you use a “unified access” approach that includes one network, one policy, and one management, and addresses these challenges proactively.

Is Hulu, Facebook, or Netflix the culprit on your network? See which specific web applications are using the most bandwidth on your network, and use this intelligence to prioritize services and reduce help desk calls.

 

  • New cloud applications can result in a wave of new help desk calls when they compete for bandwidth on your busy network. What if you could easily see, prioritize, and manage all the applications running throughout your network – including routers, switches, and wireless controllers – all from one common management GUI, without probes? With recent innovations in Application Visibility and Control, you can see how your mission-critical applications are performing in real time, so you can tune them before your SLAs are compromised, and before your help desk phones start ringing off the hook.

In both of these cases it’s easy to see how intelligent solutions can directly help Network Managers be more successful by making users happier and preventing help desk calls.

We’ll be sharing many different solutions that align to other IT-specific success factors and challenges over the next few weeks. If you’re headed to Interop Las Vegas May 7-9, or Cisco Live Orlando June 23-27, come visit the Cisco booth for experiences that are designed with your success in mind. And by the way, rumor has it that some of our TechWise TV friends may make a special guest appearance, so you may also see some super-geeks in action in our NOC on the show floor.

 


Have suggestions on what you’d like to see in our booth NOC?  Want to know more about the experts we’ve interviewed?  Ask in the comments!



Introducing MDS 9710 Multilayer Director and MDS 9250i Multiservice Switch – Raising the Bar for Storage Networks

The data center landscape has changed dramatically in several dimensions. Server virtualization is almost a de facto standard, with a big increase in VM density, and there is a move toward a world of many clouds.  Then there is the massive data growth: some studies show that data is doubling every 2 years, while adoption of solid-state drives (SSDs) keeps increasing.   All of these megatrends demand new solutions in the SAN market.  To meet these needs, Cisco is introducing its next-generation storage networking innovations with the new MDS 9710 Multilayer Director and new MDS 9250i Multiservice Switch.  These new multiprotocol, services-rich MDS innovations redefine storage networking with superior performance, reliability and flexibility!

We are, once again, demonstrating Cisco’s extraordinary capability to bring to market innovations that meet our customer needs today and tomorrow.  

For example, with the new MDS solutions, we are announcing 16 Gigabit Fibre Channel (FC) and 10 Gigabit Fibre Channel over Ethernet (FCoE) support. But guess what? This is just a couple of the many innovations we are introducing.  In other words, we bring 16 Gigabit FC and beyond to our customers:

A NEW BENCHMARK FOR PERFORMANCE

We design our solutions with future requirements in mind. We want to create long term value for our customers and investment protection moving forward.

The switching fabric in the MDS 9710 is one example of this design philosophy. The MDS 9710 chassis can accommodate up to six fabric cards delivering:

  • 1.536 Tbps per slot for Fibre Channel – 24 Tbps per chassis capacity
  • Only 3 fabric cards are required to support full 16G line-rate capacity
  • Supports up to 384 line-rate 16G FC or 10G FCoE ports
  • Room for growth to higher throughput in the future, without forklift upgrades

This is more than three times the bandwidth of any Director in the market today – providing our customers with superior investment protection for any future needs!
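As a back-of-envelope check on how those fabric numbers fit together, here is a short Python sketch. The 48-port line card, 8 payload slots, and per-fabric-card contribution used below are assumptions chosen only to show how the headline figures relate; the MDS 9710 data sheet is the authoritative source.

    # Back-of-envelope check of the fabric claims above. The 48-port 16G line
    # card, 8 payload slots, and 256 Gbps per fabric card per slot are
    # assumptions for illustration; see the official data sheet for specifics.

    ports_per_line_card  = 48      # assumed 48-port 16G FC line card
    payload_slots        = 8       # assumed line-card slots per chassis
    fc_port_gbps         = 16
    fabric_cards_max     = 6
    gbps_per_fabric_slot = 256     # assumed contribution of one fabric card

    line_rate_need  = ports_per_line_card * fc_port_gbps         # 768 Gbps/slot
    fabric_capacity = fabric_cards_max * gbps_per_fabric_slot    # 1,536 Gbps/slot
    fabrics_needed  = -(-line_rate_need // gbps_per_fabric_slot) # ceiling -> 3

    print(f"Per-slot demand at line rate : {line_rate_need} Gbps")
    print(f"Per-slot fabric capacity     : {fabric_capacity} Gbps (~1.536 Tbps)")
    print(f"Fabric cards for line rate   : {fabrics_needed} of {fabric_cards_max}")
    print(f"Max line-rate 16G FC ports   : {ports_per_line_card * payload_slots}")

Under these assumptions, 3 of the 6 fabric cards already cover full 16G line rate on every slot, which is consistent with the redundancy discussion that follows.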

While we provide the performance, we also enable our customers to preserve their existing IT operations and what they already know. In other words, we provide consistent operations and easy migration with NX-OS and DCNM tools, the single Operating System and Management across MDS and Nexus portfolios.


INDUSTRY’S MOST RELIABLE STORAGE DIRECTOR


That brings us to reliability. In this environment, downtime is simply not acceptable, especially given its business cost.  If you can’t write to a storage array, then where do you go from there?

With MDS, we have a proven heritage of 8+ years of non-stop operations in our customers’ mission-critical production environments. This zero downtime extends even to software upgrades. We are building on the MDS nonstop operations heritage and taking it to the next level with our new MDS solutions.

For the MDS 9710, we used a combination of redundant components and a fault tolerant design to achieve the level of reliability we know our customers demand.

Take the switching fabric…

  • 3 fabrics needed for a fully loaded chassis
  • Add a fourth fabric, then you get… N+1 redundancy… no loss of bandwidth even if a fabric fails!
  • Unlike other vendors, whose Directors lose half the bandwidth if a fabric fails!

We’ve taken a similar approach with power supplies…

  • 3 power supplies needed for a fully loaded chassis today…
  • Add a 4th power supply for N+1 redundancy…
  • Two sets of 3 provides N:N grid redundancy
  • Similar approach with the fans, redundant supervisors…

Cisco MDS customers have come to expect In-Service Software Upgrades (ISSU). This allows software updates and upgrades to be applied to a fully operational platform with no interruption in service. That feature continues to be available on the new MDS platforms.

While both the MDS 9710 and MDS 9250i bring the most efficient front-to-back airflow, we’ve also reduced failure domains. Features implemented in hardware and in ASICs include distributing port channels across line cards, checking for corrupted frames at ingress as well as egress, and managing buffers in hardware.  All of these innovations protect the SAN and end devices from negative performance impacts, helping ensure that line rate consistently remains line rate.

UNMATCHED FLEXIBILITY

As a key component of the Unified Data Center, these MDS innovations provide unmatched flexibility with their multiprotocol solutions supporting Fibre Channel, FICON, Fibre Channel over Ethernet, Fibre Channel over IP, and iSCSI.

Take the others’ offerings in the market – it takes several products put together to offer multiprotocol support. Either they have FC but no FCoE, or they have both FC and FCoE but greatly oversubscribed bandwidth.

Unlike disparate solutions… our MDS 9710 supports both FC and FCoE. Cisco MDS is all about allowing customers to preserve their investments in Fibre Channel storage while also providing a path to FCoE. It’s about FLEXIBILITY. Customers have freedom of choice.  This enables consolidation of LAN and SAN into a single high-performance network over lossless Ethernet if they choose to go that route.

We’ve recently achieved a number of “firsts” for FCoE on the Nexus and UCS front as well:

  • Industry-first 40G FCoE with Nexus 6004, announced last January
  • First to prove multihop FCoE compatibility across 10GBaseT – standard Cat6a copper wire – with UCS, Nexus 5000, and Nexus 2000

SWISS ARMY KNIFE OF STORAGE NETWORKING

We are also introducing MDS 9250i Multiservice Switch with:

  • Up to 40 ports of 16 Gig Fibre Channel (FC) or FICON
  • 8 ports of 10 Gig FCoE
  • 2 ports of 1/10 Gig Ethernet for FCIP for SAN extensions or iSCSI
  • All ports at line rate

On top of this, it consolidates storage services into a single platform, such as:

  • SAN extension (FCIP) – connectivity between data centers for Business Continuity Disaster Recovery
  • IO Accelerator (IOA) to accelerate tape backup and disk replication
  • Data Mobility Manager (DMM) to migrate data between heterogeneous arrays

It will also eliminate service-device sprawl, as it can be used as a single storage services platform across MDS and Nexus. And guess what? Once again, the competition requires several boxes to deliver what our MDS 9250i can do.

BOTTOM LINE IS THIS

These new MDS innovations redefine storage networking with superior performance, reliability and flexibility.  With this launch, Cisco is raising the bar for storage networks! Also, please don’t forget that we have formed strong industry partnerships with EMC, Hitachi Data Systems, IBM, NetApp, and others in order to provide complete solutions to our customers.

One of our key business advantages is our unified approach to the data center, of which storage networking is an integral part. In other words, the Cisco Unified Data Center is an architectural approach combining compute, storage, network, and management across physical, virtual, and cloud environments, resulting in increased budget efficiency, more agile business responsiveness, and simplified operations – more than just a collection of products compared to other vendors, not only in the LAN but also in the SAN.

Please join our panel of experts in an online event I will be hosting as we discuss how new storage networking solutions from Cisco address new requirements for the SAN while establishing new benchmarks for performance, reliability, and architectural flexibility. 

Register here

https://twitter.com/Berna_Devrim



Evolution of Virtualized Routing

Hello, and welcome to my blog. As a new member of the Enterprise Networking Solutions Marketing team, I’ll be writing about connectivity to the cloud, Software Defined Networking (SDN), and virtualized routing. You can expect to learn details about Cisco’s architecture and product offerings in these areas. Further, based on your comments, I can go into as much detail as necessary.

First, a brief background. I moved to the Bay Area last November after almost 20 years in Boston, so I will be musing about culture shocks between the two coasts. I may also learn to like the Warriors and Niners, but I will always be a Celtics and Patriots fan.

Second, a promise I make. One of my favorite authors, Mark Twain/Samuel Clemens, wrote in a letter to a friend in 1880: “I notice that you use plain, simple language, short words and brief sentences. That is the way to write English — it is the modern way and the best way. Stick to it; don’t let fluff and flowers and verbosity creep in.” My promise is that I will try my best to adhere to this standard of writing, especially for such a technology-focused blog. If I let any fluff and flowers in, please let me know in the comments and I will make it up to you.

With that, let’s jump into our first topic: The virtualization of routing. Specifically, the Cloud Services Router 1000V (CSR 1000V), which was announced at Cisco Live-San Diego on June 12, 2012.

CSR 1000V: Evolution of virtualized routing

While a previous blog covered the introduction of the CSR, I am happy to report that the CSR 1000V reached general availability on March 29, 2013, a significant milestone for Cisco and for the virtualization of full-fledged routing in general. After extensive field trials with 50+ customers, the CSR is being deployed in production networks across enterprises and cloud service providers around the world.

Features:

  • In the GA release, the CSR is a full-fledged secure virtual router, with IOS XE routing, NAT, DHCP, IPsec, DMVPN, FlexVPN, HSRP, AppNav, Firewall, MPLS, LISP, Multicast, L2TP, QoS, NetFlow, AVC, and full IPv6. It runs on VMware ESXi.
  • Support for VXLAN and GETVPN is coming soon.

Virtualization:

  • Runs in VMware vSphere Ent. (vMotion, DRS); support for Citrix XenCenter and Red Hat KVM is coming soon.

Elasticity:

  • 4-vCPU/4GB @ 10/25/50 Mbps
  • Higher throughput options are coming soon.

Management:

  • Cisco Prime NCS, VMware vCenter, vCloud Director

Licensing:

  • Currently offered with term-based licensing for 1, 3, 5 year licenses;
  • Other flavors of licensing are coming soon.

CSR 1000V and next generation virtualized routing

CSR 1000V offers cloud service providers an unprecedented scalability option to host multiple tenants while delivering differentiated services tailored to each tenant. By creating a full-fledged virtual routing infrastructure to complement the physical infrastructure, the CSR 1000V and other Cisco virtualized services products enable a 10x scaling capability for cloud service providers. Where they could only host 250 tenants per pod, cloud service providers can now host up to 2500 tenants with the same exact physical infrastructure by adding the CSR 1000V.  This picture shows a typical deployment at a cloud service provider with the CSR 1000V operating as virtual infrastructure.

Virtual networking and cloud services

Combined with the Cisco Nexus 1000V virtual switch, the CSR 1000V offers a full-fledged routing and integrated services architecture. The CSR 1000V has Cisco AppNav built in, which allows load balancing across virtual WAAS devices (Cisco vWAAS). When connected to the Cisco Nexus 1000V through Cisco vPath, the service applies security policies at the Cisco Virtual Security Gateway (VSG) and the Cisco ASA 1000V Cloud Firewall. This allows complete service chaining in the virtual infrastructure at the cloud service provider and, more importantly, on a per-tenant basis.

CSR as an MPLS CE Router:

A specific use case for the CSR 1000V is that it can be deployed as an MPLS Customer Edge (CE) router. In a typical (non-virtualized) deployment, a cloud service provider’s PE router acts as an MPLS tunnel termination point, which funnels per-tunnel traffic into VRF instances (with iBGP and eBGP peering) that are then mapped to L2 VLANs. Normally, the cloud service provider would have to allocate a VLAN at the PE and carry that all the way through to the server aggregation switches. With a single PE router limited to 4000 VLANs and even fewer VRF instances and BGP sessions (typically 1000 BGP peers), the number of tenants is limited to around 250. (The detailed network topology and BGP peering numbers have been calculated in a technical paper.)

With the CSR 1000V, a cloud service provider can now terminate the MPLS tunnels of tenants at the CSR 1000V in the virtual infrastructure and not be limited by the VRF and BGP scaling limitations of the physical infrastructure (PE router and aggregation switches). This picture shows the deployment of a CSR as an MPLS CE router and how a cloud service provider can overcome the VLAN and other scalability limits.

MPLS Customer Edge Router

Cloud service providers who have seen this in action are excited by the results of the EFT deployments.
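Before moving on, here is a simplified back-of-envelope version of that scaling constraint in Python. The per-tenant resource counts are assumptions chosen purely for illustration; the technical paper referenced above works through the real topology and BGP peering numbers.

    # Hypothetical per-pod tenant ceiling without the CSR 1000V: the scarcest
    # of the PE router's shared resources sets the limit. Per-tenant figures
    # are illustrative assumptions, not values from the technical paper.

    pe_limits  = {"vlans": 4000, "vrfs": 1000, "bgp_peers": 1000}
    per_tenant = {"vlans": 4,    "vrfs": 1,    "bgp_peers": 4}   # assumed

    ceiling = min(pe_limits[r] // per_tenant[r] for r in pe_limits)
    print(f"Tenants per pod (shared PE)           : ~{ceiling}")

    # Terminating tenant MPLS tunnels on per-tenant CSR 1000V instances moves
    # the VRF and BGP state into the virtual infrastructure, so it scales out
    # instead of being pinned to one PE -- the basis of the ~10x claim above.
    print(f"Tenants per pod (CSR 1000V per tenant): ~{ceiling * 10}")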

For additional validation, here is a Network World test of CSR in action.

What’s next:

In the next few blogs I’ll cover additional aspects of virtualization of routing, including the architectural and product perspective.

  • CSR 1000V and virtualization of services (vWAAS, AppNav and vPath)
  • Other use cases for the CSR 1000V
  • SDN and CSR 1000V: Cisco ONE strategy’s Virtual Overlays pillar and how the onePK API can provide programmability to the CSR 1000V
  • I just hosted a webinar with Ovum titled “How to choose a cloud service provider?” where we discuss the various factors to consider, especially the virtual infrastructure, when selecting a cloud service provider to host your virtual private cloud. Please see the recording here.

Thanks for reading and please comment on any and all aspects. I look forward to your comments. Stay tuned for the next blog post.


Cisco at CA World 2013

I am writing this from CA World 2013 in Las Vegas, where the atmosphere is charged up.  The theme for this conference is “Go Big. IT with Impact”.  The idea is that IT departments have to think big to make material business impacts and succeed.  At the end of the day, IT needs to solve business problems. Companies need the right strategy and technology solutions to harness the cloud, the Internet of Everything, mobile, and Big Data. In a “partner with impact” session exclusively for partners, David Bradley, SVP Channel Sales, CA Technologies, featured Rick Snyder, VP Global Partner Organization, Cisco.  The complete video is below.

In this video, Rick mentions Cisco Validated Designs (CVDs), which are reference architectures and blueprints for success. These designs incorporate a wide range of technologies and products into a portfolio of solutions developed to address the business needs of our customers.  These CVDs document solutions that are tested to facilitate and improve customer deployments.

CA Technologies stressed the importance of the SaaS delivery model and the benefits of providing application services.  CA also thinks that the opportunity for enterprise-class application services offered as managed services will grow rapidly.  Service providers should be beneficiaries of this shift. A common customer of Cisco and CA Technologies, Logicalis, won the partner impact award for its managed private cloud solution for a UK bank.

There were several interesting announcements in the past 24 hours. The following caught my attention.

  1. CA, in association with SAP, will provide Mobile Device Management solutions for enterprises. IT management and security are very high on the agenda for most enterprises. As more enterprise applications run on mobile devices, securing and managing these devices becomes critical.
  2. CA also announced DevOps automation capabilities with the Nolio products.  With application-aware system monitoring and management, enterprises can expedite application development and reduce costs.
  3. CA Technologies announced the acquisition of Layer 7 Technologies, a leader in the API management space.  APIs have become a necessity for application developers. REST-based APIs have allowed application developers to build rich web and mobile apps by providing the ability to integrate multiple services into individual applications.

These approaches complement the way in which we help our own customers. The Cisco ISR-AX routers are application-aware and improve the user experience at remote branch offices. The Cisco UCS has an API and facilitates infrastructure programmability.  This enables integration, orchestration, and execution of automated workflows. We will have two breakout sessions tomorrow at the conference:

TD055SN- Cisco UCS and CA: Operational efficiency with Converged Infrastructure (Speaker : Mark Balch, Director of Product Management, Data Center Business Unit)

TB056SN- Delivering Optimal Application Experience with Cisco ISR-AX (Speaker: Liad Ofek, Manager, Technical Marketing at the Service Routing Group)

I am eagerly looking forward to the keynote this evening by Richard Branson, Founder of Virgin Group.
