Over 1400 Customers Have Set the Foundation for Their Digital Transformation

– November 29, 2016 – 0 Comments

Just over a year ago, Cisco announced the general availability of APIC-EM (#APICEM), a software-defined networking (SDN) platform for the enterprise branch, campus, and WAN. In just over 12 months, more than 1,400 enterprise customers have deployed APIC-EM in their production environments, managing over 600,000 network devices and connecting over 1.5 million hosts! APIC-EM went on to win TechTarget’s Network Innovation Award and was a finalist in the Best of Interop SDN category.

Five months later, in March 2016, Cisco launched its Digital Network Architecture (DNA), a strategy for enterprises looking to win against the competition by using innovative new technologies to transform their business. Cisco has aligned its Enterprise Networks product portfolio around the key pillars of DNA – virtualization, analytics, automation, and management – with Cisco APIC-EM as the central component.


Driven by the demand we are seeing in the market for SDN solutions, we have added a number of new applications and features that enhance customer value when using APIC-EM. Last month, Cisco released version 1.3 of APIC-EM, and while there is a long list of exciting new features, some of the key highlights are:

  • General availability for EasyQoS, an end-to-end QoS management application.
  • Enhanced certificate authority management for greater security.
  • Faster branch deployments with the IWAN App, which now supports ISR G2 routers.
  • Improved network assurance capabilities with Path Trace.
  • Expanded automation capabilities with Plug and Play (PnP).

Openness is a key mantra of DNA, and in this latest release we have also enhanced our API support. The APIC-EM APIs are published for anyone to use. If you would like a deeper dive into API support for release 1.3, you can learn more in this blog series from my colleague Adam Radford, a Cisco Distinguished Systems Engineer.
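If you want to try those APIs yourself, here is a minimal sketch of authenticating to the APIC-EM northbound REST interface and listing managed devices with Python. The controller address and credentials are placeholders; endpoint paths and field names should be verified against the release 1.3 API documentation.

```python
import requests

APIC_EM = "https://sandboxapicem.cisco.com"  # placeholder controller address
AUTH = {"username": "devnetuser", "password": "Cisco123!"}  # example credentials

# Request a service ticket (token) from the northbound REST API.
resp = requests.post(f"{APIC_EM}/api/v1/ticket", json=AUTH, verify=False)
resp.raise_for_status()
ticket = resp.json()["response"]["serviceTicket"]

# Use the ticket to list the network devices the controller manages.
devices = requests.get(
    f"{APIC_EM}/api/v1/network-device",
    headers={"X-Auth-Token": ticket},
    verify=False,
).json()["response"]

for dev in devices:
    print(dev.get("hostname"), dev.get("managementIpAddress"), dev.get("platformId"))
```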

As IT moves faster to meet business demands, Cisco is moving just as fast, with an agile development process that enables us to deliver customer-driven features faster than ever. Don’t believe us? Brian McEvoy, Sr. Network Engineer at Symantec, has seen the following benefit of APIC-EM:

“QoS rollouts were once 6-month projects costing over $200K. With Cisco APIC-EM EasyQoS, we will go from months to minutes with nominal costs.”

Get started on your digital journey by downloading APIC-EM free of charge from here and start laying the foundation for your digital transformation today.


10 ways Cisco CloudCenter simplifies AWS

– November 28, 2016 – 0 Comments

I’m getting ready for AWS re:Invent, Nov. 28 – Dec. 2 in Las Vegas. CloudCenter offers so many great ways to add AWS to your data center-based hybrid cloud service offerings that I thought a “Top 10” list was in order.

CloudCenter is an application-centric hybrid cloud management solution. It lets you build on your Cisco infrastructure foundation and extend application deployment and management capabilities to include public clouds like AWS. Most enterprise IT organizations I work with already have experience with a public cloud like AWS and are now looking to broker multiple public cloud services to IT consumers. CloudCenter has a significant TCO advantage over hybrid-cloud or multi-cloud solutions that are environment-specific or use hard-wired automation.

CloudCenter integrates seamlessly with AWS and “Abstracts the cloud” so developers and users get the power of automated application deployment and management, without having to understand AWS API calls.

1 – Deploy a virtual machine on demand. Easily integrate with service catalogs such as ServiceNow or Cisco Prime Service Catalog, with a custom IT front end, or use the out-of-the-box enterprise marketplace. Give your IT consumers self-service, on-demand, “one-click” deployment of an OS image, with CPU and memory sized to the user’s choice of region, Virtual Private Cloud (VPC), and availability zone. The IT organization centrally controls who, what, where, when, and for how long OS images are deployed. You can track costs and usage with roll-up or drill-down reporting by application, cloud account, user group, and more.

2 – Manage images in multiple regions. Automate management of OS images across multiple AWS regions. Whether you build cloud-specific images, check out and harden an Amazon-provided AMI, or rent vendor-updated images, a simple CloudCenter API call updates the logical-to-physical OS image mapping to simplify maintenance and make sure users are always consuming the latest IT-approved OS images.

3 – Deploy any application stack on demand. Users can self-service deploy a fully configured infrastructure and application stack, including databases, middleware, application and web servers, and load balancers. CloudCenter automates deployment of existing enterprise applications or cloud-native microservice architectures. You get cloud-scale features with traditional applications without refactoring or changing application code. And you get full flexibility with composite topologies, including a mix of OS images, application services, containers, configuration tools, and unique AWS services.

4 – Automate continuous deployment. Did you know you can deploy from Jenkins to any data center or cloud with one CloudCenter plugin? A code change can trigger a build, which then triggers deployment of a full-stack environment including the latest build. CloudCenter makes it easy with a Jenkins plugin and simple API call integration with other popular build automation tools, as in the sketch below. And CloudCenter abstracts the cloud so your developers don’t have to spend time learning cloud-specific API calls or writing hard-coded scripts for different AWS regions and availability zones.
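As a rough illustration of what that API integration could look like outside the Jenkins plugin, the sketch below posts a deployment request after a successful build. The endpoint path, payload fields, and authentication shown here are hypothetical placeholders, not the documented CloudCenter API; consult the CloudCenter API guide for the real calls.

```python
import os
import requests

# Hypothetical deployment request a Jenkins post-build step might make.
# The URL, payload fields, and auth scheme are illustrative placeholders only.
CCM = "https://ccm.example.com"          # CloudCenter manager (placeholder)
API_USER = os.environ["CC_API_USER"]     # injected as Jenkins credentials
API_KEY = os.environ["CC_API_KEY"]

payload = {
    "applicationProfile": "ecommerce-stack",            # profile to deploy (example)
    "artifactVersion": os.environ.get("BUILD_NUMBER"),  # latest Jenkins build
    "targetCloud": "aws-us-east-1",                      # deployment target (example)
}

resp = requests.post(
    f"{CCM}/v1/deployments",   # hypothetical endpoint
    json=payload,
    auth=(API_USER, API_KEY),
    timeout=30,
)
resp.raise_for_status()
print("Deployment submitted:", resp.json())
```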

5 – Auto scale across availability zones. You need to deploy applications in multiple AWS availability zones in order to qualify for AWS’s 99.95% uptime SLA. You can deploy master and slave components in different availability zones and autoscale across both. You don’t need complex scripting in a CloudFormation template. You don’t need deep knowledge of security groups or access control lists. CloudCenter makes it easy.

6 – Migrate across regions. You can use powerful migration features to move an application from one AWS region to another. Once an application is deployed, users can select it, pick a target region, and migrate the application, and optionally its data, with one click.

7 – Automate micro-segmentation. When a cloud-agnostic Application Profile is deployed in AWS, CloudCenter automates the creation of Security Groups and Access Control Lists that deliver micro-segmentation with whitelist communication. You can easily deploy and manage a large number of applications without resorting to shared segmentation that opens up the security risk of east-west traffic.
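To make the whitelist idea concrete, here is a small sketch, written against the standard AWS boto3 SDK rather than CloudCenter itself, of the kind of security group this automation produces: a web tier that accepts nothing by default except HTTPS from a load-balancer group. The VPC and group IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a security group for the web tier; by default it allows no inbound traffic.
web_sg = ec2.create_security_group(
    GroupName="app1-web-tier",
    Description="Web tier for app1 (whitelist only)",
    VpcId="vpc-0123456789abcdef0",  # placeholder VPC ID
)

# Explicitly whitelist a single flow: HTTPS from the load-balancer security group.
ec2.authorize_security_group_ingress(
    GroupId=web_sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "UserIdGroupPairs": [{"GroupId": "sg-0fedcba9876543210"}],  # placeholder LB group
    }],
)
```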

8 – Include AWS-specific services. In general, we recommend you use cloud-agnostic services to model Application Profiles, so that a single profile can be deployed to the data center, AWS, and other clouds. That is key to lower hybrid cloud TCO. But you also have the choice to model AWS-specific services as part of an application profile, or to use callouts to invoke unique AWS services when needed.

9 – Benchmark price and performance. Compare price/performance metrics to determine when AWS is the most cost-effective choice, or to determine price differences between AWS regions. Performance across regions shouldn’t vary much, but price can. Also, use benchmarking to find out when multiple small instances are more cost-effective than one large instance.

10 – Stay in control. CloudCenter is an enterprise-class solution that includes governance and security features that meet the needs of the most demanding and complex IT organizations. Multiple AWS accounts? Multiple groups using a single AWS account? No problem. Control usage and get complete cost and usage visibility with policy-based guardrails that give users self-service, on-demand deployment, with IT oversight that helps users make the right choices every time.

Stop by booth #200 to see CloudCenter and AWS in action!


AWS or Private Cloud or both, what’s your strategy?

– November 22, 2016 – 0 Comments

Contributors: Bill Shields

Well, we just opened up a hornet’s nest, didn’t we? Our recent announcement of the new Cisco UCS S-Series Storage Server definitely caused a stir in the industry. We launched into the emerging scale-out storage market with a broad ecosystem of solution partners, backed by amazing customers like Cirrity and Green Cloud who use our Unified Computing System. We received really great responses from analysts, press, and independent bloggers alike. But you know you have hit the nail on the head when you garner the attention of people selling the opposite of what you are selling. We received passionate comments directly from people either working for Amazon Web Services or whose careers are linked to AWS.

These are the sorts of reactions we were looking for, and they validate that we are onto something bigger. The message landed with these folks as “on-premises storage or cloud storage,” while our message is “on-premises storage and cloud storage.” Our successful acquisition of CliQr, which is now Cisco CloudCenter, is one of those proof points. As part of our hybrid cloud storage strategy, we view cloud storage as complementary to on-premises data center storage. This is why we started an initial TCO analysis to compare the two, and the initial results were surprising.

 

 
A true hybrid cloud approach creates a symbiotic relationship in which data can move fluidly to and from the public cloud depending on your business needs. Public cloud service providers have done an amazing job creating easy-to-adopt services that are highly scalable. For example, take a look at the AWS S3 pricing web page: transferring data into AWS S3 is completely free, so you are only paying for the storage you use. Sounds great, right? They even take all major credit cards, making adoption almost as easy as submitting an expense report to your employer for a monthly mobile phone bill. Who wouldn’t want to start storing their data in the cloud when it’s this easy? It’s no wonder shadow IT has become “a thing.” The unfortunate downside is that it enables an individual or line of business to bypass any level of strategic vendor management or corporate-level long-term ROI analysis. But it definitely gets around having to ask your boss for CAPEX funds, especially when everyone is trying to reduce short-term costs and do more with less.

In the excitement to get started, it is easy to overlook that these services are built on a series of one-way bridges for importing your data. This is strategic: cloud storage services generate tremendous revenue based on how often you need to access that data and where it is distributed geographically, so keeping data in the cloud is vital to their success. This requires great feature breadth to cover every customer use case for importing data. It also requires “value-added services” to keep you happy.

In reality, many of these features are proprietary and create “cloud lock-in,” making it difficult to repatriate your data or even switch to a different cloud service provider should pricing change or you become unhappy with the service you are receiving. Because of this lock-in, customers and analysts have shared stories with us of out-of-control costs for interacting with data stored in the cloud over time, resulting in a sort of “cloud debt.” It reminds me of the cheap credit available in the years leading up to the Great Recession, when the bubble finally burst. Everyone seems well aware of the problem, but nobody has put forward data to quantify how big a problem this phenomenon actually is. That is what we are trying to do by providing a simple starting point for a much-needed longer conversation.

 

The nail we specifically hit was an initial TCO study comparing on-premises storage infrastructure to public cloud storage, specifically AWS S3. Creating a baseline here is very difficult because costs vary dramatically by customer, workload, and how often you need to access the data. Even AWS’s and Google’s own TCO calculators make many assumptions to create a simple baseline, which is the basis of their ROI sales pitch. So we put out a simple analysis for adding a high-capacity raw storage option to existing environments instead of rushing out to augment data center resources with cloud storage. This is a great option for our converged and hyperconverged infrastructure customers using our new S-Series as a low-$/GB solution for scale-out secondary storage or standalone on-premises bulk data storage.

We made every effort not to sound like cloud curmudgeons, because we believe there are many use cases where it is more advantageous for customers to use public cloud rather than on-premises infrastructure. Netflix and Zynga are two good examples of businesses that rely on cloud-native applications catering to geographically diverse users. Additionally, for many startups it would be impossible to invest in their own infrastructure to scale globally, and doing so would be a potential investment risk if user demand doesn’t grow.

On the flip side, however, we also found numerous examples where high-growth companies like Apple, The Weather Company, Dropbox, Instagram, General Motors, Target, and HubSpot either outgrew public cloud or use a multi-cloud strategy to take advantage of price wars between the major providers. In many of these cases, the companies hit a forecasted financial break-even point and moved to a managed private cloud service or brought everything wholesale back in house and on-premises. The reasons we found were usually better performance, lower cost, or increased control. Moz was one of the more interesting departures, as they shared hard data on saving $3.4 million by returning to building their own data centers, which is what got us thinking to dig into this a bit further.

With our S-Series launch, we first decided to tackle the same problem by helping customers reduce their data center storage costs. For a foundation we started with a single 4RU storage-optimized server, which is far more efficient in terms of $/GB than scaling out storage on 2RU fat nodes like our UCS C240s. The UCS S3260 lets customers start small at 56 TB and scale incrementally up to 600 TB. Customers can also implement data tiering in a box by using 28 of its 56 data drives for high-performance SSDs, resulting in an 89.6 TB flash front end for localized caching with 280 TB of SAS storage on the back end.


The S3260 is not just a high-capacity storage server; it is the Swiss Army Knife® of storage-optimized servers, offering unparalleled flexibility for deployment in brownfield environments or cost-effective scaling for greenfield projects. What makes the product unique, however, is its ability to support file, block, or object storage depending on the software abstraction layer you choose from our many solution partners. You can terminate Gigabit, 10 Gigabit, or 40 Gigabit Ethernet up to 160 Gbps depending on your throughput requirements, and you also get native Fibre Channel connectivity to an existing SAN, breaking down silos and unifying your storage. Last but not least, where Cisco UCS truly shines, and where no other vendor can compare, is management. We unified compute, storage, networking, and management into a centralized management stack with an embedded policy-based automation framework for elastic infrastructure management. This simplified approach not only significantly reduces the number of physically and logically managed devices but also enables zero-touch provisioning for adds, moves, and changes, providing industry-leading OPEX savings.

But a product alone doesn’t give customers a “cloud-like” TCO. We reached out to our Cisco Capital team, who put together some interesting numbers for how we can make financing storage easier. In our specific offer, Cisco Capital not only offered a lower total cost than AWS but also provided lower monthly payments. The great thing about these types of offers is that you can use either CAPEX or OPEX budgets, giving you greater flexibility to augment data center resources and staffing quickly. And if you want your employees to be able to expense these purchases on corporate cards, we can cover that too through our many worldwide partners who provide that capability.


The cost for active data varies widely, but everyone needs to access their data sooner rather than later. Our analysis compared only the cost of raw storage and purposely did not include the cost to interact with that data. That multiplier is what makes storage costs skyrocket when you store data in the cloud versus on your own infrastructure. We encourage customers to do their own analysis and to include how often they expect to interact with their data.
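As a starting point for that kind of analysis, here is a deliberately simple sketch in Python. The rates below are illustrative placeholders only, not current AWS or Cisco list prices; substitute your own storage, data-transfer, and hardware figures along with your expected access pattern.

```python
# Back-of-envelope storage cost comparison. All rates below are illustrative
# placeholders, NOT quoted prices; plug in your own figures.

capacity_tb = 280                 # usable capacity being compared
months = 36                       # evaluation period

# Hypothetical cloud rates
cloud_storage_per_gb_month = 0.023   # $/GB-month (placeholder)
cloud_egress_per_gb = 0.09           # $/GB transferred out (placeholder)
egress_tb_per_month = 20             # how much data you pull back each month

# Hypothetical on-premises figures
onprem_capex = 150_000               # server + drives (placeholder)
onprem_opex_per_month = 1_500        # power, cooling, support (placeholder)

gb = capacity_tb * 1024
cloud_total = months * (gb * cloud_storage_per_gb_month
                        + egress_tb_per_month * 1024 * cloud_egress_per_gb)
onprem_total = onprem_capex + months * onprem_opex_per_month

print(f"Cloud (storage + egress) over {months} months: ${cloud_total:,.0f}")
print(f"On-premises over {months} months:             ${onprem_total:,.0f}")
```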

If this is a challenge for you, feel free to reach out to your Cisco account team or contact us directly; we are happy to help you do the comparison and even put together a custom-tailored financing offer that is unique to your business. The great thing about our Cisco Capital program is that it covers the entire Cisco Data Center portfolio, offering single-monthly-payment financing for products, support, and services. Whether you are building a new data center or expanding your existing environment, you can buy everything you need to run your business from Cisco and even augment your own staff with qualified industry experts suited to your needs.

We appreciate the attention this topic has created and plan to continue this research to better help our readers understand the true costs of cloud storage versus on-premises storage before they send their data into the abyss. In the meantime, we invite you to share your own experiences, and we welcome anyone willing to present hard data on their own hybrid cloud successes and associated cost savings.


Cisco and Intel on Hyperconvergence and HyperFlex

– November 22, 2016 – 0 Comments

Contributors: Eugene Kim

 


Cisco’s UCS portfolio has grown significantly over the past few months. Just recently we announced our new storage server brand, the S Series. A few months before that, we brought to market HyperFlex Systems, Cisco’s complete hyperconverged infrastructure offering engineered on Cisco UCS.

Intel has been by our side for each of these product introductions, with its Xeon processors powering both. A long-time Cisco UCS partner, Intel helps deliver the performance and scale available to you in the UCS portfolio through its architectures and processor efficiencies.

Cisco hyperconvergence may be new to some of you, so to help you learn about HyperFlex we are teaming up once again with Intel to deliver a joint webinar on extending hyperconvergence into your UCS environments. Join us live on Dec. 1 to hear from Cisco’s Jeff MacTavish and Intel’s Craig LoConti. You can register HERE.

Jeff and Craig will explore how you can achieve a truly adaptive infrastructure using complete hyperconvergence, which extends the benefits of simplicity and speed to more applications and use cases. They will also describe how to fully unlock the potential of hyperconvergence as part of a comprehensive data center strategy with HyperFlex Systems, powered by Intel Xeon processors.

If you want to do a little webinar pre-study, here’s a fun way to learn about HyperFlex. You can decide on your own what side you want to be on!

Video: Introducing Cisco HyperFlex Systems – Choose Wisely

We often learn best from others, and that is true in the case of HyperFlex, too. Customers such as Ready Pac Foods and Blue Pearl Veterinarians are realizing business benefits from their Cisco HyperFlex deployments. Your organization can, too.

We look forward to having you online December 1. If you can’t make it, you can still learn more about Cisco HyperFlex at www.cisco.com/go/hyperflex.


5 Steps to Zero Trust Data Center with Cisco Tetration Analytics

– November 20, 2016 – 0 Comments

In a recent cybersecurity study, Gartner reported that data center protection is the foundation of digital business and innovation. As organizations embrace digital business, they need to address the lack of directly owned IT infrastructure and the prevalence of services outside of IT’s control. These services represent some of their biggest security risks.

Furthermore, Gartner predicts that by 2020, 60 percent of organizations will experience a major service failure due to security incidents. And service outages damage the brand. Yet according to the latest Cisco cybersecurity survey, 71 percent of large enterprises believe data protection concerns are impeding their innovation.

The key to successfully navigating these data center challenges is pervasive visibility – if you don’t see something, then you can’t possibly know how to protect against it. Cisco Tetration Analytics provides pervasive, unprecedented visibility across every node in your data center and cloud infrastructure.  Coupled with the right dynamic policy enforcement, it allows you to efficiently and effectively manage your organization’s risk posture.

Cisco Tetration Analytics incorporates a mix of network/hardware sensors that monitor every single packet at line rate and server/software sensors with very low overhead. These sensors feed a big data analytics cluster that operates in real time, presenting actionable insights with easy-to-understand visuals. Additionally, it provides application dependency mapping, automated whitelist policy recommendations, policy impact analysis, detection of policy deviations, and network flow forensics.

If you want a zero-trust data center model and are interested in migrating from blacklist to whitelist security to shrink the attack surface, but lack the information and resources to implement or maintain it, you can follow the simple five-step Cisco Tetration Analytics implementation guide below.


1 – Gain Real-time Visibility: Gaining pervasive real-time visibility is the foundation for zero-trust operations in the data center. This step entails installing the on-premises Tetration Analytics platform along with the software (host) and hardware (network) sensors. You can immediately start collecting and storing all data flows in your data center, all searchable when, where, and how you want.


2 – Map Application Dependencies: Tetration Analytics provides application-behavior-based mapping of processes and data flows using unsupervised machine learning algorithms. Smart enough to map and group business applications, Tetration autonomously creates application cluster views.


If any adjustments are needed, Tetration also allows you to redesign or model new groupings so that you can arrive at a confirmed Application Dependency Map (ADM) template to be used for creating the whitelist policy.


The application mapping can then be exported in various data interchange formats for the next step: whitelist policy enforcement on the data center infrastructure.


3 – Recommend Whitelist Policy:

Enabling zero-trust operations in the data center hinges on Tetration’s automated whitelist policy generation, which dramatically reduces the attack surface and simplifies compliance. In a whitelist policy, nothing communicates by default and only explicit exceptions are allowed. This prevents attacks from propagating across applications, tenants, and data.


Publishing the application dependency policies obtained in the previous step to a policy compliance workspace allows you to run policy experiments. You can then verify the compliance of past network traffic with respect to the recently calculated policies. Simply select the policy group you wish to test and the time period over which you wish to test it.


Next, run a simulation of the whitelist policy and assess its impact before applying it in the production network. You can immediately see which flows will be classified as compliant, noncompliant, or dropped when the policy is enforced.


The end result is a new whitelist policy whose deployment yields a zero-trust data center environment.
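Conceptually, the simulation step boils down to checking every observed flow against the whitelist and flagging anything that is not explicitly permitted. The sketch below illustrates that logic in plain Python; it is not Tetration’s data model or API, and the flows and policy entries are made-up examples.

```python
# Illustrative whitelist check; not Tetration's actual data model or API.
# A policy entry means: traffic from src_group to dst_group on proto/port is allowed.
whitelist = {
    ("web-tier", "app-tier", "tcp", 8443),
    ("app-tier", "db-tier", "tcp", 3306),
}

# Observed flows: (src_group, dst_group, proto, port)
observed_flows = [
    ("web-tier", "app-tier", "tcp", 8443),  # expected application traffic
    ("web-tier", "db-tier", "tcp", 3306),   # web tier talking straight to the database
    ("app-tier", "db-tier", "tcp", 3306),
]

for flow in observed_flows:
    verdict = "compliant" if flow in whitelist else "would be dropped"
    print(f"{flow[0]:>9} -> {flow[1]:<9} {flow[2]}/{flow[3]}: {verdict}")
```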

4 – Deploy Whitelist Policy: With the compliant and confirmed whitelist policy model available, it is time to export it to SDN controllers such as Cisco ACI. The SDN controllers then enforce the new whitelist policy model when provisioning data center infrastructure, creating zero-trust operations in the production environment.

5 – Operate Zero Trust Data Center: On an ongoing basis, Tetration continues to collect network flows and uses them to generate application connectivity patterns and detect any deviations from baseline application behavior. Combining unsupervised machine learning, anomalous behavior detection, and intelligent algorithms, Tetration Analytics brings new levels of network and security analytics to the data center.

Change is inevitable, as forces outside the business demonstrate, and it’s up to businesses to decide how they’re going to respond to it. IT infrastructure and operations organizations need to quickly and flexibly deliver business services and enterprise apps that best equip a modern workforce. Cisco Tetration Analytics will accelerate your organization’s digital transformation journey by giving you previously unattainable knowledge about your data center. Respond quickly to changes in your digital business and in your data center and cloud environment, while becoming more secure each day as additional policy is discovered, simulated, applied, and verified!

This is the only way your business can create competitive advantage and stay relevant in today’s digital world!

Learn More:

Cisco Tetration Analytics Overview

Cisco Tetration At a Glance

Cisco Tetration Video – Cisco IT implementation of zero trust data center operations across 50,000 servers with 70% less staff time by using Tetration Analytics platform

IDC White Paper – See the details in this storyline and the many additional benefits Cisco IT achieved with Tetration Analytics deployment – check it out!

 


“P” is for Performance… The new Cisco UCS S3260 has plenty of it!

– November 15, 2016 – 0 Comments

Contributors: Raghunath Nambiar


Earlier this month we introduced our new Cisco UCS S-Series Storage Servers, with the S3260 being first to market. The S Series is a new storage-optimized server category in the Cisco Unified Computing System™ (UCS) portfolio designed specifically to address the needs of data-intensive workloads such as big data, and for deploying software-defined storage, object storage, and data protection solutions. It can handle the rapid growth of unstructured data created by the Internet of Things, video, mobility, collaboration, and analytics so that businesses can access and analyze data quickly to generate insights in real time. In addition, our modular UCS architecture, of which the S Series is a part, lets you right-size infrastructure for the workload and operate with the efficiency and predictable TCO you need.

Video: Introducing Cisco S Series Storage Server

Now we have S3260 performance information to share with the official publication of our TPCx-HS benchmark results. For background, TPCx-HS is the performance benchmark and industry standard for Hadoop systems. Cisco has been a regular contributor of leading TPCx-HS results based on UCS.  Today we are announcing TPCx-HS performance results with Cisco UCS S3260 at two scale factors: 30TB and 300TB.


Figure:  TPCx-HS Audited Results with Cisco UCS S3260. From 30TB to 300TB

The highlights: we delivered the industry’s first-ever result at the 300 TB scale factor, accomplished with our Cisco UCS Integrated Infrastructure for Big Data validated design built on the UCS S3260. With these new results we also maintain our leadership at the 30 TB scale factor, with a slight improvement in performance compared to our previous results. Note that these results are audited and certified by independent TPC auditors.

The benchmark results show some of the breadth of our S Series performance capabilities. Be it a big data environment, a security environment, or a data protection/backup situation, we feel the S3260 is a great offering to “unstore your data.” Let’s get a customer’s point of view. Atlanta-based service provider Cirrity uses Cisco UCS storage-optimized servers to improve reliability, security, and manageability:

Video: Cirrity on Cisco UCS S Series

To learn more about the new Cisco S Series Storage Server offerings please visit www.cisco.com/go/storage. To investigate the value and benefits of our Big Data Hadoop offerings please visit www.cisco.com/go/bigdata.



How To Avoid New Hyperconvergence Silos?

– November 15, 2016 – 0 Comments

Contributors: Eugene Kim

Hi again!

Many of you are looking to refresh your current hardware by deploying your first or second cluster of hyperconverged infrastructure (HCI). You are likely starting with a VDI project, virtualized infrastructure in a branch office, or perhaps a cluster of applications for a specific department in the organization. Hyperconvergence allows you to dramatically streamline deployment, eliminate storage complexity, simplify daily operations, and increase scaling capabilities – sounds great, right?

Hyperconvergence is awesome. But I invite you to think strategically about what happens one to three years down the road. You probably plan to move more applications to HCI, but you already have many applications in place. Things won’t happen overnight, and you are unlikely to be comfortable migrating ALL of your applications to HCI.

New silos emerging?

Many solutions in the market put you in a tough spot: to enjoy the benefits of HCI, you must create new silos. There’s traditional or converged infrastructure (CI) with one set of tools that you know how to use, and then there’s HCI with separate management tools and interfaces. Pretty ironic, given that we were trying to eliminate data center silos!

Single platform for complete data center strategy

Here’s where HyperFlex fits in. Cisco always takes a broad and strategic architectural approach: the same platform you love from converged infrastructure with UCS is now used for both CI and HCI. This means you can use the same UCS Manager automation for UCS AND HyperFlex clusters, streamlining daily operations and increasing infrastructure reliability. Also, since both infrastructure types play well together, you can add UCS blade or rack servers to a HyperFlex cluster, expanding compute-only resources and thus optimizing your TCO for your specific needs. In fact, the same compute-only nodes can be shifted between HCI and CI clusters based on your changing application needs, enabling dynamic utilization of resources and a cloud-like experience for your infrastructure.

Check out this video for information on Cisco HyperFlex and how it fits in your overall data center architecture:

 

 

 

Want to learn more?


Cisco UCS Programmability

– November 14, 2016 – 0 Comments

Infrastructure automation, continuous deployment and efficient operations require programmable infrastructure.

 

Developers want to treat physical infrastructure the way they treat other application services, using processes that automatically provision or change infrastructure resources. Your operations staff needs to provision, configure, and monitor physical and virtual resources. This allows them to automate routine activities and rapidly isolate and resolve problems. Programmable infrastructure naturally facilitates DevOps methodologies, which makes development and operations teams more efficient and productive.

The Cisco UCS management framework is completely programmable – it manages infrastructure as code (IaC). Four innovations provide the foundation for IaC in Cisco UCS and Cisco HyperFlex by making the infrastructure programmable:

  • Software object model
  • Unified APIs
  • Virtual Interface Cards (VICs)
  • Service profiles and templates

Unified APIs

The software object model and unified APIs in the Cisco UCS management framework work in conjunction with the Cisco® fabric interconnects and the VICs to facilitate IaC. The unified APIs provide a common control plane to manage IaC and give you programmatic access to every system component. The APIs also facilitate custom development through the Cisco UCS PowerTool suite for Microsoft Windows PowerShell and a Python software development kit (SDK).
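For a feel of what that programmatic access looks like, here is a minimal sketch using the open-source Cisco UCS Python SDK (the ucsmsdk package). The UCS Manager address and credentials are placeholders; check the class IDs and properties against the SDK documentation for your release.

```python
from ucsmsdk.ucshandle import UcsHandle

# Placeholder UCS Manager address and credentials.
handle = UcsHandle("ucsm.example.com", "admin", "password")
handle.login()

try:
    # Query all blade servers through the unified API's object model.
    blades = handle.query_classid("computeBlade")
    for blade in blades:
        print(blade.dn, blade.model, blade.serial, blade.oper_state)

    # Service profiles are managed objects too (class ID lsServer).
    profiles = handle.query_classid("lsServer")
    print("Service profiles:", [sp.name for sp in profiles])
finally:
    handle.logout()
```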

The UCS Director APIs provide access to tasks and workflows that can be used to automate and orchestrate Cisco and third-party infrastructure resources. UCS Director supports a wide range of servers, storage, network devices, and converged and hyperconverged infrastructure. The API gives application and DevOps developers the complete programmatic access they need.

Figure 1. Overview of Cisco UCS Management Programmability, Object Model and APIs


The infrastructure service policies and templates are created by server, network, storage and other administrators, and they are stored in the Cisco UCS fabric interconnects. The software object model abstracts the hardware and software into programmable tasks. Service profiles allow you to define connectivity, computing, storage, chassis, and firmware settings once and then roll out the components with the same settings every time, with confidence that the settings will stay the same over time.

We’ve developed a new set of Learning Labs to give you experience with the UCS unified APIs and UCS Director APIs. The training modules feature the UCS PowerTool suite, the Python SDK, and custom task and workflow creation. There’s a learning track for infrastructure developers and a separate track for application developers.

For more information on the Learning Labs:

For UCS management, https://learninglabs.cisco.com/tracks/ucs-compute-prog

For UCS Director, https://learninglabs.cisco.com/tracks/ucsd-resource-automation


Your Branch Could Save You Millions or More from Security Attacks

– November 14, 2016 – 0 Comments

Like a trusted partner, Cisco is there.

With the ISR 4000s, you’re in good hands.

Why all the cleverly crafted references to insurance companies and security at the branch?

Much as there are unknowns around our cars, houses, and lives, there is also a growing number of challenges that your branch offices face. These realities are amplified by trends such as BYOD, the need for direct Internet access (DIA), and the increased likelihood of the branch office being targeted as an entry point for attacks. Check out our latest episode of TechWise TV below to learn more about how your router can provide the best insurance against security threats in the branch.

(TechWise TV’s Robb Boyd and security experts Jason Wright and Kural Arangasamy discuss the latest router security innovations)

Making sure your branch network, applications, devices, and users are protected from today’s (and tomorrow’s) sophisticated threats means having a multi-layered security strategy and model while still maintaining network performance and lowering costs. In other words, you need insurance for your network to protect against the known and the unknown.

The Cisco 4000 Series Integrated Services Routers (ISR 4000s) provide those layers of security protection for your branch offices, with:

  • IOS zone-based firewall
  • Snort IPS
  • Umbrella Branch
  • Firepower Threat Defense
  • Stealthwatch Learning Network License

With security integrated into your router, you can simplify branch management with an all-in-one platform, get visibility and analytics into your network, gain security intelligence to respond quickly to threats – all while lowering costs and maintaining performance.

Interested in hearing about one of the newest innovations in router security? Click below for a replay of a TechWise TV workshop on Stealthwatch Learning Network License and learn more about identifying behavioral anomalies and automating threat detection with intelligent machine learning sensors:


If you have any additional questions, feel free to visit the router security webpage here or check out Robb Boyd’s blog here.
