Krones Optimizes Virtual Machine Load Balancing and SAN Performance

– June 23, 2016 – 0 Comments

In order to be more responsive in a highly competitive marketplace, organizations are transforming to support 21st-century needs by becoming more agile and efficient in delivering centralized applications and services to a geographically distributed customer base. To accomplish these goals, organizations are consolidating data centers, adding virtualization technologies, and leveraging cloud architectures. These initiatives, however, along with others, are imposing significant strain on storage networks. The following discusses how Krones uses Cisco technologies to improve storage infrastructure uptime, performance, and scalability.

Krones built a scalable SAN using Cisco MDS

KRONES AG, headquartered in Neutraubling, Germany, plans, develops, and builds manufacturing plants and machinery for companies that require advanced process, bottling, and packaging technology. The company produces the machines that make millions of bottles, cans, and specialty containers daily for producers of beer, soft drinks, wine, sparkling wine, spirits, chemicals, pharmaceuticals, and cosmetics while accelerating mass production. It also provides logistics and IT solutions for its customers.


Krones’ manufacturing and services rely on three data centers to support business-critical production systems including SAP, databases, Microsoft Exchange, EMC storage, and VMware virtualization. Two data centers are deployed in an active-active configuration, and the third is used for control, backup, and recovery.

All three data centers connect over a Fibre Channel fabric. Krones has 1200 virtual machines and 200 physical servers, including Cisco UCS® servers, used in conjunction with EMC VPLEX storage virtualization and EMC VMAX and EMC VNX hybrid flash storage.

During a typical day, the three facilities handle 3.5 petabytes of data and serve up to 13,000 users.

Business Challenge:

  • Improve virtual machine load balancing and performance
  • Reduce downtime due to high error rates and broken links
  • Gain SAN management self-sufficiency

Solution:

After comparing SAN switching solutions from two vendors, Krones chose the Cisco MDS 9706 Multilayer Director, which addresses the stringent requirements of large virtualized data center storage environments. At the SAN edge, Krones chose the Cisco MDS 9148S Multilayer Fabric Switch, a high-performance, flexible Fibre Channel switch platform. It offers high density with up to 48 line-rate 16-Gbps ports in just one rack unit (1RU) and the industry’s lowest power consumption. With Cisco Prime Data Center Network Manager, Krones gained enhanced visibility into the SAN to identify and remedy bottlenecks, enhance link utilization, and analyze events to optimize performance.

Business Results

“We no longer experience errors or unbalanced links. Every link to the SAN fabric carries the same bandwidth, and all deliver outstanding performance.”
Michael Wein, System Administrator, KRONES AG

Highlights:

  • Gained significantly improved performance and uptime with 16-Gbps capacity across all nonblocking ports
  • Eliminated downtime due to broken cables or links through stateful failover
  • Gained visibility into the SAN environment and simplified management with a common operating system and Cisco Prime Data Center Network Manager

Krones dramatically improved failure handling with the new solutions. In the rare event that a supervisor module is reset, the active and standby Cisco MDS 9700 Series Supervisor Modules are synchronized, helping ensure stateful failover with no traffic disruption. The Supervisor Modules also automatically restart failed processes.

Nonblocking 16-Gbps ports are fully utilized for high scalability, and Krones looks forward to upgrading to 32-Gbps line cards for all ports in the future. In addition, one of the most significant benefits of the MDS 9706 and MDS 9148S is the ability to aggregate links and optimize bandwidth utilization.

Bottom Line:

Krones was able to optimize virtual machine load balancing and dramatically increase SAN performance by implementing Cisco solutions. As Krones moves forward with its new SAN switches, it is ready to handle the most demanding virtual environments and production-critical applications.
For More Information:
Read the complete case study:
http://www.cisco.com/c/dam/en/us/products/collateral/storage-networking/mds-9700-series-multilayer-directors/manufacturer-optimizes.pdf

To find out more about Cisco Storage Networking Solutions:
http://www.cisco.com/en/US/products/ps5990/index.html

For more information about the Krones Group:
http://www.krones.com/en/index.php

Tony Antony
Sr Marketing Manager

Storage Networking and IP Storage Networking

 


Big Data is a Big Deal for Your Infrastructure (and it is Cool!)

– June 22, 2016 – 0 Comments

Contributors: Rex Backman


As I write this, it’s Thursday evening and I am in my hotel room in Chicago. Our core Cisco Big Data team has been here for the past two days, focused on planning our worldwide 2017 activities. One thing stuck in my mind after our meetings is the phrase “Infrastructure is Cool!” – a phrase repeated often as we discussed Cisco; our data center solutions for Big Data, leveraging products such as UCS, Nexus, and Cisco management software; and our customers’ data and analytics needs.

Cisco is a proven and recognized leader in Big Data infrastructure solutions. We offer customers broad choices for their Hadoop environments, covering Cloudera, Hortonworks, IBM, and MapR, and we extend into the world of analytics with solutions from Splunk, SAP, and SAS. All of these offerings are applicable to your Red Hat ecosystem, too.

Our “cool” factor is apparent in many areas. First, performance and speed: Cisco has led the way in Big Data benchmarks and is #1 in several TPC-H benchmark areas. Second, ROI and TCO, helping you make the most of your budgets: our UCS platform is a proven contributor to positive ROI and TCO measurements, as the analyst firm IDC reports here. Third, productivity, with people working faster and smarter: our Big Data solutions, as recognized by many customers, drive increased productivity – be it for the IT staff (300% improvement in some cases) or data scientists (30% improvement). Read Forrester’s report on Cisco and UCS.

Next week we will be at the Red Hat Summit in San Francisco. Cisco’s technologies and solutions will be there, including Big Data & Analytics. As is typically the case at trade shows, our Cisco booth will be active with theater presentations running throughout the show and cool prizes available to win. So stop by booth #401 to learn and maybe win something, too!

From a speaking-session point of view, check out Cisco’s Duane DeCapite’s session on Hot Topics in Containers, OpenStack, and Hadoop:

  • Date: Tuesday, June 28th
  • Time: 10:15am-11:15am
  • Location: Moscone Center West, Level 2 | Room #2009
  • Abstract: Containers, OpenStack, and Hadoop are three of the most talked-about topics in the industry today. This session will highlight some of the hot topics related to the convergence of containers and OpenStack, including projects Magnum, Kolla, and Calico. Join us and learn about new communities and products, including Open Container Initiative (OCI), Cloud Native Computing Foundation (CNCF), Cisco NFVI, and Mantl / Shipped. This session will also feature a deep dive on how Hadoop can be deployed on OpenStack with the Hadoop-as-a-Service (HaaS) Cisco Validated Design (CVD).

To learn more about Cisco’s Data and Analytics solutions for the Data Center, please visit us here.

 


Swiss City Finds There’s Never Been a Better Time for Digitization

– June 22, 2016 – 0 Comments


When we talk about digitization and “There’s Never Been a Better Time”, the assumption is that we’re only talking about the future—and putting aside the past.

That’s not true.

The city of Biel, Switzerland, has been settled for hundreds of years, but it is getting in on digitization too. For older cities, this can be quite an undertaking, but the rewards are rich.

The network administrators in Biel were faced with two problems: managing a network in an old building that hadn’t been updated in years, and proactively addressing the city’s future needs. That’s not easy when you’re talking about a network that supports 2,400 employees across 60 locations – including 20 schools.

Thomas Bodenmann, CTO of the City of Biel, says he chose Cisco because it allows the city to leverage its existing foundation for future initiatives.

What Biel needed was an uncomplicated network that is scalable enough to meet any future demands. Armed with Cisco Catalyst 6800 Series and Nexus 5000 Series switches along with Cisco 5508 Wireless Controllers and the Cisco Catalyst Virtual Switching System, Bodenmann and his network engineers were able to create a robust wireless network backbone.

Once that backbone is in place, network administrators can work on making sure that the rest of the city is connected and communications are delivered with ease. For example, the city’s ambulance division has already received network services, and it has worked out so well that other divisions are requesting the same. Biel is pleased with the way its improvements have been received and is on record as saying that it anticipates growing its wireless network.

“We plan to implement Cisco ASR 920 Series Aggregation Services Router to push MPLS on a Layer 3 architecture,” said Bodenmann. “Cisco’s robust portfolio of solutions will help us reach the next level in the quality and responsiveness of our services.”

To date, the upgrades have saved the City of Biel a lot of time and money, making the program a big success.

Saving time and money, while planning for a more connected future? Sounds like there was never a better time for the City of Biel to upgrade its networks!

To read more about this case study, click here.


Deus Ex Machina: Machine Learning Acts to Create New Business Outcomes

– June 22, 2016 – 0 Comments

The term deus ex machina means “a god from a machine.” “Machine,” in this example, pertains to a crane that held a god over a theater stage in ancient Greek drama. Typically, the playwright would introduce an actor portraying a god at the end of his play who, from his elevated perch on the crane, would magically provide a resolution to an impossible dilemma to advance the plot to its end. Over the centuries “deus ex machina” has evolved to mean the intervention of unlikely saviors, devices or surprising events that bring order out of chaos in fast and often remarkable ways.

Today, machine learning is acting in much the same way. The technology is providing new and surprising solutions to seemingly unsolvable problems. And, in doing so, it’s producing insights that are changing business outcomes as never before. It is turning the improbable into the probable – using predictive analytics. And it’s enabling companies to march faster toward digital transformation.

Predictive Analytics Using Present Results

So, what is machine learning? Put simply, machine learning digitally processes data – millions upon millions of bytes of data – to run predictive models that learn from existing data and/or data generated in real time to forecast future behaviors, outcomes, and trends. In turn, these “predictions” can make the applications or devices you use smarter and more adaptable to you and to the context in which they are being used.

Each of us experiences the benefits of machine learning intelligently applied each day, whether it’s a recommendation engine used on an online shopping site or when your credit card is swiped and the transaction is automatically compared with a database to help your bank determine possible fraud.
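
As a concrete (and deliberately simplified) sketch of that fraud-detection pattern, the snippet below trains a small predictive model on labeled historical transactions and scores a new one as it happens. The features, data, and probabilities here are invented purely for illustration.

```python
# A deliberately simplified sketch of the fraud-detection pattern described
# above: train a predictive model on labeled historical transactions, then
# score a new swipe in real time. Features and data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [amount_usd, distance_from_home_km, hour_of_day]
X_train = np.array([
    [25.0,     2.0, 14],   # routine purchase -> legitimate
    [40.0,     5.0, 19],   # legitimate
    [60.0,     8.0, 12],   # legitimate
    [900.0, 4000.0,  3],   # large, far away, middle of the night -> fraud
    [1200.0, 3500.0, 4],   # fraud
    [750.0, 5000.0,  2],   # fraud
])
y_train = np.array([0, 0, 0, 1, 1, 1])  # 0 = legitimate, 1 = fraud

model = LogisticRegression().fit(X_train, y_train)

# Score a new transaction as it happens; the bank flags probable fraud.
new_txn = np.array([[1100.0, 3800.0, 2]])
print("fraud probability:", model.predict_proba(new_txn)[0, 1])
```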

Simply Algorithmically

The underlying software technology that makes machine learning possible is the optimization algorithm. These algorithms dynamically leverage data from both sensors and intelligent devices. But because the conditions in which they operate are highly variable, the algorithms need to sense, respond, and adapt within broad parameters.

One example in our personal lives is automobiles. Using onboard computing, they automatically interact and respond to environmental data – outside objects, lights and weather conditions to name a few. On the business side, machine learning applications in logistics, industrial automation, utilities and security systems can let machines speak directly with other machines. Installed algorithms can evolve and adapt based on continuous data analysis, so that machine and/or system performance is constantly optimized based on operational parameters.
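
To make that adaptive quality concrete, here is a minimal sketch of an online learner that updates incrementally as each batch of telemetry arrives, rather than being retrained from scratch. The sensor channels and labels are stand-ins, not any particular system’s data.

```python
# A minimal sketch of an algorithm that "evolves and adapts based on
# continuous data analysis": an online learner updated incrementally with
# each new batch of telemetry. Sensor channels and labels are stand-ins.
import numpy as np
from sklearn.linear_model import SGDClassifier

classes = np.array([0, 1])   # e.g. 0 = normal operation, 1 = needs attention
model = SGDClassifier(random_state=0)

rng = np.random.default_rng(0)
for _ in range(100):                              # each loop = a fresh batch
    X_batch = rng.random((32, 4))                 # 4 hypothetical sensor channels
    y_batch = (X_batch[:, 0] > 0.8).astype(int)   # stand-in ground truth
    model.partial_fit(X_batch, y_batch, classes=classes)  # incremental update

print(model.predict(rng.random((1, 4))))
```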

Continuous Learning Creates Consistent Success

Research from TDWI indicates that the number of companies planning to use machine learning is expected to triple over the next 3 years, bringing market penetration to over 50%. From speaking with Cisco customers, I think the research may understate the case, especially among companies with large data sets and good data quality. Machine learning has the potential to create so much value for corporations and users that it has the power to transform entire industries.

Machine Learning Secures Machines from Risk

Cisco is adopting machine learning in many ways, both for itself and for its customers. One of the most important areas is security. Unfortunately, malware is ever increasing and ever changing. With 50 billion devices expected to be connected via the Internet of Things (IoT) by 2020, the network risk is exponential. One way to identify and stop malware is by analyzing the communications that the malware performs on a network.

Thanks to machine learning, network traffic patterns can be analyzed to identify the culprit. Cisco OpenDNS is a great example. Just as Pandora automatically learns from your music listening habits or Amazon learns your preferred shopping patterns, the cloud-based OpenDNS is constantly learning from new Internet activity to prevent malicious attacks. Its underlying algorithms are always adapting to live events. Rather than reverse engineering malware reactively, OpenDNS focuses on removing the biggest obstacle to blunting attacks — humans — by building machine learning systems that provide advanced threat protection before, during, and after an attack.
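
To illustrate one generic way traffic patterns can be mined for malicious behavior – this is a sketch of the general technique, not OpenDNS’s actual algorithm – the snippet below fits an unsupervised anomaly detector to “normal” per-client DNS statistics and flags an outlier, such as a burst of lookups for long, never-before-seen domains. All features and numbers are invented.

```python
# A generic sketch -- not OpenDNS's actual algorithm -- of mining traffic
# patterns for malicious behavior: fit an unsupervised anomaly detector to
# "normal" per-client DNS statistics, then flag outliers.
import numpy as np
from sklearn.ensemble import IsolationForest

# Features per client: [queries_per_minute, unique_domains, avg_domain_length]
rng = np.random.default_rng(0)
normal_traffic = rng.normal(loc=[20, 15, 12], scale=[5, 4, 2], size=(500, 3))

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

suspect = np.array([[400, 380, 28]])   # rapid lookups of long random domains
print(detector.predict(suspect))       # -1 = anomaly, 1 = normal
```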

The God in the Machine Moves to the Factory Floor

Moving from the network to the factory floor, Cisco is working with Mazak, a world leader in the innovative design and manufacturing of machine tools. The company produces more than one hundred models of turning and vertical machining centers.

Mazak asked us for a solution that would help it significantly improve machine efficiency for its customers. The answer is “SmartBox,” powered by Cisco Connected Manufacturing software. The solution enables real-time manufacturing data and related analytics to be gathered from machines operating on the factory floor, using software embedded on Cisco Industrial Ethernet (IE) 4000 switches.

The solution lets Mazak easily connect any off-the-shelf sensor to the system for machine data gathering. Advanced manufacturing cells and systems, along with full digital integration, can then achieve free-flowing data sharing, i.e., process control and operation/equipment monitoring. Using Cisco Streaming Analytics, the company gains immediate visibility and insight into vital data produced by its manufacturing equipment. As a result, machine utilization is maximized and downtime is minimized through predictive maintenance.
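
As a toy illustration of that loop – not the actual SmartBox software – the sketch below watches a stream of machine readings, maintains a rolling baseline, and raises a predictive-maintenance alert when a value drifts past a tolerance. The window size, tolerance, and data are all invented.

```python
# A toy illustration of the edge-analytics loop described above: watch a
# stream of machine readings, keep a rolling baseline, and alert when a
# value drifts past a tolerance. Thresholds and data are invented.
import random
from collections import deque

WINDOW, TOLERANCE = 50, 1.25   # baseline window; allow 25% drift

def monitor(readings):
    """Yield an alert whenever a reading exceeds the rolling baseline."""
    window = deque(maxlen=WINDOW)
    for t, value in enumerate(readings):
        if len(window) == WINDOW and value > TOLERANCE * (sum(window) / WINDOW):
            yield f"t={t}: reading {value:.2f} above baseline -> schedule maintenance"
        window.append(value)

random.seed(0)
stream = [random.gauss(1.0, 0.05) for _ in range(200)] + [1.6]  # failing bearing
for alert in monitor(stream):
    print(alert)
```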

Cisco edge analytics solutions provide a secure, end-to-end connection from any device to the cloud. In a manufacturing environment, this means that the software can listen to “hot” data produced by machines and provide real-time insights for operational decisions impacting product or operational requirements. The software can even monitor specific zones within a factory by residing on a larger computing system.

Are you currently using machine learning in your company? As all of us continue to push new boundaries with machine learning, sharing stories with each other is the fastest way to expand our efforts. I would love to hear yours!


Choice and Flexibility in Deploying L4-L7 Services with Cisco ACI and Cisco CloudCenter: An In-Depth Journey

– June 21, 2016 – 0 Comments

Two weeks ago, I posted a blog highlighting the growing customer momentum for Application Centric Infrastructure (ACI), particularly in the context of customer successes in ACI-F5 joint deployments. I am seeing a growing trend among customers of late to directly deploy the Nexus 9k series of switches in ACI mode. At F5 Agility Vienna, in May, I met with quite a few customers who expressed a keen interest in integrating L4-L7 network and security services with ACI.

I had the privilege of hearing customer perspectives on how they plan to deploy the device package, the glue software that integrates Cisco ACI with a given vendor’s L4-L7 service device. In recent months, the Insieme Business Unit has built flexibility and choice into the modes in which customers can deploy L4-L7 service devices with ACI and Cisco CloudCenter. This blog is intended to create expanded awareness of the various L4-L7 service integration options with Cisco ACI.

It is best if I start by explaining the various modes of deploying L4-L7 services with ACI. They include:

– Managed Mode (a.k.a. Service Policy Mode)

– Unmanaged Mode (a.k.a. Network Policy Mode)

– Hybrid Mode (a.k.a. Partially Managed Mode)

Service Policy Mode: When ACI first launched with the concept of L4-L7 service automation, it went to market with Service Policy Mode (also known as managed mode). The main purpose, and the forward-looking approach at the time, was to have one single source of management, the Cisco APIC, fully automate the entire L2-L7 stack. This was done through a device package uploaded to the APIC that contained a list of features and policies to configure the service device (an ADC or firewall, for example). You can think of the device package as a plugin with a list of features presented to configure your service device. An important thing to keep in mind is that the device package is provided by ACI L4-L7 ecosystem partners, who decide which features are exposed or hidden compared to going directly to the device.

We have several customers who have deployed this mode and enjoyed a smooth, seamless operational experience. Whether you have a small DevOps team or separate teams managing various sized data centers, this can be a solution for you. You can use the GUI, CLI, or API to configure the APIC, which in turn configures all of your network (ACI) and L4-L7 services. Another major benefit comes when you automate everything through the northbound APIs, without the need for a GUI or CLI. Again, many different use cases can be addressed, but ultimately, Cisco stands behind this operational model and the ecosystem partners who support this methodology.
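
To give a feel for what automating through the northbound API can look like, here is a hedged sketch that authenticates to the APIC REST interface and pushes a simple tenant object. The hostname and credentials are placeholders, and the exact payloads should be confirmed against the APIC REST API documentation for your release.

```python
# A hedged sketch of "automate everything through northbound APIs":
# authenticate to the APIC REST API, then push a simple tenant object.
# Hostname and credentials are placeholders; confirm payloads against
# the APIC REST API guide for your release.
import requests

APIC = "https://apic.example.com"   # placeholder controller address
session = requests.Session()

# 1. Log in; the APIC returns a session cookie used on subsequent calls.
session.post(f"{APIC}/api/aaaLogin.json", verify=False, json={
    "aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}
})

# 2. Push configuration as a JSON object under the policy universe (uni).
session.post(f"{APIC}/api/mod/uni.json", verify=False, json={
    "fvTenant": {"attributes": {"name": "ExampleTenant", "status": "created"}}
})
```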

I found this great section that goes into more depth on the managed mode, and here’s the link if you need to read more.

Network Policy Mode: With everything said about Service Policy Mode, some customers were not yet ready for the APIC to be in full control of their service devices. So the second mode brought to market is called Network Policy Mode (also known as unmanaged mode, since the service device is not managed by the APIC). ACI still automates the network for you until the traffic gets to the device. Here’s the simple flow:

  1. ACI Fabric will deliver the traffic to the service device (ADC or FW, etc…)
  2. The service device will perform its task(s)
  3. The ACI Fabric will handle the traffic once it comes out of the service device

This mode requires manual network stitching, meaning you provide information about the ports to which the device connects, the ports that are part of a cluster, and the device operation mode: go-to mode, go-through mode, or one-arm mode (link to read more). With the proper hardware and software (ACI or service devices), customers can always migrate to Service Policy Mode at a later time, when they are ready. This shows again that ACI is able to adapt to the market and address customer needs.


Hybrid Mode: Hybrid Mode, also known as partially managed mode, is the third model for managing your service devices. Hybrid Mode enables L4-L7 service devices to be jointly managed through the Cisco APIC and a service device controller: L2-L3 network configuration of the service devices is done through the APIC, while more nuanced L4-L7 feature configuration is done through a specialized service device controller. Hybrid Mode requires a device package, and the key difference between Service Policy Mode and Hybrid Mode is the function of that device package. Hybrid Mode allows the device package developer to customize and manage a subset of L4-L7 features through the APIC. To keep things simple, the APIC has a version of a device package that enables it to communicate with the service device controller, and there can be many different flavors.

The configuration command comes from the APIC to the service device controller and is then pushed down to the service device with the full configuration. This allows simplicity on the device package side and management through the APIC, while keeping the full native functions and customizable parameters available to third-party vendors. Hybrid Mode also enhances the management of security devices such as firewalls, IPS, and IDS through the APIC: it allows the security administrator to manage security policies through a dedicated security controller, while configuring the network parameters and associating security policies to a network through the APIC.

The one thing common to all Hybrid Mode deployments is that the network portion is still fully automated through the APIC. In other words, you don’t have to worry about L2-L3; we’re just configuring L4-L7 in a slightly different way.

I am excited to say that we have more for you. Cisco’s recently acquired CliQr CloudCenter (now called Cisco CloudCenter) is a strategic tool for delivering simple, self-service application deployment to end users while letting network and IT administrators apply complex automated rules and policies in the background. During the application deployment process, Cisco CloudCenter dynamically deploys and creates objects like APIC managed services, ACI contracts, and endpoint groups. In the image below, you can see how the Cisco CloudCenter application profile models the inclusion of a load balancer managed service (1), the resulting translation of that model into an APIC application profile (2), and lastly the load balancer managed service supplying L4-L7 features, displayed in the APIC service graph (3).

Maybe the best part of all this – no APIC API coding for the networking administrator (Cisco CloudCenter has all of that built in already)! All objects are managed throughout the application lifecycle and are ultimately cleaned up upon application termination.

[Figure: the Cisco CloudCenter application profile (1), the translated APIC application profile (2), and the load balancer managed service in the APIC service graph (3)]

Cisco continues to innovate in the SDN market and address customer needs. Whichever mode you need, and whether your deployment is greenfield or brownfield, we highly encourage you to get on the SDN journey with us.

In closing, I want to extend my thanks to Insieme’s L4-L7 engineering experts Sameer Merchant and Ahmed Desouki for their insightful conversations with me on this topic. I also extend my appreciation to Zack Kielich of the CloudCenter team for sharing what’s new and exciting on the ACI-CloudCenter front.

Related Links

www.cisco.com/go/aci

www.cisco.com/go/cloudcenter


Developer Productivity, PaaS, Compliance, and Policy Driven Infrastructure

– June 20, 2016 – 0 Comments

If you are heading up an application development team or running an IT organization and looking at your developers’ initial steps toward cloud native applications, I want to share some insights gained from experienced PaaS customers. For me, it is a view of the path many of you may be heading down, along with some best practices that can optimize Dev and Ops together.

Apprenda, one of our ACI ecosystem partners, and Cisco recently hosted a series of presentations we called PaaS Days, about the Application-Centric Enterprise, to share how private PaaS and policy-driven automation together can address real-world problems such as application time-to-market, data center security, and corporate compliance. I want to share some insights I gained there about different application strategies for cloud. Depending on a company’s cloud maturity level and whether its management is centralized or not, the application strategy can take on a project-oriented or an organization-level scope.

The primary value drivers for an application strategy scoped at the project level were to address specific application concerns quickly, such as

  • scalability and availability, or
  • the need to use cloud native patterns not yet supported by IT.

In this case, developers are adopting technology independent of IT and are driven by immediate business needs.

With an application strategy scoped at the organizational level, the primary value drivers are

  • to support the business need to rapidly deliver new software,
  • standardize the architecture for the organization’s applications, and
  • to provide developers a service that parallels a pure public cloud offering.

For a business transforming itself into a digital enterprise, the composition of the application portfolio changes over time. The percentage of cloud native applications expands, but traditional applications will still exist to sustain key systems of record. The new systems of engagement built on cloud native applications are expected to deliver greater customer value, drive revenue growth, and contribute to improved bottom-line results for the enterprise.

Developer Productivity and Compliance

If application developers take a project-oriented approach, they may simply focus on a cloud native platform for their one project. When there are many projects, developer efficiency improves by using a Platform-as-a-Service approach.


When this approach can support cloud native applications AND existing applications, value delivery by the application portfolio can be better optimized.  IT leadership will want to see shared efficiencies and controls to ensure scale, performance, and compliance with security mandates as the value associated with the cloud native applications expands to real significance and strategic importance.

A simple example of the value and synergy gained by integrating the cloud stack from the PaaS layer down to the infrastructure is security policy for customer data in cloud native applications. Tenant isolation and whitelist policies can be specified at the PaaS layer for the cloud native application and automatically applied by Cisco’s Application Centric Infrastructure. See my blog ACI Policy Enables Secure PaaS and More at CLEUR for more details.

If the application platform strategy supports a common policy framework based upon the ACI policy model, the enterprise gets the operational simplicity, security, and agility delivered by ACI. The developer team benefits from speed, efficiency, and secure, policy-driven application deployment, using Apprenda PaaS for example. The operations team benefits from automated workflows linking application and infrastructure control and monitoring. And the end result is good for scaling the number of applications and delivering greater customer value using an open, agile, and secure architecture for IT.

To Learn More:

How Cisco is Using Platform as a Service (PaaS) delivery and ACI

Cisco ACI and Apprenda: Today’s Most Secure and Advanced Enterprise Hybrid Platform-as-a-Service Solution Overview

Three Ways Apprenda Puts the “A” in Cisco ACI Blog

Use Cases for Apprenda with Cisco ACI White Paper


Visibility Into Tetration Analytics, Part 2

– June 17, 2016 – 0 Comments

This is the second of a two-part series on Tetration Analytics, a platform designed to help customers gain complete visibility across their data centers in real time. Part 1 covered challenges, an overview of the solution, and its components. Today, I’ll cover use cases, benefits, and additional resources where you can get more details.

Use Cases

There are 5 key use cases I want to call out. These will help illustrate how Tetration Analytics provides such pervasive visibility and will help clarify why we call it a Data Center Time Machine.


Application Insights: I spent quite a bit of time covering the problems associated with this use case in the Challenges section of yesterday’s blog. The bottom line is that most folks simply don’t have visibility into their infrastructure. They don’t know all the apps on their network, and even if they do, they don’t know where or how those apps are communicating, or all the components a given app relies on to function properly. Tetration Analytics provides application-behavior-based mapping of processes and flows built on unsupervised machine learning (i.e., it figures stuff out so you don’t have to). It collects all of the flows, North-South as well as East-West, and it is smart enough to map and group your applications autonomously. You can also intervene and teach Tetration Analytics new groupings if you have unique circumstances. As it maps the application components and all network traffic between them, the result is simplified application operations, migration, and disaster recovery planning. It also allows you to convert this information into policies for ACI.

Policy Simulation & Impact Assessment: When we call Tetration Analytics a time machine for your DC, the implication is that you can look at the past, present, and future. Here is an example of how you can look into the future by simulating policy. With this use case, a user can essentially do an impact analysis using historical or real-time data – without affecting production traffic. This lets you see how new policies would affect actual traffic flowing through the network. You can also assess which flows would be classified as compliant, noncompliant, or dropped. It lets you, for example, simulate a whitelist policy and assess its impact before applying it in the production network. Seeing the impact of a change before you make it clearly has huge benefits and can keep you out of trouble.

Automated Whitelist Policy Generation: I’m guessing you come from a blacklist world, i.e., any source can talk to any destination by default, unless you explicitly deny communication – through an ACL, for example. With whitelist policy, it’s just the opposite: nothing talks by default, unless you explicitly allow it. This is a beautiful thing, because it reduces your attack surface – exploits are kept from propagating across applications, tenants, and data. This has obvious benefits in terms of security, but also compliance, since it is basically self-documenting. That means no more scrambling to collect ACLs for an audit, since compliance can be validated quickly by comparing actual traffic flows to the whitelist policies. As compelling as the whitelist policy model is, it can also be a challenge to move to if you don’t have visibility into all your apps, their communications, and their dependencies. Tetration can provide an automated whitelist policy that can be exported and deployed within your infrastructure for a true zero-trust model.
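
To make the compliance-validation idea concrete, here is a minimal sketch that checks observed flows against an exported whitelist and flags anything not explicitly allowed. The (source group, destination group, port) flow representation is invented for illustration.

```python
# A minimal sketch of whitelist-based compliance validation: compare
# observed flows against an exported whitelist and flag anything not
# explicitly allowed. The flow format is invented for illustration.
whitelist = {
    ("web", "app", 8080),   # (source group, destination group, port)
    ("app", "db", 5432),
}

observed_flows = [
    ("web", "app", 8080),
    ("app", "db", 5432),
    ("web", "db", 5432),    # bypasses the app tier -> should be flagged
]

for flow in observed_flows:
    status = "compliant" if flow in whitelist else "NONCOMPLIANT"
    print(flow, "->", status)
```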

Forensics: Tetration Analytics collects and stores all data flows, allowing you to search them when, where, and how you want. This significantly reduces the time needed to investigate and solve problems. You can look at things in real time or in historical views of what happened in the past. Today, I’m guessing that when you have a problem, you instrument the area (enable a SPAN port, use a tap, bust out a sniffer, or whatever) and see what you can find. There is a whole set of challenges with that approach, but suffice it to say they go away when Tetration Analytics is everywhere, constantly watching everything, allowing you to replay whatever you need, whenever you need it.

Policy Compliance: I mentioned this earlier, but in short, Tetration Analytics documents the policies in place and can compare the traffic flows against them, flagging exceptions and providing remediation.

Benefits

Tetration Analytics provides a multitude of IT and Business benefits. Benefits were covered here, and I’ll summarize a few more of them below:

IT Benefits

  • Make informed operational decisions driving intelligent changes with predictable outcomes (e.g. validate a change before it’s executed by understanding the change’s impact on applications)
  • Validate that policy changes have actually been applied and taken full effect
  • Bring greater reliability to data center operations with complete knowledge of interactions and dependencies in the data center
  • Effectively identify application behavior deviation and better manage network policy compliance
  • Long-term data retention supports forensics and analysis with an easy-to-use interface

Business Benefits

The benefits above translate into superior visibility, which leads to a more secure environment as well as a more agile and highly available infrastructure. This means the business can better guard against brand damage resulting from security breaches, better avoid unplanned outages, and move with more speed and confidence.

Resources

There is a lot of Tetration Analytics information in many different forms. Here is a quick overview to help you sift through some of it more efficiently:

  • 2 minute overview video – what Tetration Analytics is
  • 3 ½ minute overview animation – how Tetration Analytics works
  • Analyst report from IDC
  • Profile of Cisco IT’s experience with Tetration Analytics
  • Technical white papers
  • Data sheet
  • Nexus 9000 hardware sensors at-a-glance
  • A bunch more content at this web page…I’d especially recommend scrolling down to “I need to…” and checking out the whiteboard videos there

Thanks for checking out the blog. I hope you can see why we are so excited about the ways Tetration Analytics will help our customers gain pervasive visibility across their Data Centers.

Image source: Pixabay


Visibility Into Tetration Analytics, Part 1

– June 16, 2016 – 0 Comments

Yesterday Cisco announced Tetration Analytics, a platform designed to help customers gain complete visibility across their data centers in real time. These posts, Data Center Visibility on a Whole New Scale and A Limitless Time Machine For Your Data Center, provide context and a high-level overview of the new platform. This post will provide more insight into the problems Tetration Analytics solves, what it is, and how it works. It is the first of a two-part series.

Challenges

Current tools don’t comprehensively address problems like defining app communication and dependencies, providing the requisite info needed to move to a zero-trust model, or assessing real-time behavior deviation. Nor do they provide complete visibility. Identifying what apps are in the data center, as well as understanding what each of them depends on and talks to, is critical but often difficult.

These things are important to understand if you are trying to move apps from an existing environment to a new one – whether that be a private cloud (e.g., ACI), a public cloud, a DR site, or a new data center. They are equally important as you try to build more secure environments and reduce the attack surface.

One customer gave the example of taking down an app to move it without clearly understanding the literally dozens of other apps depending on it. Some broke. This happened even though a small army of people sat in multiple meetings discussing the planned outage. They still missed things, which resulted in unplanned outages.

I spoke with the CTO of a very large organization who told me they spent several years working on understanding these types of things and got maybe 60% of the way through their DCs. At that point, most of what had been collected was invalid anyhow.

In another customer conversation, I learned they had spent $30 million on a major DC move. About $6 million of the $30 million went to analyzing and trying to understand what was there so they could move it.

These are big problems. This opaque visibility and dearth of cohesive tools often results in a sense of ‘crawling in the dark, looking for answers’ (from what I deem to be the DC manager’s anthem).

Why? Let’s consider a few key reasons:

  • Insufficient granularity of realtime telemetry data collected at scale. Existing tools don’t have the ability to see every packet and every flow across the DC infrastructure. Application behaviors are complex and dynamic, resulting in the need for pervasive visibility. However, if you sample, you’re going to miss things. If you don’t sample, you have too much data to get through at today’s DC speeds.
  • Lack of ability to analyze data in realtime. Most of today’s tools do not have the ability to analyze large volumes of data in real time and address operational issues comprehensively. As a result, administrators cannot respond to issues in real time and are forced to interpret or project (that’s a polite way of saying “guess”) about relationships, leading to costly and time consuming errors.
  • Today’s tools cost too much: The gaps in today’s capabilities cost excessive amounts of time, money, and lost opportunity. Some customers spend months or even years trying to identify what apps they have, how they’re related, and what their dependencies are…often with marginal results.

Solution Overview

Cisco Tetration Analytics is designed to address these and other challenges through rich traffic telemetry collection and by performing advanced analytics at data center scale. The platform uses an algorithmic approach, including unsupervised machine-learning techniques and behavioral analysis, to provide a turnkey solution.

The words in the paragraph above, while accurate, are, hmm, a bit foreign sounding to somebody like me who was brought up with concepts like subnet masks, Area 0, route redistribution, and the like. Or maybe the words just have too many syllables for me. In any case, let’s unpack what they mean below.

Components

Tetration comprises three fundamental elements:

  • Data Collection
  • Analytics Engine
  • User Access and Visualization


Data Collection

Data is collected with sensors, of which there are basically two types:

  • Software or Host sensors: These can be installed on any end-host server (virtualized or bare metal).
  • Hardware sensors: These are embedded in Cisco Nexus 92160YC-X, Cisco Nexus 93180YC-EX and Cisco Nexus 93108TC-EX Switches.

Both sensor types reside outside the data path and do not affect application performance. The software sensor uses an average of 0.5% CPU utilization, based on our current experience. The sensor is also configurable, so you can limit its maximum CPU utilization.

The hardware sensor adds less than 1% of bandwidth overhead and does not impact the switch CPU at all.

Sensors do not process any information from payloads, and no sampling is performed; sensors are designed to monitor every packet and every flow. In addition to the sensors, configuration information can be collected from third-party sources, such as load balancers and DNS server mappings.

Analytics Engine:

Data from the sensors is sent to the Tetration Analytics platform, the brain that performs all the analysis. This UCS-based big data platform processes the information from the sensors and uses unsupervised machine learning, behavior analysis, and intelligent algorithms to provide a turnkey experience for the use cases we’ll discuss tomorrow.

This means that the platform listens and learns what is out there, then identifies who is talking to whom, when, where, and for how long. It then builds an understanding of how all these elements behave. Once it has a baseline for their behavior, much can be done: replay past events like a DVR for your DC; alert you to deviations from normal behavior; tell you what policies will achieve the objective you want; predict the impact of changing a policy; and much more – all without the need for fancy data scientists to manage heavy-duty big data stuff.

User Access, or Visualization:

Tetration Analytics translates all of this data into useful information through an easy-to-navigate web GUI and REST APIs. It also provides a notification interface that northbound systems can subscribe to in order to receive notifications about traffic flows, policy compliance, and more.
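
As a purely hypothetical illustration of what a northbound consumer of such an API might look like – the endpoint path, parameters, and response shape below are invented for illustration, not the actual Tetration interface – a consumer could be as simple as:

```python
# A purely hypothetical sketch of a northbound consumer of such a REST API.
# The endpoint path, query parameters, and response shape are invented for
# illustration and are not the actual Tetration Analytics interface.
import requests

resp = requests.get(
    "https://tetration.example.com/api/flows",     # hypothetical endpoint
    params={"src": "10.1.1.0/24", "last": "24h"},  # hypothetical filters
    headers={"Authorization": "Bearer <token>"},
)
resp.raise_for_status()
for flow in resp.json():                           # hypothetical response list
    print(flow)
```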

A number of key partnerships will leverage the APIs, complementing the overall functionality of Tetration Analytics and adding value for our joint customers. For more information on these partnerships – who they are and what we’re doing together – please see these quotes from our partners.

Tomorrow, in part 2 of this blog, we’ll cover Use Cases and Benefits, as well as provide additional resources.


Tetration Analytics: A Limitless Time Machine For Your Data Center

– June 15, 2016 – 2 Comments

In the film Limitless, the main character Eddie finds himself able to learn and analyze information at a superhuman rate. He temporarily has the ability to instantly and meaningfully cross-correlate all of the previously forgotten experiences from his past (1) and assess multiple scenarios in the future. He does this simply by taking a pill.

I don’t have a pill for you, and I’m not going to claim any product can make you Limitless. However, I will say Cisco Tetration Analytics comes closer than anything in the industry to delivering similar capabilities!

What Is Tetration Analytics?

“You know how they say we can only access 20 percent of our brain?  This lets you access all of it.” (2)


Cisco Tetration Analytics provides pervasive and unprecedented visibility across the data center, via a mix of network/hardware sensors that monitor every single packet at line rate and server/software sensors with very low overhead (<1% CPU). The sensors work with an analytics engine that operates in real time, presenting actionable insights with easy-to-understand visuals. Additionally, it provides application dependencies, automated whitelist policy recommendations, policy impact analysis, detection of policy deviation, and network flow forensics. That’s a mouthful, but the essence of it is that Tetration Analytics is as close as you’re going to get to becoming Limitless in your data center.

What Problems Does Tetration Analytics Address?

“I was blind but now I see.” (2)


You know the trends, so I won’t babble incessantly about them here. The bottom line is that running a data center is increasingly challenging, with tons of apps, as well as dynamic traffic patterns, workloads, and consumption models… all of which adds up to seemingly infinite complexity. That complexity leads to numerous challenges:

  • You want to migrate applications, but you can’t. This may mean moving applications from your local data center to a public cloud, moving from one data center to another, moving from a traditional network to ACI or some other programmable infrastructure, or setting up a disaster recovery site. However, without visibility into your applications, the dependencies between them, and the traffic flows associated with them, you cannot do this effectively, with precision or speed. We’ve talked with many customers who’ve spent enormous amounts of time and money, and endured seemingly endless frustration, trying to understand how apps communicate and depend on one another.
  • You want a zero-trust model, but lack the information and resources to implement or maintain it. You want to migrate from blacklist to whitelist security to shrink the attack surface. In a traditional blacklist model, everything can talk with everything else by default, and we create Access Control Lists (ACLs) or firewall rules that deny exceptions. However, as these ACLs have grown, sometimes to thousands of lines, it has become nearly impossible to maintain them accurately or with any semblance of agility. Gut instinct and empirical data both validate that A LOT of delays and outages result from configuration errors. (3) Admins are left wondering: if they delete something, will it create a hole, and if they add something, what might break? With a whitelist policy, nothing talks until it is explicitly allowed, which is more secure, prescriptive, and accurate.
  • You find it’s impossible to know exactly what’s happening on – or even what’s in – your infrastructure, because you don’t have complete visibility into traffic flows or application behaviors. This results in operational problems and security challenges. This lack of visibility is not unlike crawling around in the dark. About a dozen or so years ago there was a band I really liked called Hoobastank. Yeah, I know, strange name. Anyhow, they had a song that quite accurately reflected the dilemma most folks running a data center have today. It was perhaps – unbeknownst to them – the data center manager’s anthem. The song was called ‘Crawling In The Dark’ and said:

Show me what it’s for

Make me understand it

I’ve been crawling in the dark

Looking for the answer

Is there something more

Than what I’ve been handed?

Yep, there is more than what you’ve been handed – something that will provide answers to these problems and help you understand. But it’s not a pill. It’s a time machine for your data center that lets you easily replay the past, reveal the present, and plan for the future. It’s called Tetration Analytics.

How Does Tetration Analytics Address These Problems & How Does It Help You?

“I see every scenario…It puts me 50 moves ahead of you.” (2)


In a perfect world, we would be able to rewind what has happened in the past, view what is happening in the present in real time, and model what could happen in the future. H.G. Wells foretold this capability back in 1895 when he wrote The Time Machine. Granted, he probably wasn’t thinking about data centers when he wrote it, but if he had, the storyline might have gone something like this:

What if you had complete visibility into everything, in real-time?

What if you had a time machine for your data center?

You could look at the past and replay events in real time.

You could plan for the future, and see the consequences of a new policy before you commit to it.

It’s not a ‘what if.’ It’s called Cisco Tetration Analytics, and it’s unlike anything you’ve ever seen.

You can see this storyline brought to life in a very cool video. It’s only 2 minutes – check it out!

Tetration Analytics is able to address the problems above and provide these new capabilities because it allows you to do things you previously could not. You can:

  • Search billions of flows in less than a second (every packet, every flow, every speed).
  • Do real-time and historical policy analysis, essentially replaying what happened in the network at any time.
  • Continuously monitor application behavior and quickly identify anomalous patterns for compliance exceptions.
  • Validate a change before it’s executed by showing the change’s impact on applications – meaning you can get predictable outcomes.
  • After the change is implemented, you can then validate that policy changes have actually been applied and taken full effect.
  • Get complete knowledge of interactions and dependencies in the data center, bringing greater reliability to data center operations.

So, as I said up front, I don’t have a pill for you. However, I can offer you Tetration Analytics. And though I can’t say your powers will be Limitless, I can say that Tetration Analytics will give you the best semblance of a Time Machine for your Data Center the industry has ever seen.


References:

  1. https://en.wikipedia.org/wiki/Limitless
  2. Quote from the film Limitless
  3. A 2015 survey conducted by ESG showed that:
    • 74 percent of respondents took days or weeks to implement security device updates from request all the way through to production implementation.
    • 43 percent of respondents reported a configuration error over the last 12 months that led to a security vulnerability, performance problem, or service interruption.
    • Of those, 87 percent reported multiple service outages over the last 12 months due to technical error with changing or configuring networks.

Image sources: Pixabay


2 Comments

    following on DD’s cosmos analogies based tetration analytics blog, Craig brings another exciting blog on this topic, and this time with a time machine based analogy – reads awesome

    Great write up & love the limitless reference.

    Dash Thompson
    Systems Engineer
    Cisco Systems
    CCIE #50903

Accelerate Hybrid IT Deployments with Cisco Data Center Innovations

– June 14, 2016 – 0 Comments

Cloud is now a fundamental part of the IT landscape. Enterprise and IT executives deal with large numbers of applications that are being versioned faster than ever before, and IT is increasingly challenged to support line-of-business needs.

To learn how to simplify application lifecycle management across multiple data centers, private clouds, managed private clouds, and public clouds, watch this video, “Accelerate Hybrid IT Deployments with Cisco Data Center Innovations,” with Carlos Pereira and Dave Cope.


Learn how application relevancy transforms data center automation and accelerates the adoption of hybrid cloud environments. With this information, you can easily model an application once and deploy it across any combination of data center and private or public clouds. You will also see how to deliver hybrid IT as a service, so users get on-demand experiences while IT gets governance, visibility, and control.

Trimble, a leading customer, talks about how it deployed the Cisco CloudCenter solution with ACI to deliver a powerful hybrid IT solution.

To learn more about hybrid IT and policy driven infrastructure:

Solution Brief – CloudCenter with Cisco ACI: Experience the Full Power of Software-Defined Networking

White Paper – Cisco CloudCenter Solution with Cisco ACI: Deployment Topologies and Requirements

Blog – Hybrid Cloud Power Trio

Blog – The Data Center Has Changed Forever
