Connected Analytics: Learn to Live on the Edge – and Love It!

Not surprisingly, as a networking company Cisco frequently publishes predictions on the growth of Internet traffic. No bragging intended, but the forecasts are typically pretty accurate. In a 2012 report we predicted that by 2017 there would be 2.5 devices and related connections for every person on earth, and 5 devices and related connections for every Internet user. In the same report, we also predicted that this burst in hyperconnectivity – including the machine-to-machine connections that are increasingly prevalent with the growth of the Internet of Things (IoT) – would create more global network traffic in 2017 alone than in all prior “Internet years” combined.

How correct were our predictions? You don’t have to wait until 2017 for an answer. Welcome to the early arrival of the future of networked communications – a future where the hyper-distribution of information is driving new business demands, and where the old rules of data management and analytics no longer apply. Data is no longer passive. Central stores of stale information aren’t sufficient. Analytics can’t be an afterthought. The new rules require that you live your business daily on the edge of your network, where vital customer and market data is created. And you need to be prepared to respond to what you learn immediately. Are you ready to live on the edge?

The Future is Now . . . Like it or Not

Pervasive connectivity and ubiquitous cloud services have reset user expectations for all types of products and services. An ever-wider variety of connected endpoints, combined with mobile and cloud service delivery, expands the kinds of data generated by and about users, the devices they use, and the processes that connect them. Data may come from various sources – operations, infrastructure, sensors, and more. Machine intelligence will get better and better, replacing human reasoning in some cases. And, like humans, machines will develop deeper insights through continuous learning over time. The good news is that increasingly intelligent machines will free humans for even bigger thinking – and the process will keep repeating itself, machines and humans cooperating for a more intelligent whole.

But the network’s edge ultimately belongs to the end-user. Consumers are well positioned to define and demand a technology experience that meets their specific requirements. Enterprises undergoing digital transformation understand this. Using IT automation, these companies are moving intelligence and analytics to the edge of the network to understand how to benefit from this new perspective.  Put simply, analysis is moving to where the data is generated for instant business insights.

The list of challenges for companies coping with the nature and the speed of digital transformation is a long one. Here are a few of the most critical:

  • The variety of data on the network increases with every new application used
  • High velocity, valuable information from market data, mobile, sensors, clickstream, transactions and other sources requires a new approach to data management
  • Almost universal connectivity has reset user expectations for all types of services
  • Data insights are often perishable and need to be acted on immediately
  • Competitive pressures and increasing customer expectations require that businesses anticipate customer needs, react instantly, and make decisions in real-time
(Graphic: car shopper insights)

Shape the Edge to Your Requirements

Enterprises of all kinds are responding to these challenges in innovative ways to gain competitive advantage. One example is retailers, which I profiled in an earlier blog on the future of shopping. Merchants understand that the longer a shopper remains in a store the more likely the prospect is to purchase. So, if a retailer can increase a shopper’s “dwell time,” it is more likely to stimulate a purchase.  We’re seeing retailers do this today as they measure where, how and why buyers make decisions on the path to purchase starting at the network’s edge. Through customized applications that permit the retailer to analyze real-time customer engagement with products or in-store displays, the retailer gains immediate insights that let it customize a promotional offer by individual and then push the offer instantly to the consumer’s device. This sort of personalized interaction also creates a better customer experience.

From a service provider’s perspective, knowing the habits of its mobile customers can help it improve service delivery, lower costs and enhance customer loyalty. Again, this knowledge starts at the edge of the network, by analyzing continuous feedback on the usage habits of mobile subscribers. For example, a service provider can identify unique and new clients, analyze usage by day, week or month, gather active session information to spot network usage patterns or manage promotional programs, distinguish authenticated from unauthenticated associations to identify potential subscribers, or capture total data usage to pinpoint network anomalies or usage spikes. These and other edge measurements can then be further analyzed for trends. Automation enables the analysis; the analysis, in turn, drives fast decision-making, which leads to concrete business outcomes.

The Way Forward . . .

If you believe living on the edge is vital to your business, it’s important to have a strategic framework in which to manage your digital transition. First, think of your analytics needs in three parts:  1. Real-time analysis; 2. Data management; 3. Flexibility of use. Then, demand that the analytics solution you choose addresses the requirements for each part as I describe below (a brief illustrative sketch of a continuous query follows these lists):

1. Real Time Analysis

  • Real-time trending
  • Dynamic dashboards
  • Predictive analytics integration
  • Continuous queries
  • Event generation

2. Data Management

  • Ability to combine information from network and applications
  • Seamlessly query live and historic data
  • Historic reporting framework

3. Flexibility

  • Analysis of complex queries from fact streams and dimensional data
  • North/south and east/west interfaces for customization
  • Multi-vendor extensibility
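
To make “continuous queries” and “event generation” concrete, here is a minimal, illustrative Python sketch of a streaming threshold rule evaluated where the data is generated. The window size, threshold, and event sink are hypothetical placeholders, not part of any Cisco product API.

from collections import deque
from statistics import mean

WINDOW = 5        # samples in the sliding window (kept small for the demo)
THRESHOLD = 0.8   # emit an event when average utilization exceeds 80%

window = deque(maxlen=WINDOW)

def emit_event(event):
    # In a real deployment this would feed a dashboard or message bus.
    print("EVENT:", event)

def on_sample(link_utilization):
    """Continuous query: maintain a sliding window over the stream and
    generate an event the moment the windowed average crosses the threshold."""
    window.append(link_utilization)
    if len(window) == WINDOW and mean(window) > THRESHOLD:
        emit_event({"type": "utilization-spike", "avg": round(mean(window), 3)})

# Feed simulated samples as they arrive at the edge.
for sample in [0.6, 0.7, 0.9, 0.95, 0.99, 0.99]:
    on_sample(sample)

The same pattern scales down to a branch router or IoT gateway: keep only a small rolling state, act on insights while they are still fresh, and forward summaries rather than raw data.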

Living on the edge of your network doesn’t have to be intimidating. In fact, you’ll come to like the speed at which you’ll find new business insights. Cisco can help in your transformation to a digital business with automation and analytics at its core. I plan to share more on this topic at the Cisco Data and Analytics Conference  on October 20-22 in Chicago. I hope you can join me at that time.

Meanwhile, I’m interested to hear how you feel about the importance of managing and analyzing data at the edge of your network. What are the issues and opportunities that you see?

Please feel free to comment, share and connect with us @CiscoEnterprise, on Facebook and LinkedIn, and in the Enterprise Networks Community.



What Du Can Do With ACI

It seems people sometimes have this view of SDN as addressing rather esoteric use cases and situations. However, the reality is that while there are instances of ‘out there stuff’ happening, there are many situations where we see customers leverage the technology to address pretty straightforward issues. And these issues are often similar across different business/vertical/customer types.

Aftab Rasool is Senior Manager, Data Center Infrastructure and Service Design Operations for Du.   I recently had the chance to talk with him about Cisco’s flagship SDN solution – Application Centric Infrastructure (ACI) – and Du’s experience with it. I found there were many instances of Du using ACI to simply make traditional challenges easier to deal with.

Du is an Information & Communications Technology (ICT) company based in Dubai. They offer a broad range of services to both consumer and business markets, including triple play to the home, mobile voice/data, and hosting. The nature of their business means the data center, and thus the data center network, is critical to their success. They need a solution that effectively handles the challenges of both deployment and operations… and that’s where ACI comes in.

I’ll quickly use the metaphor of driving to summarize the challenges Aftab covers in the video. He addresses issues that are both ‘in the rear view mirror’ as well as ‘in the windshield’ – with both being generalizable to lots of other customers. What I mean is that there are issues from the past that, though they are largely behind the car and visible in the mirror, still impact the driving experience. There are also issues on the horizon that are visible through the windshield, but are just now starting to come into focus and have effect.

Rear view mirror issues – These are concepts as basic as scalability constraints associated with spanning tree, or sub-optimal use of bandwidth, also due to spanning tree limitations. These issues are addressed with ACI, as there is no spanning tree in the fabric, and the use of Equal Cost Multi-Pathing (ECMP) allows use of all links. Additionally, BiDi optics allow use of the existing 10G fiber plant for 40G upgrades, obviating the expense and hassle of fiber upgrades. As a result, the ACI fabric, based on Nexus 9000s, provides all the performance and capacity Du needs.

Windshield issues – These stem from the business’s need for speed, which is directly at odds with the complexity of most data centers. The need for speed through automation is becoming more and more critical, as is simplifying the operating environment, particularly as the business scales. Within this context, Aftab mentioned both provisioning and troubleshooting.

Provisioning: Without ACI, provisioning involved getting into each individual switch and making the requisite changes – configuring VLANs, L3, and so on. It also required going into L4-7 services devices to ensure they were configured properly and worked in concert with the L2 and L3 configurations. This device-by-device configuration was not only time consuming but also created the potential for human error. With ACI, these and other types of activities are automated and happen with a couple of clicks.
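
As an illustration of the kind of provisioning automation Aftab describes, the sketch below creates a tenant, application profile, and EPG through the APIC REST API instead of touching switches one by one. The hostname, credentials, and object names are placeholders, certificate checking and error handling are omitted, and it should be read as a minimal sketch rather than a production script.

import requests

APIC = "https://apic.example.com"   # placeholder APIC address
LOGIN = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}  # placeholders

session = requests.Session()
# Authenticate once; the APIC returns a session cookie reused on later calls.
session.post(f"{APIC}/api/aaaLogin.json", json=LOGIN, verify=False)

# A single POST creates the tenant, application profile, and EPG together.
tenant = {
    "fvTenant": {
        "attributes": {"name": "Demo-Tenant"},
        "children": [{
            "fvAp": {
                "attributes": {"name": "Demo-App"},
                "children": [{"fvAEPg": {"attributes": {"name": "Web-EPG"}}}]
            }
        }]
    }
}
session.post(f"{APIC}/api/mo/uni.json", json=tenant, verify=False)

The point is not the specific objects but the operational model: one policy push to the controller replaces a chain of per-device CLI sessions.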

Troubleshooting: Before ACI, troubleshooting was complicated and time consuming, in part because they had to trawl through each switch, examining link-by-link characteristics to check for errors. With ACI, health scores make it easy and fast to pinpoint where the problem is.

Please take a few minutes to check out what Aftab has to say about these, and other aspects of his experience with ACI at Du.

 


Unleash Your Business Analysts with Cisco Data Preparation

As the leader of Cisco’s Data Virtualization and Analytics Business Units, it is my pleasure to announce Cisco Data Preparation, a new big data and analytics offering for business analysts and IT.

What is Cisco Data Preparation?

Driven by the business’s accelerating demand for analytics, Cisco Data Preparation (Data Prep) makes it easy for non-technical business analysts to gather, explore, cleanse, combine and enrich the data that fuels those analytics.

Primarily designed as a self-service application for business analysts, Data Prep is also a valuable new capability for IT data developers and even data scientists, helping these teams collaborate to achieve the following benefits:

  • Faster Insights: New data sets available in minutes, not weeks.
  • More Comprehensive Insights: Gain advantage from all your data sources.
  • Better Business Outcomes at Scale: Supports hundreds of data preparation projects at big data scale.
  • Higher Productivity, with Greater Governance: Both Business and IT gain from stronger collaboration.

Why will Business Analysts like Cisco Data Preparation?

Business analysts can use Data Prep to address the significant data integration challenges they face when preparing analytic data sets, using a self-service approach.

  • Every analytic project is different, making every data exploration effort unique. Cisco Data Prep’s Excel-like interface and machine learning let analysts explore data freely.
  • Data is messy and everywhere. As a result, analysts spend as much as 80% of their time preparing data before analysis can begin. Cisco Data Prep dramatically reduces the time required to prepare data.
  • Too few data scientists and too-long IT backlogs put the onus on the business to adopt self-service. Cisco Data Prep empowers business analysts to do this work themselves.

Why will IT like Cisco Data Preparation?

IT can use Data Prep to work in concert with the business to intelligently balance self-service needs with governance constraints, while optimizing infrastructure.

  • Many requirements are short-lived, in contrast with IT’s industrial-grade orientation. Cisco Data Prep helps IT and the business meet exploratory data needs with the right level of investment and, when needed, even provides working prototypes that IT can quickly reengineer.
  • Independent, ungoverned data prep efforts can lead to duplication of effort and inconsistently transformed data sets of unclear origin, resulting in inaccurate analysis and potentially bad business results. Cisco Data Prep’s built-in governance and data set sharing increase trust.
  • Rogue data preparation activity in personal sandboxes and myriad tools prevents IT from delivering scalable, secure infrastructure. Cisco Data Prep’s ability to scale massively allows IT to support thousands of users and multiple terabytes of data on a common, cost-effective infrastructure.

A Complete Data Preparation Solution, Only from Cisco

Cisco Data Preparation is a complete software, hardware and services solution that simplifies adoption and accelerates benefits.

  • Leveraging an easy-to-learn, easy-to-use, Excel-like interface and powerful machine intelligence algorithms from Cisco partner Paxata, Data Prep removes barriers to adoption and elevates business analysts’ skills.
  • Two-way integration with Cisco Data Virtualization helps leverage prior IT investments and closes the loop between the business and IT.
  • Data Prep’s massively scalable Hadoop- and Spark-based architecture ensures that Data Prep users won’t be constrained by the size of data sets or the complexity of analysis.


Plus, a complete set of Cisco- and partner-provided “Plan” and “Build” services ensures Data Prep implementation success.

Learn More About Cisco Data Preparation

There are lots of ways you can learn more about Data Prep. You can:

  • Join us at Strata+Hadoop World this week, September 29 through October 1, at the Javits Center in New York. Stop by Cisco Booth 425, where you can get a Data Prep demo from Cisco Sales Engineer Bill Kellett and meet Cisco Data and Analytics Director Bob Eve and Paxata Product VP Nenshad Bardoliwalla.


  • Join us at the 2015 Data and Analytics Conference, October 20-22, at the Hilton Chicago. Register now and join my breakout session, “Data Preparation for Self-Service Analytics.”
  • Review the Cisco Data Preparation data sheet to learn more about Data Prep functionality.
  • Talk to your Cisco or Cisco partner account manager to arrange a conversation with a Cisco Data Preparation product specialist.
  • And look for upcoming blogs relating Cisco Data Preparation with Cisco UCS, Cisco Data Virtualization, Advanced Services and more. Stay tuned!

 

Join the Conversation

Follow @CiscoDataVirt and @CiscoAnalytics, #CiscoDAC.

Learn More from My Colleagues

Check out the blogs of Mala Anand, Mike Flannagan, Bob Eve and Nicola Villa to learn more.



Valley Proteins Looks to Cisco for Network Improvements

No matter what type of IT department you run, if you have numerous plants in over 20 states ranging from New Mexico to New York, you need to have a properly built network infrastructure. Anything less and you’re flirting with disaster.

Due to the nature of its business—recycling restaurant grease and animal byproducts—the 25 processing and transfer facilities of Valley Proteins are often in remote locations, and because of that remoteness, network connectivity is often an issue.

When networks aren’t talking to one another, a host of different tech issues can follow. That can mean confusion and, ultimately, 25 different facilities pulling in 25 directions—all with different results.

Valley Proteins understood that it had a cohesion problem among its business divisions and decided to implement SAP as a way of getting a holistic view of customer profitability, supplier pricing and individual factory performance. But before the SAP solution could be put into place, the more than two dozen facilities needed to get under the same network umbrella. If that didn’t happen, the endeavor was going to fail.

Valley Proteins turned to Cisco to come up with a reliable, scalable and secure infrastructure.

Cisco went into Valley Proteins’ existing data centers and created an entirely new infrastructure by installing Cisco servers, switches, routers and firewalls. In addition, access points were placed at all of the facilities, allowing business data and operations to be centralized.

This new Cisco infrastructure has worked out very well for Valley Proteins and has allowed for:

  • Improved profitability
  • Real-time visibility
  • Modular scalability
  • Better customer service
  • Business continuity

These infrastructure benefits have impacted Valley Proteins for the better and have allowed day-to-day operations to run much more smoothly.

“We can make better business decisions without employing an army of analysts,” says Brad Wilton, Director of IT.

To learn more about this case study, point your browser here.

Please feel free to comment, share and connect with us @CiscoEnterprise, on Facebook and LinkedIn, and in the Enterprise Networks Community.



RISE on Nexus 7k switches with Cisco Prime NAM

Cisco Prime Network Analysis Module (NAM) has been integrated with the Nexus 7000/7700 Series using Cisco® Remote Integrated Services Engine (RISE) technology, providing a powerful story for data center integration. RISE with Prime NAM provides high-performance monitoring and packet analysis across multiple virtual device contexts (VDCs), along with switch interface statistics for all modules.

Cisco RISE is being used by a large number of customers to tightly integrate the Cisco Nexus series switches with the Cisco Prime NAM to provide VDC awareness and SPAN traffic across multiple VDCs without burning slots on the switch. RISE overcomes the limitation of applying SPAN configuration only in the VDC to which the management cable is attached by intelligently managing the movement of NAM data ports and SPAN configuration to other VDCs as needed. The integration includes the following main features:

  • NAM appliance acts as a module on Nexus switches
  • One NAM appliance can receive traffic from multiple Nexus VDCs without re-cabling
  • One NAM appliance can collect interface statistics for multiple VDCs
  • Dynamic vdc-aware SPAN configuration on Nexus switches using NAM GUI
  • Up to 4 NAM ports can be automatically assigned to Nexus VDCs using NAM GUI
  • Graph of per-interface ingress and egress statistics for multiple VDCs
  • Auto-discovery and bootstrap of NAM appliance from Nexus switch
  • Health monitoring of NAM appliance
  • Visibility to multiple VDCs from one NAM appliance with ongoing VDC configuration updates
  • Configurable timer intervals and VDC list for interface statistics collection
  • User-friendly error handling for SPAN creation/deletion/modification
  • Order of magnitude OPEX and CAPEX savings: reduction in configuration, simplified provisioning and data-path optimization


Figure 1. RISE Physical and logical topology

Deployment Modes

Cisco RISE supports attachment to the NAM appliance in the following modes:

Direct Attach mode with single NAM: The appliance has a management link that is directly attached to the Nexus switch. Up to 4 data links on the NAM can be attached to one or more VDCs on the Nexus switch to send SPAN traffic (Figure 2).


Figure 2. Direct Attach Mode with single NAM

Direct Attach mode with multiple NAMs: The appliance has a management link that is directly attached to the Nexus switch. Up to 4 data links on each NAM can be attached to one or more VDCs on the Nexus switch to send SPAN traffic (Figure 3).

 


Figure 3: Direct Attach mode with multiple NAMs

Indirect Attach mode with multiple NAMs: The appliance has a management link that is attached via an L2 network to the Nexus switch. Up to 4 data links on each NAM can be attached to one or more VDCs on the Nexus switch to send SPAN traffic (Figure 4).

Figure 4: Indirect Attach mode with multiple NAMs

Core Features

Cisco RISE with NAM provides the following key features, which enable traffic and performance analysis across all the VDCs on the Nexus switch without changing the wiring connections.

Dynamic VDC-aware SPAN Configuration

  • Configure SPAN sessions for up to 4 NAM data ports from the NAM GUI.
  • Create, edit, and delete SPAN sessions, and select destination and source ports for them.
  • SPAN sessions can be configured in other VDCs by selecting the VDC and data ports from the NAM GUI; the data port is automatically moved to the required VDC.
  • The SPAN configuration options available to N7K CLI users are also available from the NAM GUI using RISE.
  • Provides visibility into all VDCs from one NAM.

Multi-VDC Interface Statistics

  • Retrieve interface statistics for all VDCs on the N7K via RISE
  • Set short-term and long-term polling intervals for collecting interface statistics
  • Set the list of VDCs from which statistics need to be retrieved
  • View statistics on a per-interface basis as a graph or as data points

Main Benefits

  • Enhanced application availability via simplified provisioning and efficient manageability.
  • Data path optimization: ADC off-load, low latency policy engine.
  • Dynamic VDC-aware SPAN configuration: Create SPAN sessions on any VDC
  • Multi-VDC awareness: Deliver traffic and performance reports in multiple VDCs
  • Cisco RISE provides significant savings in capital expenditures (CapEx) and operating expenses (OpEx) through simplified provisioning and data-plane optimizations
  • Dramatic OpEx savings: Reduction in configuration time and ease of deployment
  • Dramatic CapEx savings: Reduced wiring, power, and rack-space needs
  • The solution provides enhanced business resiliency and stickiness to Cisco products.

Ordering Information

Cisco RISE is supported in Cisco NX-OS Software Release 7.x and requires the Enhanced Layer 2 Package license. Please contact nxos-rise@cisco.com if you have any questions.

References

http://cisco.com/go/rise

http://blogs.cisco.com/datacenter/rise

http://blogs.cisco.com/datacenter/rise-nam

 


Thinking Bigger! Cisco + IBM – Collaboration of giants brings industry-leading solution for big data analytics

In December 2014 we announced VersaStack, an integrated infrastructure reference solution for enterprise applications that combines technologies from Cisco and IBM. Further extending this partnership, today we are announcing support for IBM BigInsights for Apache Hadoop on the Cisco UCS Integrated Infrastructure for Big Data – an industry-leading platform widely adopted for enterprise big data application deployments. The joint solution encompasses disruptive innovations in Cisco UCS and the robust, industry-compatible Apache Hadoop distribution from IBM. It can be installed as a standalone Hadoop cluster with powerful analytical tools, or integrated into existing VersaStack deployments, which benefit from a common fabric and unified management capabilities, to deliver the deepest possible insight into your data and help you gain a sustainable competitive advantage.

We are also announcing the availability of a Cisco Validated Design (CVD) that provides step-by-step design guidelines, comprehensively tested and documented to help ensure faster, more reliable and predictable deployments at a lower total cost of ownership.

Highlights:

  • Combines Cisco UCS innovations, such as programmable infrastructure, with the best of open source software and the enterprise-grade capabilities of IBM BigInsights for Apache Hadoop
  • Designed and optimized for common use cases, pre-tested, pre-validated and fully documented by Cisco and IBM engineers to ensure dependable deployments that can scale from small to very large as workloads demand
  • Provides enterprises with extensive platform management and data visualization capabilities, and integrates big data with other information solutions to help enhance data manipulation and management tasks
  • Brings the power of SQL to Hadoop at greater performance and scale than ever before, accelerating data science and analytics with SQL – arguably the most beautiful programming language – and integrating with business applications that access data stored in HDFS and HBase via JDBC and ODBC (a brief sketch follows this list)
  • Deep technical expertise, global resources, and world-class support and services from Cisco, IBM and partners
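
Since the highlights call out JDBC and ODBC access, here is a minimal Python sketch that queries Hadoop-resident data over ODBC with pyodbc. The DSN, credentials, and table are assumptions made for illustration; consult the IBM BigInsights documentation for the actual driver name and connection string.

import pyodbc

# "BIGSQL" is a hypothetical ODBC data source configured for the cluster.
conn = pyodbc.connect("DSN=BIGSQL;UID=analyst;PWD=secret")
cursor = conn.cursor()

# Standard SQL against a table whose data lives in HDFS.
cursor.execute("SELECT product_id, SUM(quantity) FROM sales GROUP BY product_id")
for product_id, total in cursor.fetchall():
    print(product_id, total)

conn.close()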

This solution is built on Cisco UCS infrastructure, using Cisco UCS 6200 Series Fabric Interconnects and Cisco UCS C-Series Rack Servers optimized for IBM BigInsights for Apache Hadoop, with scalability to thousands of nodes via Cisco Nexus 9000 Series Switches:

(Diagram: IBM BigInsights on Cisco UCS)


For more information, please visit:

Follow me on Twitter: https://twitter.com/raghu_nambiar for real time updates.



iOS 9 – A Growth Hacking Opportunity Awaits

The Economics of Network Downtime

Infonetics Research recently released a study claiming that businesses in North America alone lose as much as $100 million a year due to network downtime. Let’s break that down into numbers you and I can relate to.

  • On average, businesses suffer from 14 hours (CA Technologies) to 87 hours (Gartner) of downtime per year.
  • A conservative estimate pegs the hourly cost of network downtime at $42,000 (Gartner).
  • The cost of unplanned downtime per minute is between $5,600 and $11,000 (Ponemon Institute).
  • MTTR (mean time to resolution) per outage, on average, is 200 minutes (IT Process Institute).

For a quick/rough calculation of your own potential revenue lost, use this equation provided by North American International Systems (NASI).

LOST REVENUE = (GR/TH) x I x H
Where:
GR = gross yearly revenue
TH = total yearly business hours
I = percentage impact
H = number of hours of outage
Service costs are rarely zero.
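
As a quick worked example with made-up numbers, the snippet below applies the formula to a business with $50 million in gross yearly revenue, 2,500 yearly business hours, a 25 percent impact, and a three-hour outage.

def lost_revenue(gross_yearly_revenue, total_yearly_hours, impact, outage_hours):
    """LOST REVENUE = (GR / TH) x I x H"""
    return (gross_yearly_revenue / total_yearly_hours) * impact * outage_hours

# Hypothetical inputs: $50M revenue, 2,500 business hours per year,
# 25% of revenue-generating activity impacted, 3-hour outage.
print(lost_revenue(50_000_000, 2_500, 0.25, 3))   # -> 15000.0

That is $15,000 lost in a single three-hour incident, before service and recovery costs are even counted.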

Most businesses associate network downtime with major events or security breaches, but that isn’t always the case. Their own employees can induce it too – and for many, that was exactly the case every September. This blog is not about a challenge, but about a growth hacking opportunity transformed from an IT challenge.

Come September, Apple Fans Rejoice.

Every year, Apple fans like me anticipate and rejoice in the new features and products Apple introduces at its annual keynote event. This year was no different with the iOS 9 release. However, upgrading five iOS devices (three iPhones, an iPad, and an iPad mini), each at 1.4 GB, meant my home Internet connection wasn’t the ideal choice. So, like many Apple users, I leveraged my corporate Wi-Fi to do the upgrade.

IT Apple Fans Rejoice, But Also Dread

The next business day, I brought in three iOS devices and started the upgrades simultaneously. I wasn’t the only one doing so. While waiting for the upgrades to finish, I watched several Apple key feature videos and did a quick surf on social media for the latest on #iOS9, which brought up these tweets.

(Embedded tweets about iOS 9 upgrades)

 

Hmm… one person with three devices, each consuming 1.4 GB of bandwidth per upgrade. Do the math. It doesn’t take much to overwhelm even a 100 Mbps line. Soon the iOS 9 beast (an influx of iOS 9 upgrades) could degrade application performance and potentially cause network downtime. As excited as IT could be about Apple’s new products and feature releases every September, I bet they also dread how the network will respond to the influx of traffic. Fortunately, I did not experience any application issues, nor was my corporate network down as a result. What’s our secret?

Superior App Experience

The secret is part of our Apple + Cisco partnership: our Application Experience solution, a.k.a. Cisco IWAN with Akamai Connect. With HTTP/S caching and application optimization technology, the iOS 9 image was cached locally in the branch. Instead of three redundant downloads of 1.4 GB of data, as in my scenario, there was one. (Did I mention my upgrade was lightning fast too?) As a result, network bandwidth was offloaded, and through Quality of Service (QoS) business-critical applications were prioritized for a superior user experience.

To see a throughput summary for this iOS 9 use case as well as graphical comparisons for many more, for example, retail and education, check out the presentation: Accelerating Applications – iOS 9 Use Case in the Cisco Enterprise Networks community.

iOS – A Growth Hacking Opportunity for One Retailer

iOS features and products bring benefits not only to consumers, but also to businesses. One world-renowned fashion brand outfitted many sales associates with iOS devices in its stores. At any given time – for example, at the point of sale or engagement with shoppers, or at the customer service desk – associates could access product catalogs via an iOS device and complete the sales transaction locally at the kiosk. For training purposes, associates would use their iOS devices to access Intranet videos over the WAN during off-peak times, maximizing both product knowledge and productivity. Now imagine the effect that 10-15 company-supplied iOS devices, plus all of the personal iOS devices, upgrading at once would have on the store’s network. During the week of the initial iOS 9 upgrade, this particular retailer was able to offload close to 5 TB of data from its stores’ network while improving the performance of iOS downloads and the application experience for all in-store apps (see Figure 1 below).

 


Figure 1. Retailer offloads 5 TB of data using Cisco IWAN with Akamai Connect

By embracing technology (i.e., iOS and Cisco IWAN with Akamai Connect) with a keen focus on growth rather than on an IT problem, this retailer innovated its sales floor activities, increased customer engagement, and improved associates’ productivity. The result: higher sales per square foot.

Embrace iOS and Growth Hack Your Business

Opportunity doesn’t always come knocking. Sometimes you have to seek it out or transform a challenge into an opportunity. For many IT teams, September is a month they dread, but as seen in our retailer example, it’s a chance to growth hack the sales floor (this time) and maybe a new business model (next time). Learn more about Application Experience with the resources below and go growth hack your business.

I’d love to hear your experience, so do share (@annaduong).



Cisco ACI – A Hardened Secure Platform With Native, Built-in Security

This blog has been developed in association with  Javed Asghar, Insieme Business Unit

The Cisco ACI platform consists of the Cisco APIC controller and Nexus 9000 Series switches connected in a spine/leaf Clos architecture. All management interfaces (REST API, web GUI and CLI) are authenticated in ACI using AAA services (LDAP, AD, RADIUS, TACACS+) and RBAC policies that map users to roles and domains.
The ACI fabric is inherently secure because it uses a zero-trust model and relies on many layers of security. Here are the highlights:

  • All devices attached to the ACI fabric use a HW-based secure keystore:
    – All certificates are unique, digitally signed and encrypted at manufacturing time
    – The Cisco APIC controllers use Trusted Platform Module (TPM) HW crypto modules
    – The Cisco Nexus 9000 series switches use ACT2 to store digitally signed certificates
  • During ACI fabric bring-up or while adding a new device to an existing ACI fabric, all devices are authenticated based on their digitally signed certificates and identity information.
  • Downloading and image bootup:
    – All fabric switch images are digitally signed using RSA-2048 bit private keys
    – When the image is loaded onto an ACI fabric device, the signed image must always be verified for its authenticity using HW certificates in the ACT2 keystore
    – Only once the verification is complete can the image be loaded onto the device (an illustrative sketch of this kind of signature check appears below)
  • The ACI fabric system architecture completely isolates the management VLAN, the infrastructure VLAN and all tenant data-plane traffic from one another. (The Cisco APIC communicates in-band on the infrastructure VLAN.)
  • The infrastructure VLAN traffic is fully isolated from all tenant (data-plane) traffic and management VLAN traffic.
  • All messaging on the infrastructure VLAN used for bring-up, image management, configuration, monitoring and operation is encrypted using TLS 1.2.
  • After a device is fully authenticated, the network admin inspects and approves the device into the ACI fabric.

These are the various layers of security built into ACI’s architecture to prevent rogue or tampered devices from gaining access to the ACI fabric.
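
To illustrate the general mechanism behind signed-image verification (not the actual NX-OS or ACT2 implementation), here is a small Python sketch that checks an RSA signature over an image file using the cryptography library. The padding and hash choices are illustrative assumptions.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def verify_image(image_path, signature_path, public_key_pem_path):
    """Return True only if the signature over the image verifies against
    the trusted public key; otherwise the image must not be loaded."""
    with open(public_key_pem_path, "rb") as f:
        public_key = serialization.load_pem_public_key(f.read())
    with open(image_path, "rb") as f:
        image = f.read()
    with open(signature_path, "rb") as f:
        signature = f.read()
    try:
        public_key.verify(signature, image, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False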

Please stay tuned for a blog post by Praveen Jain (ACI Engineering VP), which will cover APIC and fabric security in more detail in the coming weeks.

Praveen Jain’s recent blogs:
New Innovations for L4-7 Network Services Integration with Cisco’s ACI Approach

Micro-segmentation: Enhancing Security and Operational Simplicity with Cisco ACI

Additional Information:
The Cisco Application Policy Infrastructure Controller 

 



An Overview of Network Security Considerations for Cisco ACI Deployments

Security continues to be top of mind with our customers and frequently comes up with those who are evaluating new architectures. I have been in the networking industry for over two decades, involved in multi-billion-dollar product lines like the Catalyst 5K and 6K, MDS 9000, UCS, and now Application Centric Infrastructure (ACI). I don’t claim to be a security expert by any means, but I have gained good insight into what’s important from numerous conversations with customers over the years, which allows me to write about it with some degree of authority.

That said, security is a very broad topic and there are myriad products in the industry to deal with the various types of attacks that infrastructure and applications are exposed to today. For purposes of this blog, I mostly wanted to focus on the network security aspects and how they intersect with Cisco ACI.

Accordingly, I will touch upon aspects of the four pillars as shown in the image below.

(Image: the four pillars of ACI network security)

  1. Isolation-based Security

Isolation is one of the most fundamental building blocks of security. Virtual Routing and Forwarding (VRF) instances, a.k.a. tenants, allow private address space, while the ACI fabric guarantees separation of traffic among these VRFs. Similarly, Bridge Domains (BDs) allow separation of broadcast traffic. The ACI fabric places emphasis on optimization and efficiency: it does not flood unknown packets, instead forwarding them based on its station table. However, “flood mode” is still preserved as an option for legacy needs.

The concept of tenants also allows for shared services in a very elegant manner. Let’s assume a hosting provider wants to provide DNS and Active Directory services for its tenants. The ACI architecture allows these infrastructure services to be hosted in a separate tenant – say, “service hoster” – and made available to selected tenants through explicit ‘contracts’. This is achieved by leaking routes across VRFs along with contracts/filters. The ACI architecture is designed to maintain strict isolation in all these scenarios.

  2. Stateless Contracts

The concept of stateless contracts is made possible by the ACI architecture, which abstracts policy definition from the state of the underlying infrastructure. This not only helps scale the system considerably, but also reduces the operational complexity and manual errors that have manifested themselves in the definition and enforcement of traditional Access Control Lists (ACLs).

Anyone who has been involved in networks and security over the last decade or so has been exposed to Access Control Lists (ACLs) with the standard <5 tuple, flag> match on packet headers. Depending on the subnetting design, we either use /32 entries, causing an explosion of ACLs, or use prefixes. With workload mobility, the traditional subnet boundaries are rapidly dissolving.

The concept of ACI ‘contracts’ addresses both of these issues by defining application tiers, a.k.a. End-point Groups (EPGs), and filters among them. In other words, a TCAM entry like <src epg, dst epg, proto, src port, dst port, flags> allows full portability and mobility of the IP address anywhere, while substantially reducing TCAM space and management complexity.

Further, EPG filters prevent an explosion of Access Control List (ACL) entries, drastically simplifying the operational aspects of security definition and governance. EPGs/micro-segments, as well as attribute-based security, are covered in the blog posted by Shashi Kiran and me here.
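
To make the contract-and-filter abstraction concrete, here is a hedged sketch of the kind of payload one might post to the APIC REST API to define a “TCP 443 only” filter and a contract that references it. The tenant, object names, and port are placeholders, the payload is trimmed to the essentials, and authentication is assumed to have been handled on the session already.

import requests

APIC = "https://apic.example.com"   # placeholder; reuse an authenticated session in practice

contract_payload = {
    "fvTenant": {
        "attributes": {"name": "Demo-Tenant"},
        "children": [
            # A filter describing "TCP to port 443".
            {"vzFilter": {
                "attributes": {"name": "https"},
                "children": [{"vzEntry": {"attributes": {
                    "name": "tcp-443", "etherT": "ip", "prot": "tcp",
                    "dFromPort": "443", "dToPort": "443"}}}]
            }},
            # A contract whose subject references that filter; a web EPG would
            # provide this contract and an app EPG would consume it.
            {"vzBrCP": {
                "attributes": {"name": "web-to-app"},
                "children": [{"vzSubj": {
                    "attributes": {"name": "https-only"},
                    "children": [{"vzRsSubjFiltAtt": {
                        "attributes": {"tnVzFilterName": "https"}}}]
                }}]
            }}
        ]
    }
}

requests.post(f"{APIC}/api/mo/uni.json", json=contract_payload, verify=False)

Because the policy is expressed against EPGs rather than IP addresses, the same contract keeps working as workloads move, without regenerating address-based ACLs.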

  3. Stateful Firewalls

Stateful firewalls are a very well-known concept. Cisco ASA and many of the L4-7 partners in the ACI ecosystem have exceptional products in this space. In addition, a lighter version is available for VMware ESX using the Cisco Application Virtual Switch (AVS) and for Hyper-V through OpFlex extensions.

  4. Deep Packet Inspection and Analysis

I will not dwell on this. Deep Packet Inspection and Analysis is again a very mature security area. Cisco’s Sourcefire product line as well as those of other L4-7 partners are very well suited to this space.


It is important to note that each of the pillars described above contributes to maintaining and continually strengthening the overall security posture. The architecture ensures that unintended and malicious traffic is filtered out at every level, so each subsequent pillar has less traffic to process. With ACI, isolation and stateless contracts are applied in hardware at line rate. This enhances performance considerably and is as good as it gets in the industry today.

The role of the Application Policy Infrastructure Controller (APIC)

The APIC automates the processes across all four pillars described above. As the application life cycle changes, the security policies are automatically added, modified or removed in all four areas: isolation, stateless contracts, stateful firewalls and deep packet inspection/analysis. Our customers can customize and leverage all of these together for the most optimal security deployment. APIC vastly simplifies and automates these operations.

Finally, I believe that breaking security is harder at the infrastructure layer than closer to the application, where vulnerabilities can quickly amplify. I would therefore recommend not relying solely on network security implemented inside a guest VM or compute node using Linux iptables/conntrack, the Windows firewall, or other means; any intruder gaining unauthorized access to the guest VM or compute node could compromise it. These methods can, however, be supplemented with the infrastructure network security described in this blog.

 

 



Is your data giving you the costly cold shoulder?

2nd Guest Blog by Ron Graham

Ron Graham has served as a Data Center Architect and Systems Engineer for some of the largest IT companies in the U.S., including Cisco Systems, NetApp, Sun Microsystems, and Oracle. He is currently working for Cisco Systems as a Big Data Analytics Engineer.


 

What I mean is: is your data not being used that much, or is its temperature going from hot to cold? Hot data is used a lot; cold data is used sparingly. I think everyone runs into this problem at some point, storing cold or frozen data on high-performance compute resources. Does it make sense to move unused data to an archive directory as long as it is still in the same cluster and can still be accessed? In the majority of cases it does.

We have hot data and cold data, so what about warm data? Warm data gives off a moderate degree of heat: it is used less frequently than hot data and more frequently than cold. Take a look at the graph below; I interpolated it based on a tech posting from eBay and interviews with a former Disney admin.

On the business side, my analysis showed a 15.9% CapEx savings for a 1 petabyte (PB) Hadoop cluster with a hot-to-cold storage ratio of 4:1, meaning that 80% of my data sits on high-performance storage platforms and 20% on storage-optimized platforms.

(Graph: data temperature over time, interpolated as described above)

As data becomes the new oil, Cisco and Hortonworks are working together to handle data of all temperatures efficiently and cost-effectively. Having the right storage infrastructure and management is imperative as unprecedented amounts of unstructured data flow in from sources such as email, file services (video and audio), wearable medical technology, log files, appliances, and thousands of different sensors.

Hortonworks Data Platform supports storage tiers with the ability to move data between tiers using placement policies. Cisco and Hortonworks are working on an integrated solution to identify data temperature and automate its movement across storage tiers. For now, I am thinking there is a python script in my near future.
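
In the spirit of that “python script in my near future,” here is a hedged sketch that tags an archive directory with HDFS’s built-in COLD storage policy and asks the HDFS mover to relocate its blocks. The path is a placeholder, and this uses standard Apache Hadoop tooling rather than the Cisco/Hortonworks integration described above.

import subprocess

ARCHIVE_DIR = "/data/archive"   # placeholder path holding cold data

def run(cmd):
    """Run an HDFS CLI command and echo it for auditing."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Tag the directory so its blocks are eligible for ARCHIVE storage.
run(["hdfs", "storagepolicies", "-setStoragePolicy",
     "-path", ARCHIVE_DIR, "-policy", "COLD"])

# Migrate blocks that violate the new policy onto archive-class disks.
run(["hdfs", "mover", "-p", ARCHIVE_DIR])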

Is Data Really the New Oil?

Cisco and Hortonworks are already building tomorrow’s infrastructure

In a world that creates 2.5 quintillion bytes of data every year, how can organizations take advantage of unprecedented amounts of data? Is data becoming the largest untapped asset? What architectures do companies need to put in place to deliver new business insights while reducing storage and maintenance costs?

Cisco and Hortonworks have been great partners, and together we offer operational flexibility, innovation and simplicity when it comes to deploying and managing Hadoop clusters. UCS Director Express for Big Data provides a single-touch solution that automates deployment of Apache Hadoop and gives a single management pane across both the physical infrastructure and the Hadoop software. It is integrated with the major open-source distributions to ensure consistent and repeatable Hadoop UCS cluster configuration, relieving customers of manual operations.

Today, a number of enterprises have deployed HDP and Cisco UCS, not only to deploy big data solutions faster and lower total cost of ownership, but also to extract new and powerful business insights. For instance, a leading North American insurance company enabled its analysts to run 10 to 15 times the number of models they could run before, leveraging 10 billion miles of data on customers’ driving habits. It used to take 14 days to process queries, which meant that, by the time they obtained insights, the information was old and potentially inaccurate. Their existing system also limited the amount of data it could support and their ability to blend their data with external data, and it was expensive to operate. The combination of HDP on Cisco UCS now gives them the flexibility to merge all types of data sources, with no limitations on the amount of data they can analyze, ultimately improving customer relationships and driving revenue.

These transformations are not limited to insurance. In every industry, from healthcare to retail to telecommunications, big data allows companies to create business intelligence they could never have dreamed of and to dramatically change the way they do business. Are you ready to leverage the new oil too?

Next Steps

– Meet us in person at Strata in NYC:

  • Cisco: booth #425| Hortonworks: booth #409

– Learn more about our joint reference architecture.

– Check out our tutorial.