Making IP Video Better Than Broadcast

– August 31, 2017

As we wrap up the summer vacation season and settle back into work and school, I can’t help but reflect on my own summer vacation and share a few observations. My family and I had the chance to travel, and while we were on the road, away from home, I told my wife that, in this bold new world, we could watch the Red Sox wherever we were. However, every night around 7 pm, we’d tune into the game and we’d wait. Sometimes the video wouldn’t start. Other times it would pause … buffering. And sometimes the video resolution would drop so low that you couldn’t tell what was happening. Consumers, myself included, have come to expect a high standard for video experiences. This raises the question: what if we could make the IP video experience better than the broadcast experience?

Consumer Demand:

It should come as no surprise to anyone with a video subscription and a connected device that video is shifting to IP and mobile. According to Cisco’s Virtual Networking Index, 82% of IP traffic will be driven by video by 2020. Consider this – those same bright minds also believe that IP Video Traffic will surpass Broadcast by 2022.

Here’s a big secret: video providers who can take advantage of IP protocols to capture this market transition and meet the growing demand are going to win the next generation of video. Why? Because the present generation of technologies is designed to operate on a lowest-common-denominator basis. Operators who recognize the importance of starting with the highest-quality experience will be better positioned to address consumer demand, scale appropriately, control operational costs, and deliver a great IP video experience to customers.

I’ll let you in on another secret: Cisco has a plan and a set of services to do just that.

IP Video Better than Broadcast:

For us, making IP better than broadcast means delivering consistent, great looking video on any device, on any type of network. It is about the quality of experience; buffering and low quality video have no place in our vision. It means that consumers can access their Live, VOD, and Recorded content on the go, with a seamless experience across all screens. And for the Service Provider, it means ensuring that their subscribers consistently have a positive experience with the Service Provider’s brand. For our friends in advertising, it means delivering targeted ads to the consumers who are most likely to buy their products.

Managing Scale, Cost and Quality

Cisco is committed to helping our customers capture the transition to the next phase of IP video. Making IP video better than broadcast requires a scalable, future-proof infrastructure, complete with micro-services, and all of the bells and whistles to enable ease of operation and service velocity. It means keeping costs down by leveraging an IP converged video core, pushing decisions to the edge, simplifying the network, and ultimately reducing operational costs. It is also important to take into account the current assets and investments that Service Providers have made. With our rich history in networking, and ability to closely collaborate with our customers and understand their needs, Cisco is uniquely positioned to help migrate Service Providers from traditional broadcast to a hybrid IP or all IP infrastructure.

How do we do it? Through a more intuitive, smarter network.

  • Converged IP Video Core Architecture: By pushing decisions to the edge of the network, Service Providers are able to improve feature velocity, reduce operational costs, and deliver more personalized experiences to the consumer.
  • Intelligent Edge: Format decisions for the subscriber device can be made at the edge of the network, freeing up bandwidth.
  • Cloud Recording: The cloud takes on an even bigger role with cloud recording, developed with cloud-native micro-services to increase feature velocity, improve scale, and reduce operational costs. But wait, there’s more. The cloud brings greater operational intelligence, analytics, and enhanced automation into the mix.
  • Smart Streaming: Cisco offers a toolkit of technologies that improve the streaming experience such as low latency, SVQ generation, chunked packaging, and fast channel change on top of point products, both at the edge and the core of the network, to optimize and balance the quality of experience with the bandwidth available.

The Next Generation of Video

The transition towards IP and ultimately making the IP video experience better than broadcast is a journey that we’re all invested in. Like other generational transformations in the industry, the companies who proactively plan will win. The challenge is great, but we are invested and eager to help our customers on this journey. This is the path to IP video that is better than broadcast, and we are well on our way. Play on.

Fabio Souza contributed to this blog. For more details about Cisco’s Infinite Video Platform, click here.



Back to Basics: Worm Defense in the Ransomware Age

This post was authored by Edmund Brumaghin

“Those who cannot remember the past are condemned to repeat it.” – George Santayana

The Prequel

In March 2017, Microsoft released a security update for various versions of Windows that addressed a remote code execution vulnerability in a protocol called SMBv1 (MS17-010). Because this vulnerability could allow a remote attacker to completely compromise an affected system, it was rated “Critical,” and organizations were advised to implement the security update. Additionally, Microsoft released workaround guidance for removing this vulnerability in environments that were unable to apply the security update directly. At the same time, Cisco released coverage to ensure that customers remained protected.
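As a rough illustration of what that workaround looked like (consult Microsoft’s MS17-010 guidance for the authoritative steps for your specific Windows version), the SMBv1 server component could be disabled like so:

```powershell
# Sketch of Microsoft's published workaround for systems that could not
# take the MS17-010 update: disable the SMBv1 server component.
# Cmdlet availability varies by Windows version.

# Windows 8.1 / Server 2012 R2 and later:
Set-SmbServerConfiguration -EnableSMB1Protocol $false -Force

# Older versions: set the registry value, then reboot:
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" `
  -Name SMB1 -Type DWORD -Value 0 -Force
```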

The following month, April 2017, a group publishing under the moniker “TheShadowBrokers” publicly released several exploits on the internet. These exploits targeted various vulnerabilities, including those addressed by MS17-010 a month earlier. As is always the case when new exploit code is released into the wild, it became a focus of research for both the information security industry and cybercriminals. While the good guys take the information and use it for the greater good by improving security, cybercriminals also take the code and attempt to find ways to leverage it to achieve their objectives, whether that be financial gain, disruption, or something else.



VXLAN Innovations on the Nexus OS: Part 1 of 2

– August 31, 2017

Posting this blog on behalf of Babi Seal, Senior Manager, Product Management, INSBU and Lukas Krattiger, Principal Engineer, INSBU

Virtual Extensible LAN, or VXLAN for short, has been around since 2011 as an enabling technology for scaling and securing large cloud data centers. Cisco was one of VXLAN’s lead innovators and proponents, and has demonstrated it with a continual stream of new features and functionality. This momentum continues with our announcement of the newest Nexus OS release, NX-OS 7.0(3)I7(1), also known as the “Greensboro” release, available for the Nexus 3000 and 9000 families of switches. This release is jam-packed with NX-OS innovations in the areas of security, routing, and network management, to name only a few.

This series of blogs will highlight some exciting new VXLAN-related features shipping as part of the Greensboro release. In this blog, we’ll look closely at three individual features: Tenant Routed Multicast, Centralized Route Leaking support, and Policy-Based Routing with VXLAN. In the next blog, we’ll take a closer look at VXLAN Ethernet VPN (EVPN) Multi-Site support.

Tenant Routed Multicast (TRM)

This feature brings the efficiency of multicast delivery to VXLAN overlays. It is based on the standards-based next-generation multicast control plane (ngMVPN) described in IETF RFCs 6513 and 6514. TRM enables the delivery of customer Layer-3 multicast traffic in a multi-tenant fabric, in an efficient and resilient manner. The delivery of TRM fulfills a promise we made years ago to improve Layer-3 overlay multicast functionality in our networks. The availability of TRM leapfrogs multicast forwarding in standards-based data center fabrics using VXLAN BGP EVPN.

While BGP EVPN provides the control plane for unicast routing, ngMVPN provides scalable multicast routing functionality. TRM follows an “always route” approach: every edge device (VTEP) with a distributed IP Anycast Gateway for unicast becomes a Designated Router for multicast. Bridged multicast forwarding is present only on the edge devices (VTEPs), where IGMP snooping optimizes multicast forwarding to interested receivers. All other multicast traffic beyond local delivery is efficiently routed.

With TRM enabled, multicast forwarding in the underlay is leveraged to replicate VXLAN-encapsulated routed multicast traffic. A Default Multicast Distribution Tree (Default-MDT) is built per VRF, in addition to the existing multicast groups for Layer-2 VNI broadcast, unknown unicast, and Layer-2 multicast replication. The individual multicast group addresses in the overlay are mapped to the respective underlay multicast addresses for replication and transport. The advantage of the BGP-based approach is that TRM can operate as a fully distributed overlay Rendezvous Point (RP), with RP presence on every edge device (VTEP).

A multicast-enabled data center fabric is typically part of an overall multicast network. Multicast sources, receivers, and even the multicast Rendezvous Point might reside inside the data center, but might also be inside the campus or reachable externally via the WAN. TRM allows seamless integration with existing multicast networks and can leverage multicast Rendezvous Points external to the fabric. Furthermore, TRM allows for tenant-aware external connectivity using Layer-3 physical interfaces or subinterfaces.

TRM builds on the Cisco Cloud Scale ASIC-enabled Nexus 9000-EX/FX Series switches, which are capable of VXLAN-encapsulated multicast routing. Nevertheless, the solution is backward compatible with earlier generations of Nexus 9000 Series switches. It provides Distributed Anchor Designated Router (Anchor-DR) functionality to translate between TRM-capable and non-TRM-capable edge devices (VTEPs). In this co-existence mode, multicast traffic is partially routed (on the TRM-capable devices) but primarily bridged, and one or more of the TRM-capable edge devices performs the necessary gateway function between the two worlds. Notably, this co-existence can also extend to the Nexus 7000 family of switches.
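As a rough, illustrative NX-OS sketch of the TRM building blocks described above (the VRF name, VNI, group address, and AS number are invented, and exact syntax varies by release), the per-tenant pieces look something like this:

```
feature ngmvpn                       ! next-gen MVPN control plane for TRM

vrf context Tenant-A
  vni 50001                          ! Layer-3 VNI for the tenant
  rd auto
  address-family ipv4 unicast
    route-target both auto
    route-target both auto mvpn      ! carry ngMVPN routes for this VRF
    route-target both auto evpn

interface nve1
  member vni 50001 associate-vrf
    mcast-group 239.1.1.1            ! per-VRF Default-MDT group in the underlay

router bgp 65001
  address-family ipv4 mvpn           ! ngMVPN address family alongside EVPN
```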

Centralized Route Leaking

Segmentation is a prime use case for VXLAN-based data center fabrics, and requirements such as common Internet access or shared services exist not only in the WAN. Multi-Protocol BGP enables safe route leaking between Virtual Routing and Forwarding (VRF) instances by defining Route-Target policies for import and export, respectively. Centralized Route Leaking brings this well-known function, and the related use cases, to VXLAN BGP EVPN.

Centralized Route Leaking enables customers to leak routes at one centralized point in the fabric, typically the border leaf, which reduces the potential for introducing routing loops. Route leaking leverages route-targets to control the import and export of routes. To attract traffic traversing VRFs to the centralized location, we introduce default routes or less-specific subnet routes/aggregates on the leaf switches.

For the “Shared Internet Access” or “Shared Services VRF” use case, we allow the exchange of BGP routing information from many VRFs to a single “Internet” VRF. In this case, the “Internet” VRF can be either a named VRF or the pre-defined “default” VRF. Although the pre-defined “default” VRF has no route-targets, Centralized Route Leaking incorporates the ability to leak routes from and to the “default” VRF. While we have highlighted the one-to-many and many-to-one possibilities, Centralized Route Leaking also provides the same function in a one-to-one manner, where one VRF must communicate with another VRF.

All of these use cases share a commonality: the exchange of routing information between VRFs. Because routing tables can grow, Centralized Route Leaking supports prefix-count limits as well as import and export filters. Notably, Centralized Route Leaking is a drop-in, or “on-a-stick,” feature: all of your VTEPs can remain at their existing hardware and software levels, and only the leaking point must support the Centralized Route Leaking feature.
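As a minimal sketch (the VRF names and route-target values are invented), the import/export policies for a shared-services style leak, configured only on the centralized leaking point such as a border leaf, might look like:

```
vrf context Tenant-A
  address-family ipv4 unicast
    route-target export 65001:111    ! expose tenant routes to the shared VRF
    route-target import 65001:999    ! pull in routes exported by the shared VRF

vrf context Shared-Services
  address-family ipv4 unicast
    route-target export 65001:999
    route-target import 65001:111
```

A default route (or less-specific aggregate) advertised into the tenant VRFs then attracts cross-VRF traffic toward this centralized point.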

Policy-Based Routing with VXLAN

Cisco brought routing to VXLAN years ago and extended its capability with a BGP EVPN control plane. Beyond traditional routing, there have always been use cases that require additional classification for forwarding decisions. While in routing the destination IP network and longest-prefix match remain the main criteria for forwarding, more sophisticated routing decisions are sometimes necessary. Policy-Based Routing (PBR) is an approach that manipulates forwarding by overruling the IP routing table: it matches on a 5-tuple and uses an adjacent next-hop for its decision.

VXLAN-enabled Policy-Based Routing allows operators to leverage the traditional PBR functions while the next-hop now exists behind a VXLAN adjacency. With this approach, routing decisions can be influenced to forward across a VXLAN BGP EVPN fabric. Redirecting specific traffic to a firewall without VLAN or VRF stitching is just one example of the use cases this enables.
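A minimal, hypothetical NX-OS sketch of the classic PBR pieces (ACL, route-map, interface policy) follows; addresses and names are invented, and on a VXLAN fabric the next-hop below would be reachable behind a VXLAN adjacency:

```
feature pbr

ip access-list INSPECT-WEB
  10 permit tcp any any eq 80        ! classify web traffic by 5-tuple fields

route-map REDIRECT-TO-FW permit 10
  match ip address INSPECT-WEB
  set ip next-hop 10.10.10.254       ! firewall next-hop, overriding the routing table

interface Vlan100
  vrf member Tenant-A
  ip policy route-map REDIRECT-TO-FW ! apply the policy to ingress traffic
```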

With the added support of Policy-Based Routing for VXLAN, the latest advancement in a rich history of Cisco innovations extends across the data center fabric.


Stay tuned! In our next blog, we’ll examine the features and benefits of hierarchical VXLAN BGP EVPN-based data center fabrics, which allow not only scaling and fault containment within a data center, but also enhanced scalability, fault-domain isolation, improved administrative controls, and plug-and-play extensibility.






Real-Time Optimization Ensures Workload Performance

– August 31, 2017

Everywhere you look, the industry is talking about applications. They generate revenue and run your business. Assuring your applications’ performance requires stable data center resources allocated in just the right size to meet demand.

How does your business allocate resources? Most overprovision for peak demand, which leads to higher capex and idle resources once demand returns to everyday levels. If this overprovisioned workload lives in the cloud, you can see some startlingly high bills.

Allocating just the right amount of resources for workloads at the right time is a complex problem. Cloud infrastructure, containers, microservices, and public cloud services have driven up both the number of workloads and the number of devices that need to be monitored and managed. It has become too much for humans to handle. A recent Storage Switzerland study indicated that an environment with 3,000 virtual machines would need to make 300 changes per day. That’s one change roughly every 5 minutes!

New solutions are available that solve this problem with advanced analytics and automation.  These solutions free humans from these complex decisions and let software do what it does best: manage these decisions in real-time. Watch this entertaining video to explore further.

Cisco Workload Optimization Manager delivers a real-time decision engine that automatically adjusts workload placement and resource allocations in response to changes in demand.  Your organization benefits from higher efficiency across your data center stack.  It assures performance while minimizing costs.  And it does this for any workload, on any platform, at any time.

The latest release of Workload Optimization Manager takes this one step further by integrating with Cisco UCS Director and CloudCenter to deliver true elastic infrastructure at scale. When infrastructure capacity is insufficient to meet demand or house a new project, the solution leverages UCS Director’s workflows to turn up a blade, rack server, or data store. It also decommissions idle resources and resizes data stores automatically. Not sure there is adequate capacity to deploy your application? CloudCenter integration automates this verification process, preventing applications from being deployed into underpowered instances.

When you move a workload to the cloud, do you typically move to the next instance size up? You’re not alone. As we already mentioned, on-premises workloads are overprovisioned to meet peak demand. Moving to the next instance size simply duplicates that overprovisioning and results in higher bills.

Protect your cloud budget with Workload Optimization Manager’s built-in modeling, which ensures the right-size instance for your workload. As shown below, the modeling capability delivers an understanding of your costs before you migrate.


How do you ensure the performance of your workloads on premises or in the cloud? Download Workload Optimization Manager and experience the power of software to manage your workload performance.



FDA announces first-ever recall of a medical device due to cyber risk

– August 30, 2017

This week, the FDA took the unprecedented step of recalling a medical device – a pacemaker – because it was found to be vulnerable to cyber threats. The recall arose from an investigation by the FDA in February that highlighted a number of areas of non-compliance. While there are no known reports of patient harm related to the implanted devices affected by the recall, the step was taken as a preventative measure. A firmware update has been developed (and approved by the FDA) that can be applied during a patient visit with their healthcare provider.

Medical device vulnerabilities have been on the FDA’s radar for some time. In July 2015, the FDA issued an Alert highlighting cyber risks related to infusion pumps. Then, at the end of 2016, it issued what it called “guidance” on the post-market management of cybersecurity for medical devices. But aside from market pressure, there was no enforcement mechanism for any of these alerts and statements. To make matters worse, a recent study revealed that only 51 percent of medical device manufacturers and 44 percent of healthcare organizations currently follow the FDA guidance to reduce or mitigate device security risks. Many thought leaders in the healthcare security space have been pushing for greater governance of medical devices as more and more security vulnerabilities and back doors to these devices have been discovered.

While “homicide by medical device” may seem like a far-fetched Hollywood-esque scenario right now, it’s not completely out of the realm of possibility. “The potential for immediate patient harm arising from hackers gaining control of a pacemaker is obvious, even if the ability to do so on a mass scale is theoretical,” Fussa pointed out. “For example, imagine a ransomware attack that threatens to turn off pacemakers unless a bitcoin ransom is paid. In this week’s recall alone, 465,000 devices are affected. An attack of this type would pose an immediate risk to all of these patients and would likely overwhelm the ability to respond.”

While it’s good news that the FDA is acting to protect patients from harm due to cyberattack, connected devices continue to pose a threat to both patients and facilities. There’s been no shortage of press on the subject, and most healthcare executives are keenly aware of the problem. However, very few have an effective or scalable solution.

Many hospital systems have in excess of 350,000 medical devices, before you even start to count the implantable ones that leave with patients. Most of these devices were never designed with security in mind, and many have multiple ways in which they can be compromised by a hacker. The fact that we are not aware of any reported patient deaths yet is a good thing, but the industry has a very short window to secure its medical device arsenal before hospitals and patients get held to ransom. Health systems need to be looking at segmentation as a compensating security control to prevent attacks, until the medical device industry catches up.
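As one simplified illustration of that compensating control (addresses, names, and ports are invented; real deployments would lean on purpose-built tooling such as Cisco’s IoT Threat Defense), biomedical devices can be placed in their own VLAN with an ACL that permits only the flows they actually need:

```
vlan 310
  name BIOMED-DEVICES

ip access-list BIOMED-PROTECT
  10 permit tcp 10.31.0.0/16 host 10.9.9.10 eq 443   ! devices to their management server only
  20 deny ip 10.31.0.0/16 any                        ! block everything else leaving the segment

interface Vlan310
  ip address 10.31.0.1/16
  ip access-group BIOMED-PROTECT in
```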

Do you have a plan in place to secure your facility’s medical devices? Are you able to segment and isolate traffic to them?

Do you have visibility into who and what is communicating with your biomed systems and do you have ransomware protection?

Having specific answers to these questions will be key to a strong, ongoing defense against attacks.

For more information on cybersecurity solutions, get the details on Cisco’s Digital Network Architecture for Healthcare and IoT Threat Defense for network-connected devices.



Customer-led Innovation: Digital Cybersecurity Feed

– August 30, 2017

How customer feedback is leading Cisco to develop better experiences.

In digital marketing, it can be easy to let our hunches and personal preferences pave the way.  But as Cisco Digital Marketing’s New Experiences team has discovered, talking to customers can unveil some pretty amazing things – from finding out what they love, to uncovering pain points, or even to unlocking new innovation paths and dreaming up new tools.

The New Experiences team constantly checks in with customers to ask ‘why’ and to see where customer feedback can lead them. It’s actually how the team’s latest project, the Cybersecurity Newsfeed, a third-party content aggregator, got started, built, redesigned, and launched.

Time and time again as the team spoke with customers, a common theme rose to the surface. Customers were frustrated that they needed to monitor several sites and feeds constantly in order to do their buying research and to stay up-to-date on the latest security trends, news, and alerts.

This common theme got the team wondering: was there a way to address this customer frustration head-on? This customer-first question kicked off an exciting design thinking project that led to the creation of a brand-new website tool, the Cybersecurity News Feed, which pulls in content from third-party and Cisco resources, and allows users to filter, share, and even bookmark their favorite articles for later.

The new Cybersecurity News Feed

Read on to meet our team and learn more about the customer-focused design thinking process that guides them, and be sure to check out the new Cybersecurity News Feed at



Lauren Wright is the Customer Researcher on the New Experiences Team. Here, she discusses design thinking, why she loves talking to customers, and how these conversations help the team to unlock innovation.

1. What is design thinking, and why does your team use it?

To me, Tim Brown from IDEO says it best, “Design Thinking is a human-centered approach to innovation that draws from the designer’s toolkit to integrate the needs of people, the possibilities of technology, and the requirements for business success.” There are five essential steps to Design Thinking: Empathize, Define, Ideate, Prototype, Test.  On our team, we like to add that, in addition to it being an innovation methodology, it requires a change in mindset too.

2. Why do you find talking to customers so valuable?

Building empathy is the first step in Design Thinking and, in my opinion, the most important one. In order to build empathy, you need to talk and interact with people, to get to know them and their individual perspectives. I absolutely love talking to our customers because I learn so much from them. Talking to them helps me understand their points of view and who they are as people, as well as their pain points and pleasing moments (with Cisco and with their jobs). This knowledge and deep understanding is valuable and essential in creating improvements to the customer experience.

The New Experiences Team talking to customers in the Customer Experience Lab at Cisco Live Melbourne.

3. What did you hear from customers that caused you to uncover the need for this type of tool?

When I spoke to customers about their day-to-day work and what they do when they’re considering making a purchase, they told me that they spend a lot of time researching. These purchases are important to their jobs, and they want to make sure they’re making the right decision. These decisions tend to be complicated, with a lot of moving pieces and parts, and more in-depth knowledge is often required. In addition to doing research when making a purchase, our customers need to stay up-to-date on what’s going on in technology. Because time is the scarce resource that it is, and our customers told us they spend a lot of it on research and staying current on trends, we saw a need to create a tool to help customers do their research and stay up-to-date while saving time – and the Cybersecurity Newsfeed was born!

4. What types of research techniques do you use to get feedback on prototypes along the way?

I primarily use in-depth interviews for getting to know our customers.  Once we have built a prototype, I’ll loop back with them to show them the prototype and do more of a usability study (with some follow-up questions).  In the future, I would like to do more ethnographic research, and our team is exploring cross-tabulation of quantitative data with our qualitative research.

5. Is there a way for customers to reach out and provide feedback to the team?

Yes, please do! If you’re a past, present, or future Cisco customer, we would love to hear from you. You can reach us at



Alexa Michael is the UX Designer on the New Experiences Team. Here, she explains why customer feedback is so important to her process, and how it pushes and informs her work.

1. Why is customer feedback so important to you in your design process?

We need lots of detailed feedback from customers, pre-release and post-release, in the form of qualitative and quantitative testing, so that we can prove or disprove our assumptions about the project.

2. What did you learn throughout this project that you’ll integrate into your design approach next time?

I learned that having customer research support is extremely helpful (shout out to Lauren!), especially as a newcomer on a new project. I also learned a lot from the A/B Testing team about methodology and the importance of rigorous testing in order to deliver the best possible product to customers.

My biggest takeaway from this project is that the key to successful user design comes from an iterative process of listening to customers and tweaking the design approach based on real-time feedback, which is what we did with the Cybersecurity Newsfeed. Next time, we plan to explore a scientific design approach where we start with tons of ideas, eliminate the less-promising ones, and eventually throw our arsenal behind the most credible ideas. Once we have our winning ideas, I’d love to go even further and conduct usability testing to explore various design solutions, examining user behavior closely. These measures will help us pinpoint a variety of user problems and improve our outputs. My personal challenge as a designer will be balancing the need for rigorous testing with the need to move quickly and efficiently.


A/B Testing

Christina Wong is an OmniChannel Manager on the New Experiences Team. Here, she discusses the questions that she sought to answer in her testing approach, the insights that the results unlocked, and whether testing is ever really done.

1. What were the main questions you were trying to answer in your testing approach?

Once the feed was built, we began our testing to see how our customers interacted with it ‘in the wild’. While a feed seemed super useful and interesting to us, we needed to see how it would impact someone’s usual routine on the site. Observing user behavior with the feed in various places on the site led to new learnings regarding user intent, content desirability, and the level of customer engagement with the tool. We also ran multiple tests to discover whether the tool would inspire repeat return visits. Gaining insights from our testing has helped us continuously iterate and improve the feed for our customers. The great thing is that our work isn’t done. There is always more to learn from how our customers naturally interact with the site.

2. What key insights did you discover during A/B testing of the tool?

Before we launched, customers told us that they wanted the Cybersecurity News Feed to have content from third-party resources, not just Cisco-authored content. They told us that they wouldn’t trust the feed if it didn’t feature outside perspectives from the security industry in addition to Cisco thought leadership. Knowing this, we implemented several third-party news sources based on the recommendations of our customers. When the Cybersecurity News Feed went live, it was exciting and validating to see that our highest-engaging content source was one that had been recommended by customers, with Cisco Security Alerts and Twitter posts drawing the next-highest interaction. We also observed that while users engaged with the feed, and its addition drove more return visits to the site overall, users did not re-engage with the tool on their return visits. This was very interesting to see, and we want to continue our testing and customer research to learn more and discover the right placement and design to increase engagement on return visits.

3. Is testing done now that the tool has been launched?

Nope! We will continue to iterate on this feed on both a design and functionality level, whether the feedback comes from customer interviews or A/B testing metrics. It’s just the beginning for the Cybersecurity News Feed and we can’t wait to continuously improve the tool so it’s useful and a must-have resource for our customers!

To learn more and get involved, email us. We’d love to hear from you, and to add your feedback to our design thinking process for current and future projects.





Digital Transformation in the US Public Sector – Think Intuitive

– August 30, 2017

Today’s federal IT shops face a growing need to serve users and constituents, one that can be hard to stay ahead of. The amount of data to gather, process, and consume is overwhelming, and increases each day.

Agencies have to modernize the infrastructure to support and secure data in order to stay ahead of the ongoing demand for government services in real time.

All the while, IT managers have more and more choices at their disposal, thanks to the availability of regulated software as a service (SaaS) and infrastructure as a service (IaaS) offerings, along with automation and orchestration solutions in the marketplace.

[Image: IT leaders have many technology choices.]

These capabilities can now truly accelerate the delivery of mission-critical services and offload much of the burden on operations. Further, they complement existing in-house services, helping IT managers to balance their use of technology across the spectrum of private and public offerings and therefore serve employees and constituents effectively and efficiently today and well into the future.

The transition from rigid, monolithic applications to agile, distributed, and scalable services is well underway. Cisco was patient as this market transition progressed, yet persistent in its focus on the network – the true underpinning of digital transformation. The network is the lifeblood of service delivery; it is the nervous system of the secure, intelligent platform that enables digital transformation.

Cisco accelerated its innovation engine over the last several years in anticipation of this profound change, recently introducing the intuitive network – its strategic direction toward a network that constantly learns, adapts, and protects itself and the systems it supports.

This new network turns intent into policy and automates that policy across all systems. It is powered by context; the network analyzes the intelligence within, to provide insights into users, devices, applications, and threats. It then abstracts purpose from interactions on the system, providing assurance of the intent of those interactions.
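To make the intent-to-policy idea concrete, here is a deliberately toy sketch of translating a declarative intent into enforceable rules. This is not Cisco's implementation; every name and field below is hypothetical:

```python
# Toy illustration of intent-based networking: translate a high-level,
# declarative intent into concrete policy rules. All names are
# hypothetical; real intent-based systems are far richer.

def intent_to_policy(intent):
    """Map a declarative intent to a list of enforceable rules."""
    rules = []
    # Isolated groups get a blanket deny rule.
    for group in intent["isolate"]:
        rules.append({"action": "deny", "src": group, "dst": "any"})
    # Explicitly allowed pairs get permit rules.
    for src, dst in intent["allow"]:
        rules.append({"action": "permit", "src": src, "dst": dst})
    return rules

intent = {
    "isolate": ["guest-devices"],
    "allow": [("hr-apps", "hr-users")],
}

for rule in intent_to_policy(intent):
    print(rule)
```

The point of the sketch is the direction of flow: the operator states *what* should be true, and the system derives and automates the *how* across every device.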


The intuitive network extends across the entire infrastructure and into the application environment by way of Cisco’s Application Centric Infrastructure (ACI). The strength of ACI forms the basis for capturing intelligence and applying analytics.

Cisco recognizes that the immense value in the network must be unleashed through analytics in real time – and it has built such a capability organically with Tetration Analytics and via acquisition with AppDynamics. However, as analytics gains prominence in everyday IT system deployment and operations, the ability to rapidly deploy systems and automate services becomes paramount. Cisco provides several key solutions in this regard:

  • As the sole provider of a fully integrated, hyperconverged infrastructure system, HyperFlex, Cisco recently announced its intent to acquire SpringPath, its exclusive HCI technology partner. HyperFlex combines software-defined networking with computing and storage via its Unified Computing System and the HX Data Platform, enabling the rapid deployment and simplified operations needed for today’s fast IT.
  • Through its acquisition of CliQr a year ago, now offered as Cisco CloudCenter, Cisco provides the on- and off-premises service orchestration needed to manage workloads across multiple physical instances.
  • Cisco Workload Optimization Manager actively manages workload resources across physical and virtual systems, dynamically scaling and allocating compute and storage capacity without manual intervention.

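The list above can be grounded with a minimal, hypothetical sketch of the general idea behind automated workload optimization: scale resources against utilization thresholds without manual intervention. This is not the actual Cisco Workload Optimization Manager logic; the thresholds and function are illustrative assumptions:

```python
# Minimal, hypothetical sketch of threshold-based workload scaling --
# the general pattern behind automated resource optimization, not the
# actual Cisco Workload Optimization Manager logic.

def scale_decision(cpu_utilization, current_instances,
                   high=0.80, low=0.30, min_instances=1):
    """Return the new instance count for a workload."""
    if cpu_utilization > high:
        return current_instances + 1          # scale out under load
    if cpu_utilization < low and current_instances > min_instances:
        return current_instances - 1          # scale in when idle
    return current_instances                  # steady state

print(scale_decision(0.92, 3))  # heavy load -> 4
print(scale_decision(0.10, 3))  # idle -> 2
print(scale_decision(0.55, 3))  # steady -> 3
```

Production systems add hysteresis, cooldown windows, and multi-metric signals, but the core loop is this simple observe-decide-act cycle.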
Each of these solutions interfaces into Cisco UCS Director, an infrastructure management system that allows standard or customized interfaces to deliver maximum flexibility across heterogeneous application environments.

As public sector organizations continue to extend their services footprint beyond premises-based systems, they must consider how IT services will be orchestrated, delivered, and consumed, both locally and remotely. The myriad offerings for workload automation and orchestration, performance management, and security can be overwhelming.

What is lacking in the marketplace is a distinct framework for these capabilities, into which each can seamlessly integrate. Most organizations will find that framework right at home, primed to support the ongoing migration to integrated on- and off-premises services.

Cisco’s intuitive network, analytics, orchestration, and optimization solutions provide such a framework that will enable public sector IT for years to come.



Vulnerability Spotlight: Multiple Gdk-Pixbuf Vulnerabilities

Today, Talos is disclosing two remote code execution vulnerabilities that have been identified in the Gdk-Pixbuf toolkit. This toolkit is used in multiple desktop applications, including Chromium, Firefox, GNOME thumbnailer, VLC, and others. Exploiting these vulnerabilities allows an attacker to gain full control over the victim’s machine. If an attacker builds a specially crafted TIFF or JPEG image and entices the victim to open it, the attacker’s code will be executed with the privileges of the local user.

Read More



Black Holes and Dying Stars: A High Performance Computing Use Case

– August 30, 2017 – 0 Comments

High-performance computing – or HPC – fuels groundbreaking research. This isn’t news to scientists, but most researchers studying fields other than quantum mechanics and molecular modeling probably don’t realize that they can use a supercomputer to ramp up their analysis.

This power, historically reserved for the realm of theoretical science, is now available to all – economists and sociologists, engineers and anthropologists, linguists and historians.

Just in case the prospect of using a supercomputer to solve our problems seems like a Cold War-era dream, we want to show you a few ways that high-performance computing is fueling research in a variety of disciplines today. Cue our high-performance computing blog series.

First up, natural sciences.

Unlike Matthew McConaughey’s character (ahem, Interstellar), most of us have never fallen into a black hole. With the closest one (that we know of) nearly 3,000 light years away, we fortunately won’t get close enough to check one out ourselves.

The good news is that we won’t have to. Using HPC, scientists can simulate what might happen in close proximity to a black hole. They can also simulate possible events that could lead to the creation of a black hole, such as the collapse of a star.

Discoveries about black holes aren’t the only scientific advances made possible by supercomputers. Molecular modeling, climate forecasting, and a variety of other types of research in the natural sciences rely on HPC to crunch the data and power their simulations.
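The workhorse pattern behind these simulations is data parallelism: split a huge problem into independent chunks and compute them concurrently. As a toy illustration only (real astrophysics codes use MPI and GPUs at vastly larger scale), the idea looks like this:

```python
# Toy illustration of the data-parallel pattern behind HPC simulations:
# independent chunks of a problem computed concurrently. Real codes
# run on thousands of nodes via MPI/GPUs; this is the idea in miniature.

from concurrent.futures import ProcessPoolExecutor

def advance_particles(chunk, dt=0.01, g=9.81):
    """Advance a chunk of (position, velocity) pairs one time step."""
    return [(x + v * dt, v - g * dt) for x, v in chunk]

if __name__ == "__main__":
    particles = [(float(i), 0.0) for i in range(1_000)]
    chunks = [particles[i::4] for i in range(4)]  # split work 4 ways
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(advance_particles, chunks))
    print(sum(len(r) for r in results))  # all 1,000 particles advanced
```

A supercomputer applies this same split-compute-combine loop to billions of grid cells or particles per time step, which is what makes black hole and stellar collapse simulations tractable.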

Next up in our high-performance computing use case series? Big data.

Can’t wait for the next post? Learn more about HPC below.

Learn More



Tetration in a Nutshell

– August 30, 2017 – 0 Comments

To learn more about how we’ve used Tetration Analytics — how it’s helped us view, secure, and migrate our networks — please visit the links below and read the case studies in full.

Includes information from:

Cisco Tetration Analytics: Initial Implementation

How Cisco Tetration Analytics Helps Enable Cisco IT Migrate to ACI

Cisco Tetration Analytics product information


